Maven learning curve & overhead for small/medium projects? - c++

What would be (rough estimate, on average, of course) the initial learning and setup curve, and the subsequent overhead, of using Maven for a small-to-medium C++/Eclipse/Linux project?
We are 4 developers at the beginning of the road. We currently have ~20 native Eclipse C++ (CDT) "projects", which we compile interactively. We would like to have an automated checkout & build script.
It seems a bit of an overkill at this stage, but perhaps we should adopt it sooner rather than later, provided that it does not incur too much overhead. We don't have bandwidth for extensive configuration management right now. Thanks a lot!
EDITED / DETAILED:
I realize I haven't described my needs well enough. Having read the references provided below, I see that a CI tool seems like overkill for us at the moment. What I'd like is a build tool that is well integrated with Eclipse on one hand, and allows offline, non-interactive builds on the other. I enjoy the simplicity of working with Eclipse projects: you just add files, add references to internal components and 3rd-party libs as they accumulate, and that's it. You don't need to manually maintain makefiles or the like. The trouble with it, as with MSVS a few years ago when I worked with it, is that it does not give you the option of non-interactive builds. So, does such a tool exist?

First, while Maven has some support for building C++ projects with the maven-native-plugin or, if you are already using Make, with the maven-make-plugin from the c-builds suite, this is not a common use case and these plugins aren't widely used. So while it should be possible, you won't find support and resources easily (just Google a bit or browse the Maven users list to get an idea).
Second, if you add to this that you'll have to learn Maven at the same time, then it seems reasonable to say that you are not taking the easiest path.
So, instead, I'd stick with more traditional tools and/or Ant. For the continuous integration itself, I've seen several references mentioning the use of CruiseControl to build a C++ project. Refer to What continuous integration tool is best for a C++ project? or UsingCruiseControlWithCplusPlus for example. But I guess the principles are transferable to another CI engine (like Hudson, which I find much easier to use than CruiseControl).

Related

Recursive/Nestable build system?

Are there any existing build systems with the following criteria?
Nestable/Recursive. I.e. there is no "top" level build file like in CMake or (non-recursive) make or just about every other build system.
In-source builds. This is required for a build system to be cleanly nestable/recursive.
Automatic dependency scanning for many languages
Configuration files with declarative rather than imperative syntax
Configuration file Syntax supports adding arbitrary custom build rules
No IDE project generation bloat
No showstoppers for cross-platform implementation
Hash based change detection (or at least ~something) better than timestamps
Free software
Basically, I want something that is up to the task of managing software build dependencies system-wide, but is still minimalist and efficient. I want a spiritual successor to make that is adoptable by a majority of the open-source world. What comes the closest?
Tup looks interesting...
http://gittup.org/tup/
and djb redo:
https://github.com/apenwarr/redo/
and shake:
http://hackage.haskell.org/package/shake
Makepp comes close, and I have used it. For some reason it doesn't have much traction, though... Potential downsides:
Implemented in perl rather than a systems programming language, so fewer devs interested in hacking on it?
Not in apt, so harder to take seriously
Inherits syntax from Make; doesn't really enforce ONE correct way of doing things
Slow builds
Subdirectories don't inherit information from parents
The ninja infrastructure looks interesting. It isn't a standalone build system, but maybe it will catalyze someone into writing a very elegant front end. And perhaps CMake supporting it as a back end will speed its adoption.
http://martine.github.com/ninja/

is there an API for GIT (C++ or other languages)

A company asked me to program a Git wrapper for them.
The people there have absolutely no version control experience, but it will eventually be incorporated into their daily routine (through my program).
I'm planning on using VC++ to create a tiny Windows applet that will help people through this process. Any thoughts on that?
What about a daemon process that checks whether people want to commit/push their files?
For almost (but not all!) use cases, libgit2 is the easiest way to interact with Git repositories via code.
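For a flavour of what that looks like, here is a minimal sketch that opens a repository and prints the current branch (assuming libgit2 is installed and linked with -lgit2; the repository path is a placeholder):

    // Minimal sketch: open an existing repository and print the current branch.
    // Assumes libgit2 headers are available and the program links with -lgit2;
    // "/path/to/repo" is a placeholder.
    #include <git2.h>
    #include <cstdio>

    int main() {
        git_libgit2_init();

        git_repository *repo = nullptr;
        if (git_repository_open(&repo, "/path/to/repo") != 0) {
            std::fprintf(stderr, "could not open repository\n");
            git_libgit2_shutdown();
            return 1;
        }

        git_reference *head = nullptr;
        if (git_repository_head(&head, repo) == 0) {
            std::printf("current branch: %s\n", git_reference_shorthand(head));
            git_reference_free(head);
        }

        git_repository_free(repo);
        git_libgit2_shutdown();
        return 0;
    }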
Git already has two layers: the plumbing (which you may be interested in), on top of which is built the primary porcelain, which provides the user interface. If you want to implement something like git-commit but with slightly different semantics, all of the underlying programs like git-write-tree and git-rev-parse are there for you to build on.
See also What does the term "porcelain" mean in Git?
There's already TortoiseGit, among other "friendly" interfaces. Don't re-invent the wheel, start by researching what's already available.
To make the documentation easier to find, here is the link to the official book's chapter on plumbing and porcelain:
https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain

Can i have input for creating a Build Tool? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I'm a student and I want to make a build tool as a side project, because none of the current build tools seem to attract me. There's Ant, but I really dislike looking at XML (I don't know why, but I really don't see the appeal of tags; it puts me off visually and cannot be made neat in my eyes). There's Maven, but I really don't want to be working on a project just to have it fail all of a sudden (this is based on research where people say that Maven sometimes fails to build all of a sudden; this could be total BS, but I'd rather not find out the hard way), plus there's some XML involved. I really liked Make, but it isn't portable, and even though the chances of me using a non-Windows PC are next to nil, I am, unfortunately, a computer science student who has been properly trained to always assume the worst case.
Currently, I am looking into Gradle. I'm still trying to figure it out (I am a really slow learner), but I like the syntax. It might also be worth noting that I am shallow enough not to care about the back-end stuff or any of the advanced features, because I don't really understand dependency management just yet. All I care about is that the configuration syntax or build file looks clean, that it works without requiring internet access, absolute paths, or anything of the sort, and that it works consistently and doesn't take too long. I believe the syntax is important because, like all code, if it looks ugly, you won't want to read it.
I want to make a build tool that is simple, functional, and portable (I'm going to be running it off my external HDD with Java on it). I'd really appreciate any suggestions this community has to offer, such as "It really is better to just stick with Ant or Maven because they already work" or "Be sure to avoid Make's dependency issue". This will just be a side project so I can work on my Java and maybe learn something new, so even if I fail I still might learn something; no need to comment on that.
If you haven't already tried it, try Automated Build Studio. We have used it for a couple of years now and it works fine for us.
Another existing tool you might choose to ignore is SCons. It's written in Python and has some very nice features including, possibly relevant to you, easy extensibility.
And yes, I think you are going to crash and burn. Far better to learn one of the existing tools properly than to reinvent this particular wheel. Use your creative urges and time to write something truly new.
Furthermore, and since you are a student allow me to stand on my soap-box: one of the lessons that new entrants to the IT profession have to learn is to use the tools you are given, not to expect to be allowed the time and money to develop a new one. Don't expect to be able to rock up to work on day one and try to make a case for writing 'MyMake' because GNU Make (or whatever) doesn't float your boat.
Here's just an idea:
Ant is really not such a bad tool. Maybe you could consider building a front end for it that translates a DSL (Domain-Specific Language) of your own design into Ant's XML, and hides the XML from your tool's users?
You may also want to take a look at Rake. Someone recently ranted in his blog about how terrible Maven is (and the debate goes on) and he much, much prefers Rake.
I started one, some 7 years ago. I am still using it. No one else uses it. http://gna.org/projects/maker/
I started on much the same ground as you: multi-platform (Linux, DEC Alpha, and Windows at the time) with a shiny new language: Python. I was getting ideas from Miller's Recursive Make Considered Harmful, in the sense that I had a source file for each executable (likewise for each shared or static library) listing the C/C++ source files and the dependencies on other libraries. The main feature was to generate a makefile on the fly and feed it into gmake (which does a perfect job, provided it has a perfect makefile) to build all the binaries in one call, managing all the dependencies (with gcc's -MD option).
Over time, it evolved mostly into a tool that can use Visual Studio C++ project and solution files to compile on Linux. And I am struggling to keep it up to date with new versions of Visual Studio, and with the new project parameters and properties that my coworkers use.
I wouldn't recommend starting a new tool.
Why would using Maven mean that your build fails "all of a sudden"?
I would advocate always using continuous integration (e.g. CruiseControl for Ant or Hudson for Maven) regardless of the build tool you use. This should eliminate a build failing "all of a sudden".
There's maven, but i really don't want to be working on a project just to have it fail all of a sudden.
What does that mean?
Anyway, if you're into Groovy, I recommend Gant.
Cobbling together your own build tool doesn't sound like a great use of your time, especially when established tools such as ant and maven exist and have such vast user bases that are finding (and fixing) bugs with those tools.
I don't understand your comment about Maven causing build failures all of a sudden.
XML will haunt you everywhere you go. Invest in an excellent XML visualizer / editor, and go to town with Ant anyhow!

Common libraries in a large team

Assume you have five products, and all of them use one or more of the company's internal libraries, written by individual developers.
It sounds simple but in practice, I found it to be very difficult to maintain.
How do you deal with the following scenarios:
A developer unintentionally introduces a bug and breaks everything in production.
Every library has to mature. That means the API needs to evolve, so how do you deploy the updated version to production if every developer needs to update/test their code while they are extremely busy with other projects? Is this a resource and time issue?
Version control, deployment, and usage. Would you store this in one global location, or force each project to use, say, svn:externals to "tie in" a library?
I've found that it is extremely hard to come up with a good strategy. My own pet theory is this:
Each common library has to have a super-thorough set of tests or else it should never be common, even if it means someone else duplicates the effort. Duplicate untested code is better than common untested code (you break only one project).
Each common library has to have a dedicated maintainer (can be offset by a really good test suite in a smaller team).
Each project should check out the version of the library that is known to work with it. This means a developer does not have to get pulled away to update API usage, as the common code gets updated. Which it will be. Every non-trivial piece of code evolves over months and years.
Thank you for your thoughts on this!
You have a competing set of goals here. First, a library of reusable components must be open enough that people from the other projects can easily add to it (or submit components to it). If it's too difficult for them to do that, they'll build their own libraries, and ignore the common one, leading to a lot of duplicate code and wasted effort. On the other hand, you want to control the development of the library enough that you can ensure its quality.
I've been in this position. There's no easy answer. However, there are some heuristics that can help.
Treat the library as an internal project. Release it on regular intervals. Ensure that it has a well-defined release procedure, complete with unit tests and quality assurance. And, most important, release often, so that new submissions to the library show up in the product frequently.
Provide incentives for people to contribute to the library, rather than just making their own internal libraries.
Make it easy for people to contribute to the library, and make the criteria clear-cut and well-defined (e.g., new classes must come with unit tests and documentation).
Put one or two developers in charge of the library, and (IMPORTANT!) allocate time for them to work on it. A library that is treated as an afterthought will quickly become an afterthought.
In short, model the development and maintenance of your internal library after a successful open source library project.
I don't agree with this:
Duplicate untested code is better than common untested code (you break only one project).
If you are all equally likely to create bugs by implementing the same thing, then you'll all have to fix potentially different bugs in each instance of the "duplicate" library.
It also seems that it'd be much faster/cheaper to write the library once and, instead of having multiple other teams write the same thing, have some resources allocated to testing.
Now to solve your actual problem: I'd mimic what we do with real third-party libraries. We use a particular version until we're ready, or compelled to upgrade. I don't upgrade everything just because I can--there has to be a reason.
Once I see that reason (bug fix, new feature, etc.), then I upgrade with the risk that the new library may have new bugs or breaking changes.
So, your library project would continue development as necessary, without impacting individual teams until they were ready to "upgrade".
You could publish releases, or use svn branches/tags to pin versions, to help with all this.
If all teams have access to the bug tracker, they could easily see what known issues exist in the upgrade-candidate before they upgrade, too. Or, you could maintain that list yourself.
@Brian Clapper provides some excellent guidelines for how to run your library as a project in his answer.
I used to work in a similar situation to what you're describing, only my company had dozens of software products. I worked on the team that was responsible for maintaining and upgrading the core set of libraries that everyone else used.
We dealt with those scenarios as follows:
Test the heck out of the core libraries. Maintaining duplicate code is a nightmare. You're not just maintaining the core and one copy. Somewhere in your company's source control there are several copies of the same code. We had dozens of products, so that would have meant dozens of copies. Hunt them down and kill them.
We had a small team of 10-12 developers dedicated to maintaining the core library and its test suites. We were also responsible for fielding calls from the other 1100 developers in the company about how to use the core library, so as you can imagine, we were very busy.
Each other project needs to work with the version of the core library that it is known to work with. You can use version control branches to test new releases of the core library with old products to make sure you don't break code that works. If the core team does a thorough job of testing, this should go very smoothly. The only time this ever got really complicated for us was when the core API changed, or when we flat out screwed something up. Even if you're very confident in your core testing, use branches to test individual products.
I agree - this is difficult. In our small team (consulting .. not a product company - which made it harder), we had one common component that stood out from the others. In this case the recipe for success was:
Make a good developer responsible for developing the component
Make a good developer the gatekeeper for maintaining the component
Make sure all upgrades (there were several) are backward compatible
Make sure there is some basic documentation (or a simple reference application) explaining how the component is to be used
Make sure all developers know that the component exists (!) and where they can find it (along with the code, if they wish to review it)
Give developers the ability to review the code and suggest better implementations or refactoring, but have the final modifications go through an experienced gatekeeper.
When the component was upgraded, older apps did not have to upgrade. If we did a new release, we evaluated whether we wanted to upgrade, and if we did, all we needed to do was swap the libraries; no code needed to change, unless we wanted to use some new features available through the upgrade. Resistance is inevitable, but sometimes it is a good sort of resistance, when it comes from good developers who have better ideas for a new generation or a refactored component.
Treat the development of the libraries like any other product. Each library has its own repository, its own releases and version numbers. The compiled and officially tested versions of the library are also kept in the repository. Document features and changes from version to version.
Then use the libraries like you would using third party libraries. Your product uses only fixed versions of the compiled libraries. Switch to a new version when you really need to and be aware that this involves more testing. Add the versions you use to your version control.
When you find a bug or require a new feature in a library, a new version or sub-version is created. Using a version control system like svn makes this easy. When you need the source code for debugging purposes, export it and include it in your projects, but do not change it there, but fix problems in the libraries' repositories.
This way, every team can contribute to the libraries without endangering the work of the other teams. Switching versions is done deliberately and not by accident.
Create an anti-corruption layer (in the DDD sense) for the existing library. This is nothing but a facade. Then write unit tests for this anti-corruption layer. Now, even if someone upgrades or updates the library, you will know whether something is broken by running the unit tests.
These tests can also serve as documentation of the contract. And not every project that needs to use the library has to write its own anti-corruption layer, if they are using the exact same functionality.
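As a rough sketch of the idea (all names here are hypothetical, invented for illustration rather than taken from any real library):

    // Sketch of an anti-corruption facade over a shared in-house library.
    // "sharedlib::parse_version" is hypothetical and stubbed here so the sketch
    // compiles; in reality it would come from the common library's headers.
    #include <cassert>
    #include <string>

    namespace sharedlib {
    int parse_version(const std::string &s) {              // stand-in for the library call
        return std::stoi(s.substr(0, s.find('.')));
    }
    }

    // The facade: the only point of contact between this project and the library.
    class VersionParser {
    public:
        int majorOf(const std::string &version) const {
            return sharedlib::parse_version(version);
        }
    };

    int main() {
        // Contract test: if a library upgrade changes behaviour, this fails first.
        VersionParser parser;
        assert(parser.majorOf("3.1.4") == 3);
        return 0;
    }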
"Duplication is the root of all evil"
Sounds to me like you need:
An artifact repository like Ivy so you can have the libraries shared and versioned with a distinction between versions that are API stable and ones that are "maturing"
Tests for the libraries
Tests for the projects using them
A continuous integration system so that when an incompatibility or bug is introduced both the project and the original library developer are notified
I think that one shared library is better than 3 duplicate ones (and 1 tested one is definitely better than 3 untested ones). That's because when you find and fix a problem, the whole application area becomes more solid (and development and maintenance become more efficient).
BTW, that's one of the reasons (apart from contributing back to the community) why our company exposes our .NET shared libraries to the public as open source.
Plus, there's less code to write. And you can designate one dev to enforce good development practices on the library and its usages (i.e. through code contracts enforced by the unit tests within library consumers). This improves quality and reduces maintenance costs.
We store shared libraries as binaries in the solution. That comes from the logical requirement that any solution has to be atomic and independent (this rules out svn:externals links).
API compatibility is not an issue at all. Just let your integration server rebuild and retest the whole product stack (while updating all the inner references and propagating changes) and you'll always be sure that all internal APIs are solid. And whoever breaks the API has to either fix it or update the usages.
Duplication is the root of all evil
I would argue that unchecked government is the root of all evil :)
I do get a lot of flak for even suggesting that duplication should be an option. I understand why, but let me complicate this a bit.
Say you have a fairly large library that doesn't actually do anything in particular - it's just a collection of utilities. There are NO tests for this library - at all. You need only one function from it. Say, something that parses out a file's extension.
Pop quiz: do you just write something as small as this in your own project, or do you bite the bullet and use the free-for-all untested set of utilities, which WILL break your application if someone breaks that function?
Also, imagine you are in environment where writing tests is not part of the culture, since most projects are very intense and have a very short development span.
Duplicating large systems - such as client registration - would be dumb beyond belief, of course. However, aren't there cases where it is safer to duplicate something fairly small in your project, if the alternative is not safe enough (no system for maintaining common code)?
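To make "fairly small" concrete, the sort of utility I mean amounts to something like this (a sketch, not the actual library code):

    // Roughly the size of the utility in question: extract a file's extension.
    #include <cassert>
    #include <string>

    std::string extension(const std::string &path) {
        const std::string::size_type slash = path.find_last_of("/\\");
        const std::string::size_type dot = path.rfind('.');
        if (dot == std::string::npos || (slash != std::string::npos && dot < slash))
            return std::string();                  // no extension
        return path.substr(dot + 1);
    }

    int main() {
        assert(extension("report.pdf") == "pdf");
        assert(extension("archive.tar.gz") == "gz");
        assert(extension("no_extension").empty());
        return 0;
    }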
Think of it this way - and this happens all the time - multiple contractors working on different projects, for the same company. They don't even know about each other.
My argument is this:
If a team cannot dedicate to maintaining a solid common codebase, or if the environment does not give them enough time to, it's best to let them work as separate "contractors".
You will STILL need to use large existing systems that simply cannot be duplicated.
Duplicating large systems - such as client registration - would be dumb beyond belief,
That's why those systems publish external interfaces.
If you define a library as shared code between projects: in my experience that's almost always bad. A project should be standalone, and updates to one project should not affect other projects.
Even if you start with libraries, you'll end up duplicating code anyway. Want to hotfix project 1? It was released with library 1.34, so to keep the hotfix as small as possible, you'll go back to library 1.34 and fix it there. But hey, now you've done exactly what the library was supposed to avoid: you've duplicated the code.
Every developer uses Google to find code and copy it into his application. That's probably how they found Stack Overflow in the first place. Imagine what would happen if Stack Overflow published libraries instead of code snippets, and you'll get an idea of the problems that afflict many well-meaning library creators.
Libraries tend to be generic solutions to specific problems. Typically, the generic solution is more complex than the sum of the two specific solutions. This means you need one good programmer to solve a problem that could have been solved by two morons. Sounds like a bad tradeoff to me :D
I would like to point out a problem with the solutions suggested above: treating the library as an internal project with its own versioning scheme.
The problem
If your company has more than one product (let's say two teams, two products: A and B), then each product has its own release schedule. Let's take an example: Team A is working on product A v1.6. Their release is two weeks from now (say Oct 30th). Team B is working on product B v2.4. Their release is 1.5 months from now, Nov 30th. Let's assume both are working against acme-commons-1.2-SNAPSHOT, and both are adding changes to acme-commons as they need them. A couple of days before Oct 30th, Team B introduces a buggy change to acme-commons-1.2-SNAPSHOT. Team A goes into stress mode, since they discover the bug one day before their code freeze.
This scenario shows that treating a common library as a third-party library is almost impossible. The trivial, but problematic, solution is for each team to have its own copy of the version it is about to change. For example, for product A v1.6, Team A will create a branch (and version) of acme-commons named "1.2-A-1.6". Team B will also create a branch in acme-commons called "1.2-B-2.4". Their development will never collide, and they will be stress-free once they have tested their product.
Of course, someone will have to merge their changes back to the original branch (lets say master or 1.2).
The problems I found with this solution is:
Branch inflation - the tree structure will be very puffy and it will be harder to understand the flow of changes/merges.
Merges back to 1.2 will probably never happen - unless a team/developer is dedicated to this library, the chances that Team A or Team B merges their code back to the 1.2 branch are slim. They will always stay focused on their own tasks, thus creating and using their own branch space. Allocating a developer/team is expensive, and thus not always a viable solution.
I'm still trying to figure this one out, so any thoughts on this matter are welcome.

C++ unit testing framework [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
I use the Boost Test framework for my C++ code but there are two problems with it that are probably common to all C++ test frameworks:
There is no way to create automatic test stubs (by extracting public functions from selected classes for example).
You cannot run a single test - you have to run the entire 'suite' of tests (unless you create lots of different test projects I guess).
Does anyone know of a better testing framework or am I forever to be jealous of the test tools available to Java/.NET developers?
I've just pushed my own framework, CATCH, out there. It's still under development but I believe it already surpasses most other frameworks.
Different people have different criteria but I've tried to cover most ground without too many trade-offs.
Take a look at my linked blog entry for a taster. My top five features are:
Header only
Auto registration of function and method based tests
Decomposes standard C++ expressions into LHS and RHS (so you don't need a whole family of assert macros).
Support for nested sections within a function based fixture
Name tests using natural language - function/ method names are generated
It doesn't do generation of stubs - but that's a fairly specialised area. I think Isolator++ is the first tool to truly pull that off. Note that Mocking/ stubbing frameworks are usually independent of unit testing frameworks. CATCH works particularly well with mock objects as test state is not passed around by context.
It also has Objective-C bindings.
[update]
Just happened back across this answer of mine from a few years ago. Thanks for all the great comments!
Obviously Catch has developed a lot in that time. It now has support for BDD-style testing (given/when/then), tags, is now in a single header, and has loads of internal improvements and refinements (e.g. a richer command line, clearer and more expressive output, etc.). A more up-to-date blog post is here.
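For anyone who hasn't seen it, a Catch test looks roughly like this (a minimal sketch, assuming the single catch.hpp header is on your include path):

    // Minimal sketch of a Catch test. CATCH_CONFIG_MAIN asks Catch to supply main().
    #define CATCH_CONFIG_MAIN
    #include "catch.hpp"

    #include <vector>

    TEST_CASE("vector grows when pushed to", "[vector]") {
        std::vector<int> v;                      // re-run fresh for every SECTION below

        SECTION("push_back adds an element") {
            v.push_back(42);
            REQUIRE(v.size() == 1);              // plain expression, decomposed into LHS/RHS on failure
        }

        SECTION("reserve changes capacity but not size") {
            v.reserve(10);
            REQUIRE(v.size() == 0);
            REQUIRE(v.capacity() >= 10);
        }
    }

A single test case or tag can also be selected from the command line, e.g. ./tests "[vector]".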
Take a look at the Google C++ Testing Framework.
It's used by Google for all of their in-house C++ projects, so it must be pretty good.
http://googletesting.blogspot.com/2008/07/announcing-new-google-c-testing.html
http://code.google.com/p/googletest
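For a quick taste, a Google Test case looks roughly like this (a minimal sketch, assuming gtest is installed and linked; add() is just a stand-in for real code under test):

    // Minimal sketch of a Google Test case. Link against gtest
    // (or against gtest_main and drop this main()).
    #include <gtest/gtest.h>

    int add(int a, int b) { return a + b; }   // trivial stand-in for code under test

    TEST(AdditionTest, HandlesPositiveNumbers) {
        EXPECT_EQ(4, add(2, 2));
    }

    int main(int argc, char **argv) {
        ::testing::InitGoogleTest(&argc, argv);
        return RUN_ALL_TESTS();
    }

A single test can then be selected at run time with --gtest_filter=AdditionTest.HandlesPositiveNumbers, which speaks to the "run a single test" complaint in the question.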
Boost.Test does allow you to run a test case by name. Or a test suite. Or several of them.
Boost.Test does NOT insist on implementing main, though it does make it easy to do so.
Boost.Test does NOT have to be used as a library. It has a single-header variant.
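For example, a minimal sketch using the single-header variant (check the Boost documentation for the exact options your version supports):

    // Minimal sketch using Boost.Test's single-header variant: nothing to link.
    #define BOOST_TEST_MODULE MySuite
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_CASE(addition_works)
    {
        BOOST_CHECK_EQUAL(2 + 2, 4);
    }

    // Run only this test case by name:
    //   ./tests --run_test=addition_works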
I just responded to a very similar question. I ended up using Noel Llopis' UnitTest++. I liked it more than boost::test because it didn't insist on implementing the main program of the test harness with a macro - it can plug into whatever executable you create. It does suffer from the same encumbrance of boost::test in that it requires a library to be linked in. I've used CxxTest, and it does come closer than anything else in C++-land to automatically generating tests (though it requires Perl to be part of your build system to do this). C++ just does not provide the reflection hooks that the .NET languages and Java do. The MsTest tools in Visual Studio Team System - Developer's Edition will auto-generate test stubs of unmanaged C++, but the methods have to be exported from a DLL to do this, so it does not work with static libraries. Other test frameworks in the .NET world may have this ability too, but I'm not familiar with any of those. So right now we use UnitTest++ for unmanaged C++ and I'm currently deciding between MsTest and NUnit for the managed libraries.
I'm a big fan of UnitTest++, it's very lightweight, but does the job. You can run single tests there easily.
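A minimal sketch, for illustration (the header path varies between installs; sometimes it is just "UnitTest++.h"):

    // Minimal sketch of a UnitTest++ test. main() is whatever executable you
    // choose to write, not a framework-imposed macro.
    #include <UnitTest++/UnitTest++.h>

    TEST(AdditionWorks) {
        CHECK_EQUAL(4, 2 + 2);
    }

    int main() {
        return UnitTest::RunAllTests();   // plugs into your own executable
    }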
Great question! A few years ago I looked around forever for something worth using and came up short. I was looking for something that was very lightweight and did not require me to link in some libraries... you know something I could get up and running in minutes.
However, I persisted and ended up running across cxxtest.
From the website:
Doesn't require RTTI
Doesn't require member template functions
Doesn't require exception handling
Doesn't require any external libraries (including memory management, file/console I/O, graphics libraries)
Is distributed entirely as a set of header files (and a python script).
Wow... super simple! Include a header file, derive from the Test class and you're off and running. We've been using this for the past four years and I've yet to find anything I'm more pleased with.
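For reference, a suite is just a header like this (a minimal sketch; the cxxtestgen script then generates the runner):

    // Minimal sketch of a CxxTest suite: a header deriving from CxxTest::TestSuite.
    // The cxxtestgen script turns it into a runner, e.g.:
    //   cxxtestgen --error-printer -o runner.cpp MyTests.h
    #include <cxxtest/TestSuite.h>

    class MyTests : public CxxTest::TestSuite {
    public:
        void testAddition() {
            TS_ASSERT_EQUALS(2 + 2, 4);
        }
    };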
Try WinUnit. It sounds excellent, and is recommended by John Robbins.
I like the Boost unit test framework, principally because it is very lightweight.
I never heard of a unit-test framework that would generate stubs. I am generally quite unconvinced by code generation, if only because it gets obsolete very quickly. Maybe it becomes useful when you have a large number of classes?
A proponent of Test Driven Development would probably say that it is fundamental that you run the whole test suite every time, to make sure that you have not introduced a regression. If running all the tests takes too much time, maybe your tests are too big, or make too many calls to CPU-intensive functions that should be mocked out? If it remains a problem, a thin wrapper around the Boost unit tests should allow you to pick your tests, and would probably be quicker than learning another framework and porting all your tests.
http://groups.google.com/group/googletestframework, but it's pretty new
I'm using tut-framework
Aeryn is another framework worth looking at
Visual Studio has a built-in unit testing framework, this is a great link to setting up a test project for win32 console application:
http://codeketchup.blogspot.ie/2012/12/unit-test-for-unmanaged-c-in-visual.html
If you are working on a static library or DLL project it is much easier to set up; as others have pointed out, external testing frameworks like GTest and Boost.Test have extra features.
CppUnit was the original homage to JUnit.
Eclipse/JUnit is a solid package for java, and there are C++ extensions/equivalents for both. It can do what you're talking about. Of course, you'd have to change IDEs...
I too am a fan of UnitTest++.
The snag is that the source distribution contains almost 40 separate files. This is absurd. Managing the source control and build tasks for a simple project ends up dominated by looking after all these unit testing files.
I have modified UnitTest++ so that it can be integrated with a project by adding one .h and .cpp file. This I have dubbed "cutest". Details are at http://ravenspoint.com/blog/index.php?entry=entry080704-063557
It does not automatically generate test stubs, as asked for in the original question. I cannot help thinking that such a feature would be more trouble than it is worth, generating vast amounts of useless code "testing" accessor functions.
I would imagine automatically stubbing out test functions would be more a function of scripts for the framework, or of the development environment in question. Supposedly CodeGear's C++Builder will quickly generate test code for user functions.
Andrew Marlow's Fructose library's worth checking out... http://fructose.sourceforge.net/
I recall his documents containing a fairly detailed analysis and comparison of other offerings at the time he wrote Fructose, but I can't find a direct URL to that document.
I'm trying out Igloo, another header-only C++ test framework; even its two included dependencies are header-only.
So it's pretty straightforward and simple. Besides the included example on GitHub, there are examples and more details at the main site, igloo-testing.org. I'll update this later as I get more experience with it and other frameworks.