How stable is NSubstitute?

My company is looking to standardize on an Isolation Framework. I was looking at MS Stubs (because Moles seemed cool and I thought I would keep everything in the same framework). However, Stubs is not quite ready for prime time yet; it is still a bit buggy in normal use.
So now I am looking at what else is out there. I have looked at Moq and Rhino Mocks. While doing that, I came across the fabulous comparisons done by Richard Banks, in which he covers NSubstitute. I really like what I see there.
However, after having been burned a bit by MS Stubs, I don't want to bet on an alpha/non-production ready Isolation Framework.
So, is NSubstitute ready for prime time? Or is it still a bit buggy?

There is a discussion about this on the official NSubstitute group.
The alpha tag was originally used to indicate that the API was still subject to change. The API has now stabilised, and most of the outstanding work for a 1.0 release is documentation. You can get an indication of other planned work (for both v1 and v2, mostly new features) from the issue log.
We have been using NSubstitute on a major project with a team of 6 developers and are very happy with it.
Note: I work on NSubstitute, so may be a tad biased. :)
UPDATE: NSubstitute 1.0 has been released.

I've been using NSubstitute on my project and I haven't had any problems. I picked NSubstitute mainly just to try it out, because I like the syntax and because its loose fakes make it easy to stub out an implementation. Since I wasn't sure at the time whether I would keep using it, I put a little wrapper around it as my own little DSL for whenever I need a fake.
Also worth noting: I mainly develop C# on Linux with Ubuntu, Mono, and MonoDevelop, and I haven't had any problems with it under the Mono 2.6.7 runtime. You can probably use any of the 2.6.* runtimes, but I haven't tried them. The Mono 2.6.* runtime is equivalent to the .NET 3.5 Framework.
It lives up to the phrase on its website: it's meant to be simple, succinct and pleasant to use.

We were using Rhino Mocks but we have replaced all our mocking code with NSubstitute. It is very stable, far easier to work with, needs far less code to do what you have to do, and has a small, concise but effective API.
Would strongly recommend it!

Related

What is the current state of BDD in C++?

So I found a few older questions asking about BDD frameworks for C++. CppSpec was recommended as a BDD-style framework, but the framework is not nearly as elegant as RSpec or even googletest.
I also saw a mention of an article detailing Unit Testing C and C++ with Ruby and RSpec, which sounded really interesting. However, the article states that there are a lot of limitations to using this method with C++. Has this gotten any better? If not with Ruby, has SWIG become better at interfacing C++ and Python? Could I then attach something like Cucumber?
The last thing that occurred to me was to use googlemock together with googletest (which I'm already using some for unit testing), though it still doesn't seem as elegant or quick as using Ruby or Python BDD frameworks.
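For reference, here is roughly how far plain googletest can be pushed toward BDD-style readability: the fixture plays the role of a "given" and each test name reads as a "then". This is only a sketch; the class and test names are my own, not from any framework's documentation.

```cpp
#include <gtest/gtest.h>
#include <stack>

// The fixture is the "given"; each test name reads as a "then".
class ANewStack : public ::testing::Test {
 protected:
  std::stack<int> stack;
};

TEST_F(ANewStack, IsEmpty) {
  EXPECT_TRUE(stack.empty());
}

TEST_F(ANewStack, GrowsWhenAnElementIsPushed) {
  stack.push(42);
  EXPECT_EQ(stack.size(), 1u);
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```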
I think the key to making BDD/TDD work is that writing tests should be quick and painless. I'm trying to introduce these and other development methods at work and I may need to convince people that writing tests can be short, sweet, and easy.
Update
I just found out about Kross, which might work well because the application uses Qt and targets a Linux environment. Could this potentially be easier/better than SWIG?
Have you taken a look at Igloo?
We don't have nearly as many features as, for instance, googletest, but we created it with the intention that you shouldn't have to repeat yourself. We took some inspiration from RSpec and NUnit and tried to create something pleasant; there's a small example below.
Disclaimer: If it's not obvious already, I'm one of the developers behind Igloo.
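For a flavour of what that looks like, here is a small example in Igloo's Context/Spec style. I am reconstructing the syntax from the project's documentation, so details (macro names, runner entry point) may differ between Igloo versions.

```cpp
#include <igloo/igloo.h>
#include <vector>

using namespace igloo;

// A Context groups related Specs, much like an RSpec "describe" block.
Context(a_freshly_created_vector)
{
  Spec(it_is_empty)
  {
    std::vector<int> v;
    Assert::That(v.empty(), IsTrue());
  }
};

int main(int argc, const char* argv[])
{
  return TestRunner::RunAllTests(argc, argv);
}
```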

What documentation, links, and advice could you offer me for creating a testing library?

I'm thinking of designing my own test library (framework) in C++.
I am wondering if some of you have already designed your own (and what good advice and documentation you could offer me) or decided not to (and why), and what criticisms you have of the different existing testing frameworks.
I want to know more about testing framework design.
In fact I have some pretty different things to test:
simple unit test
MVC and signal slot
data (especially for audio and DSP)
performance
compatibility
"So much things ... and so few time "
No really I need to test a lot of different things.
So I checked out how xUnit is designed, read the Addison-Wesley book on xUnit and the Advanced Unit Testing article series on CodeProject...
And various other articles, and discussed this with coworkers...
And in the end, I want to design my own (see the sketch below).
Why:
specific needs,
I like the do-it-yourself way (and learning why it's done this way in existing frameworks, and that I'm not a genius... ^^)
Thank you all.
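For concreteness, here is a minimal sketch of the self-registration core that most xUnit-style frameworks build on. All names (TEST_CASE, REQUIRE, Registrar) are illustrative, not from any existing framework, and a real framework would add fixtures, suites and reporting on top.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Central registry: each TEST_CASE adds itself here before main() runs.
struct TestCase { std::string name; std::function<void()> body; };

static std::vector<TestCase>& Registry() {
  static std::vector<TestCase> tests;  // construct-on-first-use
  return tests;
}

struct Registrar {
  Registrar(std::string name, std::function<void()> body) {
    Registry().push_back({std::move(name), std::move(body)});
  }
};

// Declares the test function, registers it, then opens its body.
#define TEST_CASE(name)                            \
  static void name();                              \
  static Registrar name##_registrar(#name, &name); \
  static void name()

#define REQUIRE(cond)                                                 \
  do {                                                                \
    if (!(cond)) {                                                    \
      std::printf("FAILED: %s (%s:%d)\n", #cond, __FILE__, __LINE__); \
      return;                                                         \
    }                                                                 \
  } while (0)

TEST_CASE(addition_works) {
  REQUIRE(1 + 1 == 2);
}

int main() {
  for (const auto& t : Registry()) {
    std::printf("running %s\n", t.name.c_str());
    t.body();
  }
  return 0;
}
```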
I remember having read some discussions about the CppUnit 2 design on the SourceForge wiki. I'd start from there. Also, Noel Llopis has explored the C++ unit-testing framework jungle.
But you're saying you want to create another framework, and you have only a little time. I'd suggest picking one framework that fits your needs for the unit tests and seeing if it can be used for your MVC and data testing. Moreover, unit testing frameworks are not designed to run performance tests. I'd recommend following the Unix philosophy here: simple little tools that each do one thing and do it well.
Learn at least one existing framework before you implement your own. My experience is that the framework is not the problem; learning how to write good unit tests is the hard part.
I have used several frameworks through the years, including CxxTest, CppUnitLite and UnitTest++. But my recommendation is Google Test together with Google Mock (Google Mock comes with a copy of Google Test bundled).
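To give a feel for the combination, here is a small interaction test using the two together. The Warehouse interface is my own illustrative example, and the MOCK_METHOD macro shown is the modern spelling (older Google Mock releases used MOCK_METHODn).

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

// Illustrative interface: something a class under test would depend on.
class Warehouse {
 public:
  virtual ~Warehouse() = default;
  virtual bool Remove(const std::string& item, int count) = 0;
};

class MockWarehouse : public Warehouse {
 public:
  MOCK_METHOD(bool, Remove, (const std::string&, int), (override));
};

TEST(WarehouseInteraction, RemoveIsCalledWithTheExpectedArguments) {
  MockWarehouse warehouse;
  EXPECT_CALL(warehouse, Remove("beer", 6))
      .WillOnce(::testing::Return(true));
  // Real code would pass the mock to the class under test; calling it
  // directly keeps this sketch self-contained.
  EXPECT_TRUE(warehouse.Remove("beer", 6));
}

int main(int argc, char** argv) {
  ::testing::InitGoogleMock(&argc, argv);
  return RUN_ALL_TESTS();
}
```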

How to write unit test and not get bored in development of FOSS project?

I'm developing a cross-platform project that should support:
Four C++ compilers - GCC, MSVC, SunStudio, Intel,
Five Operating Systems: Linux, OpenSolaris, FreeBSD, Windows, Mac OS X.
I totally understand that without proper unit testing there is no chance to perform proper QA on all these platforms.
However, as you all know, writing unit tests is extremely boring and slows down the development process (because it is boring, and developing FOSS software shouldn't be).
How do you manage to write good unit-testing code without stopping writing code?
If you at least earn a salary for it, you can say: at least I get something for this. But if you don't, it is much harder!
Clarification:
I understand that TDD should be the key, but TDD has the following very strict prerequisites:
You have exact specifications.
You have fully defined API.
This is true for a project developed in a customer-provider style, but it can't be done for a project that evolves.
Sometimes, to decide which feature I need, I have to create something and see whether it works well, whether the API is suitable and helps me, or whether it is ugly and does not satisfy me.
I see the development process more as evolution and less as development according to specifications, because when I begin implementing some feature, I sometimes do not know whether it will work well or what model it will use.
This is a quite different style of development, and it contradicts TDD.
On the other hand, supporting a wide range of systems requires unit tests to make sure that existing code works on the various platforms; if I want to support a new one, I only need to compile the code and run the tests.
I suggest doing no unit tests at all. Work a bit on the project and see where it leads. If you cannot summon enough motivation to do the obviously right thing, then work on your problem for a while: do some refactoring, some bug fixing, and multiple releases. If you then see what kinds of problems pop up, think of TDD as one of the possible tools to solve them.
The problems can be:
low quality
high bug-fixing costs
reluctance to refactor (i.e. fear of changing existing code)
suboptimal APIs (by the time an API is in use, it is too late to change it)
high testing costs (i.e. manual testing)
There is a big difference between theoretically knowing that unit testing and test first are the right approaches and experiencing the pain and learning from that experience. Motivation will come with this experience.
TDD is not a panacea. It can be implemented in a horrible fashion. It should not become a check box in your project check list.
Personally, I don't find testing boring. It's the first time I get to see my code actually run and find out whether it works or not!
Without some form of test program to run the new code directly, I wouldn't get to see it run until after I've built a user interface and wired it all together to make the new bits available through the UI and then, when it doesn't work the first time, I have to try to debug the new code, plus the UI, plus the glue that holds them together and dear god, I don't even know what layer the bug is in, never mind trying to identify the actual offending code. And even that much is assuming I still remember what I was working on before I went off on an excursion into UI-land.
A proper test harness bypasses all that and lets me just call the new code, localize any bugs to the tested section of code so they can be found quickly and fixed easily, see that it produces the right results, get my "it works!" rush, and move on to the next bit of code and my next rush of reward as quickly as possible.
Write them as you go, jumping from unit test to code to unit test to code... and so on.
Unit tests should follow all the best practices of production code, such as the DRY principle. If you get bored writing unit tests, you will also get bored writing production code.
Test-Driven Development (TDD) may help you, though, as you constantly switch back and forth between writing a unit test and then a bit of production code.
As others have told you: writing the tests first makes it fun. Your claim that it can't be done for a project that evolves needs to be reconsidered; actually, the opposite is true. If you are going the agile route, you are highly discouraged from defining everything up front. TDD fits a paradigm that assumes this is impossible and that change will happen. A book that makes this very clear, and gives examples of it, is Applying UML and Patterns.
Try using TDD (Test-Driven Development): instead of writing your tests after the actual coding is done, write them before, and let them drive your design.
Due to the nature of the project, a fair amount of automation is required: find a way to write each test once, for one OS/compiler combination, and then run it against all of the other available combinations.
Personally, I find writing code that I know works is quite exhilarating. But if you don't want to be bored writing unit tests then you'll need to cultivate a fascination for debugging.
To be serious, if you think that writing unit tests is boring and slow, you really need to re-address how you write them. I suggest you investigate using Test Driven Development. Write the tests in the programming language and run them automatically. Use the feedback from the tests to shape your code.
There are test-first frameworks for pretty much any language you care to mention, inspired by Kent Beck and Erich Gamma's work on JUnit. The Wikipedia article on TDD has more info, including a helpful link to a list of frameworks organized by language.

Unit testing in C++

I've been reading a lot about unit tests and test-driven development.
Recently, I also read some Java unit test code.
I however, prefer to develop in Qt. So I googled up "unit testing in c++" and found a host of information about various unit testing frameworks available for C++.
However, I could not find a reliable comparison of the various frameworks.
So I look to the SO community to guide me through the selection of what may be the "best" unit testing framework for C++.
Also, if anybody had specific comments regarding TDD in Qt (especially using Qt-Creator), then they are more than welcome.
I usually use Boost.Test, but if you are using Qt, its QtTestLib might be the better choice.
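If you do go with QtTestLib, a test looks like this; it mirrors the introductory example from Qt's own tutorial (private slots become test functions, and the trailing .moc include is needed when the class is defined in the .cpp file):

```cpp
#include <QtTest/QtTest>

class TestQString : public QObject
{
    Q_OBJECT
private slots:
    // Each private slot is discovered and run as a test function.
    void toUpper()
    {
        QString str = "Hello";
        QCOMPARE(str.toUpper(), QString("HELLO"));
    }
};

QTEST_MAIN(TestQString)
#include "testqstring.moc"
```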
I would recommend doctest (created by me); it's the lightest on compile times of all the popular testing frameworks. It is also a direct competitor to Catch, which is currently the most widely used framework; check out the differences in the FAQ.
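For a taste of how light it is, this is essentially the kind of example the doctest README shows: one #define generates main(), and test cases self-register.

```cpp
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include "doctest.h"

static int factorial(int n) { return n <= 1 ? 1 : n * factorial(n - 1); }

TEST_CASE("factorial of small numbers") {
    CHECK(factorial(0) == 1);
    CHECK(factorial(3) == 6);
    CHECK(factorial(10) == 3628800);
}
```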
This seems to be the same question as:
Unit testing in C++, which is actually about C++ despite the URL title.
From there, they link to two more SO questions which should help:
Unit testing for C++ code - Tools and methodology
C++ unit testing framework
There is a table on Wikipedia comparing all (?) of the available C++ unit test frameworks.
There is also an old comparison of C++ unit test frameworks. I do not think it has been updated, so I mention it only as a complement, since it is more thoroughly argued than the table. It covers CppUnit, CppUnitLite, Boost.Test, NanoCppUnit, Unit++, and CxxTest; notably, it does not cover the Google C++ framework.
The "xUnit" family of testing frameworks is usually pretty solid (jUnit, NUnit, etc.). I haven't used it myself, but there is a port of jUnit for C++:
http://sourceforge.net/projects/cppunit
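For reference, a minimal CppUnit test looks roughly like this (fixture macros plus an explicit runner in main; details vary a little between CppUnit versions):

```cpp
#include <cppunit/TestFixture.h>
#include <cppunit/extensions/HelperMacros.h>
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

class MathTest : public CppUnit::TestFixture {
  CPPUNIT_TEST_SUITE(MathTest);
  CPPUNIT_TEST(testAddition);
  CPPUNIT_TEST_SUITE_END();

 public:
  void testAddition() { CPPUNIT_ASSERT_EQUAL(4, 2 + 2); }
};

CPPUNIT_TEST_SUITE_REGISTRATION(MathTest);

int main() {
  CppUnit::TextUi::TestRunner runner;
  runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
  return runner.run() ? 0 : 1;  // run() returns true when all tests pass
}
```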
Boost is usually a good choice, and it contains a testing framework, the Boost Test Library. I have used it for small test cases and it did what I expected, but I haven't used it extensively, as in TDD.
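A minimal example with the Boost Test Library, using the header-only variant so nothing extra has to be linked:

```cpp
// Defining the module name before the include generates main() for us.
#define BOOST_TEST_MODULE example_tests
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(addition_works)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}
```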
If you want to get off the ground quickly without figuring out how to build a library, there is a single-header-file solution, which supports fixtures (setup and teardown) and the usual TEST() {} with CHECK_TRUE, etc.
It also has memory leak detection and performance testing capabilities.
https://gitlab.com/cppocl/unit_test_framework

Using unit tests as a "functionality contract"

Unit tests are often deployed with software releases to validate the install - i.e. do the install, run the tests and if they pass then the install is good.
I'm about to embark on a project that will involve delivering prototype software library releases to customers. The unit tests will be delivered as part of each release and in addition to using the tests to validate the install, I plan on using the unit tests that test the API as a "contract" for how the release should be used. If the user uses the release in a similar manner to how it is used by the unit tests then great. If they use it some other way then all bets are off.
Has anybody tried this before? Any thoughts on whether this is a good/bad idea?
Edit: To highlight a good point raised by ChrisA and Dan in replies below, the "unit tests that test the API" are better called the integration tests and their intent is to exercise the API and the software to demonstrate the functionality of the software from a customer perspective.
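As a sketch of the idea, here is what such a shipped "contract" test might look like. Everything here (the Codec class and its methods) is hypothetical, chosen only to show how a test can spell out the supported calling sequence for a customer; googletest stands in for whatever framework is shipped.

```cpp
#include <gtest/gtest.h>
#include <string>

// Stand-in for the shipped library; Codec and its methods are invented
// purely so the test below has something to exercise.
class Codec {
 public:
  void SetQuality(int q) { quality_ = q; }
  std::string Encode(const std::string& in) {
    return std::to_string(quality_) + ":" + in;  // placeholder behaviour
  }
 private:
  int quality_ = 0;
};

// The test name and body spell out the supported calling sequence, so a
// customer reading it learns: configure first, then encode.
TEST(CodecContract, ConfigureThenEncodeIsTheSupportedUsage) {
  Codec codec;
  codec.SetQuality(5);                              // step 1: configure
  const std::string out = codec.Encode("payload");  // step 2: use
  EXPECT_EQ(out, "5:payload");
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```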
Sounds like a good idea to me. I (we all?) routinely use unit tests internally to do just that. In using my unit tests to validate that I haven't broken anything I'm also implicitly verifying that my API contract hasn't changed. It seems like a natural usage of unit tests to deploy them in the fashion you're talking about.
Agile methodologies say: Tests are specifications, so this is a very good idea.
I fully expect to be flamed for this, but I don't understand how a set of unit tests proves anything at all about the kind of things a customer cares about, namely whether the application meets his business requirements.
Here's an example: I've just finished converting a chunk of code to fix a big mistake we made. It was a classic case of over-engineering, and the changes have touched about a dozen windows forms and about as many classes.
It's taken me a couple of days, it's now a lot simpler, we gained some features for free, and we lost a ton of code that did stuff that we now know we never really needed.
Every single one of those forms worked perfectly before. The public methods did exactly what they needed to do, and the underlying data accesses were just fine.
So any unit test would have passed.
Except, sadly, they did the wrong thing - which we didn't realise, except in retrospect. It's as if we'd built a prototype and only after trying to use it, realised that it wasn't right.
So now we have a leaner, meaner, fitter application.
But the things that were wrong, were wrong at a level where unit tests could never have revealed them, so I'm just not understanding how shipping a set of unit tests with an install does anything except give a false sense of security.
Maybe I'm not understanding something, but it seems to me that unless the thing that is shipped functions at the same level as the tests supplied, they prove nothing.
It's actually a pretty good idea, and extremely pleasant as an API user.
This technique can actually also be used the other way round: when you're using a "legacy" API, you can use unit tests to document the way you think the API behaves and to validate that it actually behaves as planned.
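A sketch of such a "learning test" (my own example, with googletest, and std::string standing in for the legacy API): each test pins down one behaviour we currently rely on, so if an upgrade changes it, the failure points straight at the broken assumption.

```cpp
#include <gtest/gtest.h>
#include <string>

// Documents our belief that substr clamps the count to the string length.
TEST(StdStringLearning, SubstrClampsTheCountToTheStringLength) {
  std::string s = "abc";
  EXPECT_EQ(s.substr(1, 100), "bc");
}

// Documents our belief that find reports absence via npos.
TEST(StdStringLearning, FindReturnsNposWhenAbsent) {
  std::string s = "abc";
  EXPECT_EQ(s.find('z'), std::string::npos);
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```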
If you are releasing a code library, this sounds great.
If you are releasing an ordinary software product with which your users will interact only via a GUI, your unit tests may not work at the same level of abstraction and may not be the most useful tool for assessing the behaviour of your product. A really good user manual (yes, this is possible) might be better for that.
If you're interested in providing a set of specifications with your code, perhaps you should investigate some of the behavior-driven development tools (nbehave, jbehave, rspec, etc.). These frameworks provide support for describing your tests in given/when/then syntax and for outputting formatted results in natural language. See nbehave for an example of a BDD tool for .NET. You can find an excellent description of BDD here.
Another option may be to write tests using an acceptance testing framework such as Fit or FitNesse (or the Java-only Concordion) and deliver these acceptance tests with the code. Both Fit/FitNesse and Concordion allow tests to be specified in plain HTML or even Word documents.
The benefit of either approach (BDD or acceptance testing frameworks) is that the results the user sees are more human-readable and understandable.
Tests will check requirements.
Requirements define functionality.
=> Tests will check functionality.
The problem is that this only works for functionality that can be covered by unit tests; integration or whole-system behaviour can't be checked this way.
Apart from that, checking functionality via unit tests is the core approach of TDD.
Meszaros calls this "Tests as documentation".