Tools similar to Google Test for C++ unit testing?

Are there any tools similar to Google Test for functional testing in C++?
I plan to write these tests as part of unit testing and would like to know what other options are available so that I can make an informed choice.

Take a look at this:
http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle
I personally use this one, and I think it is pretty good:
http://unittest-cpp.sourceforge.net/
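For a quick feel of UnitTest++, a minimal test might look like this (just a sketch; the exact include path depends on the version installed, and Multiply is a stand-in function under test):
#include <UnitTest++/UnitTest++.h>  // some installs use "UnitTest++.h" instead

// Stand-in function under test.
int Multiply(int a, int b) { return a * b; }

TEST(MultiplyReturnsProduct)
{
    CHECK_EQUAL(6, Multiply(2, 3));  // CHECK_EQUAL(expected, actual)
}

int main()
{
    // Runs every TEST in the binary; returns the number of failures.
    return UnitTest::RunAllTests();
}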

You can have a look at this for a short list of frameworks that you may explore.
Also, here is why you should use Google Test, from the tutorial itself. I find GTest easy to use, the tests are verbose enough, and the documentation is clear.
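For a taste of the syntax, a bare-bones GTest case looks roughly like this (Multiply is a made-up function under test; link against gtest_main, or provide the usual InitGoogleTest main, to run it):
#include <gtest/gtest.h>

// Hypothetical function under test.
int Multiply(int a, int b) { return a * b; }

TEST(MultiplyTest, HandlesPositiveInput)
{
    EXPECT_EQ(6, Multiply(2, 3));  // non-fatal assertion: the test continues on failure
    ASSERT_NE(0, Multiply(1, 1));  // fatal assertion: the test stops here on failure
}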

If you are using Visual Studio, it embeds a unit test framework.
I just tried the example available on the MSDN site, and it works pretty well.
Here is the syntax:
#include <CppUnitTest.h>
#include "..\MyProjectUnderTest\MyCodeUnderTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(TestClassName)
{
public:
    TEST_METHOD(TestMethodName)
    {
        // Run a function under test here.
        int actualValue = MyProject::Multiply(2, 3);
        int expectedValue = 6;
        Assert::AreEqual(expectedValue, actualValue, L"Error, the values do not match.", LINE_INFO());
    }
};

Related

Disable Unit Test MSTest

I have been tasked with repairing our decrepit unit test framework, and I am simply trying to disable a few failing tests, but I don't know how to do this in code. In C#, it's as simple as adding the [Ignore] attribute. In C++, I figured out how to disable all of the tests for a particular class, but I want to do it for specific tests as well:
BEGIN_TEST_CLASS_ATTRIBUTE()
TEST_CLASS_ATTRIBUTE(L"Ignore", L"true")
END_TEST_CLASS_ATTRIBUTE()
Does anyone know how to disable a specific unit test in a source file in C++ using the MSTest framework? Thanks in advance, Google has not been of much help!
You can do this:
BEGIN_TEST_METHOD_ATTRIBUTE(Test_Name)
TEST_METHOD_ATTRIBUTE(L"Ignore", L"true")
END_TEST_METHOD_ATTRIBUTE()
TEST_METHOD(Test_Name)
{
// code
}
Or this:
BEGIN_TEST_METHOD_ATTRIBUTE(Test_Name)
TEST_IGNORE()
END_TEST_METHOD_ATTRIBUTE()
TEST_METHOD(Test_Name)
{
// code
}
Check more here.

Google Mock and Catch.hpp Integration

I really like catch.hpp for testing (https://github.com/philsquared/Catch). I like its BDD style and its REQUIRE statements (its version of asserts). However, catch does not come with a mocking framework.
The project I'm working on has GMock and GTest but we've used catch for a few projects as well. I'd like to use GMock with catch.
I found two conflicts between the catch.hpp and gtest header files, for the macros FAIL and SUCCEED. Since I'm using the BDD style rather than the TDD style, I commented them out, after checking that they weren't referenced anywhere else in catch.hpp.
Problem: Using EXPECT_CALL() doesn't return anything or have callbacks to know if the EXPECT passed. I want to do something like:
REQUIRE_NOTHROW(EXPECT_CALL(obj_a, an_a_method()).Times(::testing::AtLeast(1)));
Question: How can I get a callback (or a return value) if EXPECT_CALL fails?
EDIT: Figured out how to integrate it and put an example in this github repo https://github.com/ecokeley/catch_gmock_integration
After hours of searching I went back to gmock and just read a bunch about it. Found this in "Using Google Mock with Any Testing Framework":
::testing::GTEST_FLAG(throw_on_failure) = true;
::testing::InitGoogleMock(&argc, argv);
This causes an exception to be thrown on a failure. They recommend "Handling Test Events" for more seamless integration.
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
    // Called after a failed assertion or a SUCCEED() invocation.
    virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) {
        printf("%s in %s:%d\n%s\n",
               test_part_result.failed() ? "*** Failure" : "Success",
               test_part_result.file_name(),
               test_part_result.line_number(),
               test_part_result.summary());
    }
};
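For example, a custom main for a Catch binary could register that listener roughly like this (only a sketch; it assumes CATCH_CONFIG_RUNNER and that the FAIL/SUCCEED macro conflict is resolved as described below):
#define CATCH_CONFIG_RUNNER   // we supply main ourselves instead of Catch
#include "catch.hpp"
#include <gmock/gmock.h>

int main(int argc, char* argv[])
{
    ::testing::InitGoogleMock(&argc, argv);

    // Swap gtest's default result printer for the MinimalistPrinter above,
    // so mock-expectation failures are reported even though gtest itself
    // never runs any tests.
    ::testing::TestEventListeners& listeners =
        ::testing::UnitTest::GetInstance()->listeners();
    delete listeners.Release(listeners.default_result_printer());
    listeners.Append(new MinimalistPrinter);  // gtest takes ownership

    return Catch::Session().run(argc, argv);
}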
Regarding the FAIL and SUCCEED macros: in version 1.8.0, gtest.h (bundled with gmock) defines them as follows:
#if !GTEST_DONT_DEFINE_FAIL
# define FAIL() GTEST_FAIL()
#endif
#if !GTEST_DONT_DEFINE_SUCCEED
# define SUCCEED() GTEST_SUCCEED()
#endif
So by adding GTEST_DONT_DEFINE_FAIL and GTEST_DONT_DEFINE_SUCCEED to your preprocessor definitions, you avoid the conflict.
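Concretely, that can look like this (or the equivalent -D flags on the compiler command line):
// Define these before any gtest/gmock header is included, so gtest only
// provides GTEST_FAIL()/GTEST_SUCCEED() and Catch keeps FAIL and SUCCEED.
#define GTEST_DONT_DEFINE_FAIL 1
#define GTEST_DONT_DEFINE_SUCCEED 1
#include <gmock/gmock.h>
#include "catch.hpp"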
I created a small example of how to integrate GMock with Catch2.
https://github.com/matepek/catch2-with-gmock
Hope it helps someone.
Disclaimer: It is not bulletproof. Feel free to contribute and improve.
There is also gtestbdd in the cppbdd project, which adds BDD support for gtest in a single header (rather than replacing it). It recently gained an improvement that lets parameterized tests work in a BDD style. There is a tutorial in the readme at:
https://github.com/Resurr3ction/cppbdd

Understanding metaClass in Grails tests

I'm currently learning grails, and working through the guide on testing.
There's an example provided which covers writing a test for this piece of code in a fictional BookController:
def show = {
[ book : Book.get( params.id ) ]
}
The guide suggests the following approach for mocking out the result of params.id:
void testA() {
BookController.metaClass.getParams = {-> [id:10] }
}
As this is a change to the static definition of BookController, does it persist between tests, or does the Grails magic somehow automatically clean up in the tearDown method?
i.e., if I were to write a subsequent test that skipped the setup of metaClass.getParams and that ran after testA, would params.id still return 10?
If so, what's the standard grails practice for cleaning up in test tear-down? It doesn't seem to be covered in the guide that I'm reading.
You're using an ancient version of the docs covering 1.0.x. Testing support is a lot more solid now, so see the updated chapter 9 in http://grails.org/doc/latest/

Test framework for component testing

I am looking for a test framework that suits my requirements. These are the steps I need to perform during automated testing:
Set up (some input files need to be read or copied into specific folders).
Execute (run the standalone executable).
Tear down (clean up to bring the system back to its old state).
Apart from this, I also want some intelligence to ensure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and CppUnit with SCons for this. I'm raising this question to make sure I am heading in the right direction. Can you suggest any other test framework tools? And what other requirements should be considered when selecting the right test framework?
Try googletest, a.k.a. gTest; it is no worse than any other unit test framework, and it can beat some of them in ease of use. It is not exactly the integration testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample from the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if its body
    // is empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
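For completeness, the usual entry point for running these tests looks like this (linking against gtest_main provides an equivalent main automatically):
int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}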
SCons can take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we use gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to adjust and tweak someone else's to match your requirements.
One good option IMO, and something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (such as start/stop services, get the status of something, enable/disable logging in certain components). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading this article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have it set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.

Implementing xunit in a new programming language

Some of us still "live" in a programming environment where unit testing has not yet been embraced. To get started, the obvious first step would be to try to implement a decent framework for unit testing, and I guess xUnit is the "standard".
So what is a good starting point for implementing xUnit in a new programming language?
BTW, since people are asking: My target environment is Visual Dataflex.
Which language is it for? There are quite a few in place already.
If this is stopping you from getting started with writing unit tests, you could start out without a testing framework.
Example in C-style language:
void Main()
{
    var algorithmToTest = MyUniversalQuestionSolver();
    var question = Answer to { Life, Universe && Everything };
    var actual = algorithmToTest(question);
    var expected = 42;

    if (actual != expected) Error();

    // ... add a bunch of tests
}
Example in Cobol-style language:
MAIN.
    COMPUTE EXPECTED_ANSWER = 42
    SOLVE ANSWER_TO_EVERYTHING GIVING ACTUAL_ANSWER
    SUBTRACT ACTUAL_ANSWER FROM EXPECTED_ANSWER GIVING DIFFERENCE
    IF DIFFERENCE NOT.EQ 0 THEN
        DISPLAY "ERROR!"
    END-IF
*   ... add a bunch of tests
    STOP RUN
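In actual C++, the same framework-free idea might look like this minimal sketch (SolveUniversalQuestion is a made-up function under test):
#include <cstdio>

// Hypothetical function under test.
int SolveUniversalQuestion() { return 42; }

int main()
{
    int failures = 0;

    int expected = 42;
    int actual = SolveUniversalQuestion();
    if (actual != expected) {
        std::printf("FAIL: expected %d, got %d\n", expected, actual);
        ++failures;
    }

    // ... add a bunch of tests

    return failures;  // a non-zero exit code tells the build/CI that tests failed
}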
Run Main after you finish a change to your code (and possibly a compile). Run Main on the server whenever someone submits code to your repository.
When you get hooked, look for a proper framework, or see if you could factor some of the bits out of Main into your own framework, as sketched below.
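As a rough illustration of what such a factored-out micro-framework could grow into, still in plain C++ (a sketch only, not a finished xUnit port):
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A tiny test registry: each test case is a name plus a callable body.
struct TestCase { std::string name; std::function<void()> body; };

static std::vector<TestCase>& Registry()
{
    static std::vector<TestCase> tests;
    return tests;
}

static int g_failures = 0;

// Minimal assertion: record the failure but keep running the other tests.
#define CHECK(cond) \
    do { \
        if (!(cond)) { \
            std::printf("  FAIL: %s (%s:%d)\n", #cond, __FILE__, __LINE__); \
            ++g_failures; \
        } \
    } while (0)

int main()
{
    // Test cases could later be registered from other translation units.
    Registry().push_back({"multiplication works", [] { CHECK(2 * 3 == 6); }});
    Registry().push_back({"addition works",       [] { CHECK(2 + 2 == 4); }});

    for (const TestCase& test : Registry()) {
        std::printf("RUN  %s\n", test.name.c_str());
        test.body();
    }
    return g_failures;
}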
I'd suggest that a good starting point would be to use xunit on a couple of other languages to get a feel for how this style of unit test framework works. Then you'll need to go in depth into the behaviour and start working out how to recreate that behaviour in a way that fits with your new language.
I created a decent unit test framework in VFP by basing it on the code in Test Driven Development: A Practical Guide, by David Astels. You'll get a long way by reading through the examples, understanding the techniques and translating the Java code into your language.
I found Pragmatic Unit Testing in C# with NUnit very helpful!