Google Mock and Catch.hpp Integration

I really like catch.hpp for testing (https://github.com/philsquared/Catch). I like its BDD style and its REQUIRE statements, which are its version of asserts. However, Catch does not come with a mocking framework.
The project I'm working on has GMock and GTest but we've used catch for a few projects as well. I'd like to use GMock with catch.
I found two conflicts between the catch.hpp and gtest header files, for the macros FAIL and SUCCEED. Since I'm using the BDD style rather than the TDD style, I commented them out, after checking that they weren't referenced anywhere else in catch.hpp.
Problem: EXPECT_CALL() doesn't return anything, and there is no callback to tell you whether the expectation passed. I want to do something like:
REQUIRE_NOTHROW(EXPECT_CALL(obj_a, an_a_method()).Times(::testing::AtLeast(1)));
Question: How can I get a callback (or a return value) if EXPECT_CALL fails?

EDIT: Figured out how to integrate it and put an example in this github repo https://github.com/ecokeley/catch_gmock_integration
After hours of searching I went back to gmock and just read a bunch about it. Found this in "Using Google Mock with Any Testing Framework":
::testing::GTEST_FLAG(throw_on_failure) = true;
::testing::InitGoogleMock(&argc, argv);
This causes an exception to be thrown on a failure. They recommend "Handling Test Events" for more seamless integration.
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
  // Called after a failed assertion or a SUCCEED() invocation.
  virtual void OnTestPartResult(const ::testing::TestPartResult& test_part_result) {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }
};
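For completeness, here is a minimal sketch of how the pieces can be wired together in a Catch main (this assumes CATCH_CONFIG_RUNNER so we supply our own main; file names are illustrative):

#define CATCH_CONFIG_RUNNER
#include "catch.hpp"
#include "gmock/gmock.h"

int main(int argc, char* argv[]) {
  // Make gmock failures throw so Catch's REQUIRE_NOTHROW can observe them.
  ::testing::GTEST_FLAG(throw_on_failure) = true;
  ::testing::InitGoogleMock(&argc, argv);

  // Optionally replace gtest's default printer with the listener above.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);

  return Catch::Session().run(argc, argv);
}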

Because of the FAIL and SUCCEED macro conflicts, version 1.8.0 of googletest added the following guards to gtest.h:
#if !GTEST_DONT_DEFINE_FAIL
# define FAIL() GTEST_FAIL()
#endif
#if !GTEST_DONT_DEFINE_SUCCEED
# define SUCCEED() GTEST_SUCCEED()
#endif
So by adding GTEST_DONT_DEFINE_FAIL=1 and GTEST_DONT_DEFINE_SUCCEED=1 to your preprocessor definitions you avoid the conflict; gtest then only provides the GTEST_FAIL() and GTEST_SUCCEED() spellings.
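For example, the guards can be defined before any gtest/gmock header is pulled in (the include order here is illustrative):

#define GTEST_DONT_DEFINE_FAIL 1
#define GTEST_DONT_DEFINE_SUCCEED 1
#include "gmock/gmock.h"  // no longer defines FAIL/SUCCEED
#include "catch.hpp"      // Catch's FAIL/SUCCEED are now unambiguous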

I created a small example of how to integrate GMock with Catch2.
https://github.com/matepek/catch2-with-gmock
Hope it helps someone.
Disclaimer: It is not bulletproof. Feel free to contribute and improve.

There is also gtestbdd in the cppbdd project, which adds BDD support for gtest in a single header (rather than replacing gtest). It recently gained an improvement that enables parameterized tests to work in a BDD style. There is a tutorial in the readme at:
https://github.com/Resurr3ction/cppbdd

Related

Mocking method from golang package

I have been unable to find a solution to mocking methods from golang packages.
For example, my project has code that attempts to recover when os.Getwd() returns an error. The easiest way I can think of to unit test this is to mock os.Getwd() to return an error and verify that the code behaves accordingly.
I tried using testify, but it does not seem to be possible.
Does anyone have experience with this?
My own solution was to take the method as an argument, which allows injecting a "mock" instead when testing. Additionally, create an exported method as a public facade and an unexported one for testing.
Example:
import "os"

func Foo() int {
    return foo(os.Getpid)
}

func foo(getpid func() int) int {
    return getpid()
}
Taking a look at the os.Getwd tests could give you an example of how to test your code. Look for the functions TestChdirAndGetwd and TestProgWideChdir.
From reading those, it seems that the tests create temporary folders.
So a pragmatic approach would be to create temporary folders, like the tests mentioned above do, then break them so that os.Getwd returns an error for you to catch in your test.
Just be careful doing these operations as they can mess up your system. I'd suggest testing in a lightweight container or a virtual machine.
I know this is a bit late, but here is how you can do it.
Testing DAL, system calls, or package calls is usually difficult. My approach to this problem is to push your system function calls behind an interface and then mock the functions of that interface. For example:
import "os"

type SystemCalls interface {
    Getwd() error
}

type SystemCallsImplementation struct{}

func (SystemCallsImplementation) Getwd() error {
    _, err := os.Getwd() // os.Getwd returns (dir, err); only the error matters here
    return err
}

func MyFunc(sysCall SystemCalls) error {
    return sysCall.Getwd()
}
With this, you inject the interface that wraps the system calls into your function. Now you can easily create a mock implementation of the interface for testing, like:
type MockSystemCallsImplementation struct {
    err error
}

func (m MockSystemCallsImplementation) Getwd() error {
    return m.err // set this to nil or a non-nil error in your test function
}
Hope this answers your question.
This is a limitation of the Go compiler; the Google developers don't want to allow any hooks or monkey patching. If unit tests are important to you, then you have to pick a method of source-code poisoning. All these methods amount to the following:
You can't use global packages directly.
You have to create an isolated version of the method and test it.
The production version of the method combines the isolated version with the global package.
But the best solution is to ignore the Go language completely (if possible).

How to mark a Google Test test-case as "expected to fail"?

I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".
Is there a way to do this?
EDIT:
I want the test to be executed, and the framework should verify that it fails as long as the test case is in the "expected fail" state.
EDIT2:
It seems that the feature I am interested in does not exist in google-test, but it does exist in the Boost Unit Test Framework, and in LIT.
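For reference, in Boost.Test this is spelled with the expected_failures decorator (a minimal sketch; the module and test names are illustrative):

#define BOOST_TEST_MODULE expected_failure_example
#include <boost/test/included/unit_test.hpp>

// Declare how many assertion failures are expected; the run then counts
// as passing while the test fails in exactly that way.
BOOST_AUTO_TEST_CASE(not_implemented_yet,
                     * boost::unit_test::expected_failures(1))
{
    BOOST_CHECK(false); // the one expected failure
}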
EXPECT_NONFATAL_FAILURE is what you want to wrap around code that you expect to fail. Note that you will have to include the gtest-spi.h header file:
#include "gtest-spi.h"
// ...
TEST_F( testclass, testname )
{
EXPECT_NONFATAL_FAILURE(
// your code here, or just call:
FAIL()
,"Some optional text that would be associated with"
" the particular failure you were expecting, if you"
" wanted to be sure to catch the correct failure mode" );
}
Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures
You can prefix the test name with DISABLED_.
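For example (names are illustrative), gtest still compiles the test but skips it at run time and reports it as disabled in the summary; you can force disabled tests to run with --gtest_also_run_disabled_tests:

TEST_F(MyFixture, DISABLED_NotImplementedYet)
{
    // This body is compiled but not executed by default.
    EXPECT_EQ(42, AnswerToEverything()); // AnswerToEverything is hypothetical
}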
I'm not aware of a direct way to do this, but you can fake it with something like this:
try {
    // do something that should fail and throw an exception
    ...
    EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
    // return or print a message, etc.
}
Basically, the test will fail if it reaches the contradictory expectation.
It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test Driven Development.
In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.
Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete when the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false-positives.

Tools similar to Google Test for C++ unit testing?

Are there any tools similar to GoogleTest for the purpose of functional testing in C++?
I plan to do them as part of Unit Testing and would like to know of other options available so that I can make an informed choice.
Take a look at this.
http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle
And I personally use this; I think it is pretty good.
http://unittest-cpp.sourceforge.net/
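For a quick taste, a complete UnitTest++ program can be as small as this (a sketch; the header path varies between installs):

#include <UnitTest++/UnitTest++.h>

TEST(SanityCheck)
{
    CHECK_EQUAL(4, 2 + 2); // CHECK, CHECK_EQUAL, CHECK_CLOSE, etc. are available
}

int main()
{
    // Runs every TEST in the binary and returns the number of failures.
    return UnitTest::RunAllTests();
}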
You can have a look at this for a short list of frameworks that you may explore.
Also, here is why you should use Google Test, from the tutorial itself. I find GTest easy to use, tests are verbose enough, and the documentation is clear.
If you are using Visual Studio, it embeds a unit test framework.
I just tried the example available on the MSDN site, and it works pretty well.
Here is the syntax:
#include <CppUnitTest.h>
#include "..\MyProjectUnderTest\MyCodeUnderTest.h"

using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(TestClassName)
{
public:
    TEST_METHOD(TestMethodName)
    {
        // Run a function under test here.
        int actualValue = MyProject::Multiply(2, 3);
        int expectedValue = 6;
        Assert::AreEqual(expectedValue, actualValue, L"Error, the values do not match.", LINE_INFO());
    }
};

Can I make Intellij Idea 11 IDE aware of assertEquals and other JUnit methods in Grails 2.0.x unit tests?

I find it very odd that, with such excellent Grails integration, IDEA does not recognize standard JUnit assertion methods in Grails unit tests. I created a brand new project and made one domain class with a corresponding test, to make sure it wasn't something weird with my larger project. Even if I add a @Test annotation, the IDE does not see any assertion methods:
@TestFor(SomeDomain)
class SomeDomainTests {
    @Test // thought adding this (not needed for Grails tests) would help, but it doesn't
    void testSomething() {
        assertEquals("something", 1, 1); // test runs fine, but the IDE thinks this method and any similar ones don't exist
    }
}
I have created an issue in the IntelliJ bug tracker: http://youtrack.jetbrains.com/issue/IDEA-82790. It will be fixed in IDEA 11.1.0.
As a workaround you can add "import static org.junit.Assert.*" to the imports.
Note: using "assert 1 == 1 : 'message'" is preferable to "assertEquals('message', 1, 1)" in Groovy code.
IDEA has problems if you use 'def' to define a variable (so its type is not known) and then pass it to a strongly typed Java method, because it can't infer the type.
So it will give a message with words to the effect of "there is no method assertEquals() that takes arguments with type String, null, null".
I wouldn't expect this message in the example you give (because you are using ints directly, not a dynamically-typed variable), but I thought you might have missed it when trying to create a simple example code snippet for the question.
With the @TestFor annotation, an AST transformation adds methods to your test class, and IDEA does not pick up these methods.
You have two options:
Make the test class extend GrailsUnitTestCase.
Add the missing methods as dynamic methods to your test class.

Test framework for component testing

I am looking for a test framework that suits my requirements. Following are the steps that I need to perform during automated testing:
SetUp (There are some input files that need to be read or copied into specific folders.)
Execute (Run the standalone executable.)
Tear Down (Clean up to bring the system back to its old state.)
Apart from this, I also want some intelligence to ensure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and cppunit with scons for this. I thought of asking this question to make sure I am heading in the right direction. Can you suggest any other test frameworks? And what other requirements should be considered when selecting the right test framework?
Try googletest, AKA gTest; it is no worse than any other unit test framework, and can beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
protected:
    // You can remove any or all of the following functions if its body
    // is empty.

    FooTest() {
        // You can do set-up work for each test here.
    }

    virtual ~FooTest() {
        // You can do clean-up work that doesn't throw exceptions here.
    }

    // If the constructor and destructor are not enough for setting up
    // and cleaning up each test, you can define the following methods:

    virtual void SetUp() {
        // Code here will be called immediately after the constructor (right
        // before each test).
    }

    virtual void TearDown() {
        // Code here will be called immediately after each test (right
        // before the destructor).
    }

    // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
    // Exercises the Xyz feature of Foo.
}

}  // namespace
Scons could take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak another to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests, coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python and not C++ or something like that, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++.
After reading the article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance) it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.
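For reference, a CxxTest suite looks roughly like this (a sketch; test methods must start with "test", and the cxxtestgen script, which is why Python is needed, generates the runner):

#include <cxxtest/TestSuite.h>

class MyTestSuite : public CxxTest::TestSuite
{
public:
    void testAddition()
    {
        TS_ASSERT_EQUALS(2 + 2, 4); // TS_ASSERT, TS_ASSERT_EQUALS, etc. are provided
    }
};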