I want to write unit tests which work on an object. The difference from normal fixtures is that I don't want the fixture to be run before every test. The SetUp() of the fixture should be run only once, then a couple of tests should be performed, and after these tests the TearDown() of the fixture should be performed.
I am using googletest in C++. Is there a possibility to achieve this behaviour?
Example to clarify:
#include "gtest/gtest.h"
class socketClientFixture : public testing::Test
{
public:
CSocketClient *mClient;
void SetUp()
{
mClient = new CSocketClient();
mClient->connect();
}
void TearDown()
{
mClient->disconnect();
delete mClient;
}
TEST_F(socketClientFixture, TestCommandA)
{
EXPECT_TRUE(mClient->commandA());
}
TEST_F(socketClientFixture, TestCommandB)
{
EXPECT_TRUE(mClient->commandA());
}
int main(int ac, char* av[])
{
::testing::InitGoogleTest(&ac, av);
int res = RUN_ALL_TESTS();
return res;
}
In the example above I don't want TearDown() to be called after TestCommandA, nor SetUp() before TestCommandB.
The behaviour I want to achieve is:
SetUp()
TestCommandA
TestCommandB
TearDown()
This is due to the fact that the server needs some time after disconnecting to perform some operations.
Any help appreciated.
There is no built-in way to do exactly what you ask for, specifically because you are asking for the tests to be ordered.
You may pull more ideas from this section in the advanced google-test documentation.
If you are willing to sacrifice the test order, you could follow the exact example given in the link above and define static void SetUpTestCase() and static void TearDownTestCase(). Any tests written for that class will then share the same fixture setup.
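For illustration, a rough sketch of what that could look like for the question's fixture (CSocketClient and its methods are taken from the question; on newer googletest releases these hooks are called SetUpTestSuite()/TearDownTestSuite()):
class socketClientFixture : public testing::Test
{
public:
    // shared by all tests in this test case, so it must be static
    static CSocketClient* mClient;

    // called once, before the first test of the test case
    static void SetUpTestCase()
    {
        mClient = new CSocketClient();
        mClient->connect();
    }

    // called once, after the last test of the test case
    static void TearDownTestCase()
    {
        mClient->disconnect();
        delete mClient;
        mClient = nullptr;
    }
};

CSocketClient* socketClientFixture::mClient = nullptr;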
Do note that if you can avoid this altogether and instantiate the connection as a Mock (see google-mock) it would be a much better alternative. This will decouple your test from the server you want to connect to so you can test your code instead of testing the server. This will also make your tests run much faster.
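A rough idea of the mock-based alternative, assuming the code under test can be written against an interface (the interface, its method signatures, and the MOCK_METHOD macro from newer gmock releases are assumptions; older gmock versions use MOCK_METHOD0 and friends):
#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical interface the production code would depend on instead of CSocketClient.
class ISocketClient {
public:
    virtual ~ISocketClient() = default;
    virtual void connect() = 0;
    virtual void disconnect() = 0;
    virtual bool commandA() = 0;
};

class MockSocketClient : public ISocketClient {
public:
    MOCK_METHOD(void, connect, (), (override));
    MOCK_METHOD(void, disconnect, (), (override));
    MOCK_METHOD(bool, commandA, (), (override));
};

TEST(CommandSenderTest, SendsCommandA) {
    MockSocketClient client;
    EXPECT_CALL(client, commandA()).WillOnce(::testing::Return(true));
    // Pass `client` to the code under test instead of a real, connected CSocketClient.
    EXPECT_TRUE(client.commandA());
}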
I've implemented a C++ class that executes something in a timed cycle using a thread. The thread is set up to be scheduled with the SCHED_DEADLINE scheduler of the Linux kernel. To set up this scheduler, the process running it must have certain Linux capabilities.
My question is: how do I test this?
I can of course write a unit test, create the thread, do some counting, and exit the test after a time to validate the cycle counter, but that only works if the unit test is allowed to apply the right scheduler. If not, the default scheduler applies, the timing of the cyclic loops is immediate, and a different behaviour is exercised.
How would you test this scenario?
Some Code Example:
void thread_handler() {
    // setup SCHED_DEADLINE Parameters
    while (running) {
        // execute application logic
        sched_yield();
    }
}
There are two separate units to test here: first the cyclic execution of code, and second the scheduling strategy that wraps the OS interface. The first unit would look like this:
class CyclicThread : public std::thread {
public:
    CyclicThread(Strategy& strategy) :
        std::thread(std::bind(&CyclicThread::worker, this)),
        strategy(strategy) { }

    void add_task(std::function<void()> handler) {
        ...
    }

private:
    Strategy& strategy;

    void worker() {
        while (running) {
            execute_handler();
            strategy.yield();
        }
    }
};
This is fairly easy to test with a mock object of the strategy.
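A rough sketch of such a test with google-mock, assuming Strategy declares yield() as virtual and CyclicThread offers some way to stop the loop (a hypothetical stop() that clears the running flag and joins the thread; that part is elided in the snippet above):
#include <chrono>
#include <thread>
#include "gmock/gmock.h"
#include "gtest/gtest.h"

class MockStrategy : public Strategy {
public:
    MOCK_METHOD(void, yield, (), (override));
};

TEST(CyclicThreadTest, CallsYieldWhileRunning) {
    MockStrategy strategy;
    EXPECT_CALL(strategy, yield()).Times(::testing::AtLeast(1));

    CyclicThread thread(strategy);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));  // let a few cycles run
    thread.stop();  // hypothetical: stops the worker loop and joins the thread
}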
The Deadline scheduling strategy looks like this:
class DeadlineStrategy {
public:
    void yield() {
        sched_yield();
    }
};
This class can also be tested fairly easily by mocking the sched_yield() system call.
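One possible seam on a GNU toolchain: link the test with -Wl,--wrap,sched_yield so that the linker reroutes the call into a counting wrapper (the __wrap_/__real_ names follow ld's --wrap convention; this is only a sketch):
#include <atomic>
#include "gtest/gtest.h"

static std::atomic<int> g_yield_calls{0};

extern "C" int __real_sched_yield(void);    // the real libc function, supplied by the linker
extern "C" int __wrap_sched_yield(void) {   // every call to sched_yield() ends up here
    ++g_yield_calls;
    return __real_sched_yield();
}

TEST(DeadlineStrategyTest, YieldForwardsToSchedYield) {
    g_yield_calls = 0;
    DeadlineStrategy strategy;
    strategy.yield();
    EXPECT_EQ(1, g_yield_calls.load());
}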
I am using the benchmark library to benchmark some code. I want to call a setup method once before the actual benchmark code runs, not repeatedly for every iteration or for multiple benchmark calls. For example:
static void BM_SomeFunction(benchmark::State& state) {
    // Perform setup here
    for (auto _ : state) {
        // This code gets timed
    }
}
As we can see, the setup code will be called multiple times here, for the range I specify. I did have a look at fixture tests. But my question is: can it be done without using fixtures, and if so, how?
As far as I can remember, the function is called multiple times because benchmark dynamically decides how many times your benchmark needs to be run in order to get reliable results. If you don't want to use fixtures, there are multiple workarounds. You can use a global or static class member bool to check whether the setup function was already called (don't forget to set it after the setup routine has run). Another possibility is to use a singleton that calls the setup method in its ctor:
#include <iostream>

class Setup
{
    Setup()
    {
        // call your setup function
        std::cout << "singleton ctor called only once in the whole program" << std::endl;
    }

public:
    static void PerformSetup()
    {
        static Setup setup;
    }
};
static void BM_SomeFunction(benchmark::State& state) {
    Setup::PerformSetup();
    for (auto _ : state) {
        // ...
    }
}
However, fixtures are quite simple to use and are made for such use-cases.
Define a fixture class which inherits from benchmark::Fixture:
class MyFixture : public benchmark::Fixture
{
public:
    // add members as needed

    MyFixture()
    {
        std::cout << "Ctor only called once per fixture test case that uses it" << std::endl;
        // call whatever setup functions you need in the fixture's ctor
    }
};
Then use the BENCHMARK_F macro to use your fixture in the test.
BENCHMARK_F(MyFixture, TestcaseName)(benchmark::State& state)
{
    std::cout << "Benchmark function called more than once" << std::endl;
    for (auto _ : state)
    {
        // run your benchmark
    }
}
However, if you use the fixture in multiple benchmarks, the ctor will be called multiple times. If you really need a certain setup function to be called only once during the whole benchmark, you can use a Singleton or a static bool to work around this as described earlier. Maybe benchmark also has a built-in solution for that, but I don't know it.
Alternative to Singleton
If you don't like the singleton class, you can also use a global function like this:
void Setup()
{
    static bool callSetup = true;
    if (callSetup)
    {
        // Call your setup function
    }
    callSetup = false;
}
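Usage is then the same as with the singleton above (the benchmark name here is just an example):
static void BM_SomeOtherFunction(benchmark::State& state) {
    Setup();  // the expensive work only happens on the first call
    for (auto _ : state) {
        // ...
    }
}
BENCHMARK(BM_SomeOtherFunction);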
I have a method with the following code:
public void myMethod() {
    if (condition1) {
        doSomething1();
        if (condition2) {
            doSomething2();
        } else {
            doSomething3();
        }
    }
}
Now doSomething1, doSomething2, and doSomething3 are void methods.
How do I unit-test myMethod?
E.g., if condition1 is satisfied, check that doSomething1 was called.
Is there something we can do to refactor this to make it easily testable?
A general approach could be 3 test cases, each exercising a single condition (a sketch follows the list). For each test case:
doSomethingX would be patched with a test double (there are mock libraries for pretty much all languages)
conditionX would be triggered
doSomethingX would execute
the test would assert that doSomethingX was actually called
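Translated into C++/google-mock terms (the class name, the virtual condition/doSomething hooks, and the signatures are assumptions made only for this sketch; in another language you would use whatever mock library is available), the first of those test cases could look roughly like this:
#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Production-like class; the hooks are virtual so a test can override them.
class Worker {
public:
    virtual ~Worker() = default;

    void myMethod() {
        if (condition1()) {
            doSomething1();
            if (condition2()) {
                doSomething2();
            } else {
                doSomething3();
            }
        }
    }

    virtual bool condition1() { return false; }  // real checks live here in production
    virtual bool condition2() { return false; }
    virtual void doSomething1() {}
    virtual void doSomething2() {}
    virtual void doSomething3() {}
};

class MockWorker : public Worker {
public:
    MOCK_METHOD(bool, condition1, (), (override));
    MOCK_METHOD(bool, condition2, (), (override));
    MOCK_METHOD(void, doSomething1, (), (override));
    MOCK_METHOD(void, doSomething2, (), (override));
    MOCK_METHOD(void, doSomething3, (), (override));
};

TEST(MyMethodTest, Condition1TrueCondition2FalseCallsDoSomething1And3) {
    MockWorker w;
    EXPECT_CALL(w, condition1()).WillOnce(::testing::Return(true));
    EXPECT_CALL(w, condition2()).WillOnce(::testing::Return(false));
    EXPECT_CALL(w, doSomething1()).Times(1);
    EXPECT_CALL(w, doSomething2()).Times(0);
    EXPECT_CALL(w, doSomething3()).Times(1);

    w.myMethod();  // exercises the real logic with the mocked hooks
}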
There are many strategies for removing the need to mock.
If doSomethingX is an instance method, then you could create a test-specific subclass, override doSomethingX, and make your assertion in the subclass.
You could also refactor your method to require the caller to inject the doSomethingX dependency (dependency injection):
public void myMethod(SomethingStrategy somethingStrategy)
Then the test could easily configure a mock object and call myMethod with the mock object.
Dependency injection could take place on the class level by having the class be instantiated with a somethingStrategy as well.
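A minimal C++ sketch of that injection (the strategy interface and passing the conditions in as parameters are illustrative assumptions):
struct SomethingStrategy {
    virtual ~SomethingStrategy() = default;
    virtual void doSomething1() = 0;
    virtual void doSomething2() = 0;
    virtual void doSomething3() = 0;
};

void myMethod(bool condition1, bool condition2, SomethingStrategy& strategy) {
    if (condition1) {
        strategy.doSomething1();
        if (condition2) {
            strategy.doSomething2();
        } else {
            strategy.doSomething3();
        }
    }
}

// In a test, pass a mock SomethingStrategy and assert on the calls it receives.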
(a) The fact that these methods return void suggests their outcome is irrelevant and we just don't care. But you need to test the outcome, and therefore need to be able to observe it. So this is a huge red flag that the code isn't SOLID and needs refactoring.
(b) Hey, this could be legacy code that is impossible to change, so if your methods truly are voids, then the following refactoring could help, if you then assert that myNewMethod2 and doSomething2/3 are called once / not called at all depending on the conditions (e.g. via the Moq unit-testing framework):
public void myNewMethod()
{
    bool cnd1 = (condition1);
    bool cnd2 = (condition2);
    if (cnd1)
    {
        myNewMethod2(cnd2);
    }
}

public void myNewMethod2(bool cnd2)
{
    doSomething1();
    myNewMethod3(cnd2);
}

public void myNewMethod3(bool cnd2)
{
    if (cnd2)
    {
        doSomething2();
    }
    else
    {
        doSomething3();
    }
}
(c) Another strategy for voids, which I'm not a great fan of but which leaves your original code largely intact, is this:
public void myMethod() {
    try
    {
        if (condition1) {
            doSomething1();
            if (condition2) {
                doSomething2();
            } else {
                doSomething3();
            }
        }
    }
    catch (Exception ex)
    {
        //
    }
}
Your unit test can then assert that no exception is thrown. Not ideal, but if needs must...
I have multiple unit tests, one per class, each in a separate file.
One of my standard unit tests looks like this:
#include "gmock/gmock.h"
#include "gtest/gtest.h"
class ClassAUnitTest : public ::testing::Test {
protected:
// Per-test-case set-up.
// Called before the first test in this test case.
// Can be omitted if not needed.
static void SetUpTestCase() {
//..
}
// Per-test-case tear-down.
// Called after the last test in this test case.
// Can be omitted if not needed.
static void TearDownTestCase() {
//..
}
// You can define per-test set-up and tear-down logic as usual.
virtual void SetUp() { }
virtual void TearDown() {
}
// Some expensive resource shared by all tests.
//..
};
TEST_F(ClassAUnitTest, testCase1) {
// Assign .. Act .. Assert.
}
The way I know is to prefix the test name with DISABLED_, like this:
TEST_F(ClassAUnitTest, DISABLED_testCase1) {
// Assign .. Act .. Assert.
}
However, it is very impractical to run all tests while working on one failing unit test.
I use Visual Studio Ultimate 2013 with Gmock 1.7.0.
Question: How can I easily select which Unit tests or specific tests to run, and which ones not?
First of all, your unit tests should be lightning fast. Otherwise people are not going to execute them.
As explained in Selecting tests, you can use the --gtest_filter option. In your specific case: --gtest_filter=ClassAUnitTest.*
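For example, assuming the test binary is called my_tests (filters accept * wildcards, ':'-separated patterns, and a leading '-' to exclude):
./my_tests --gtest_filter=ClassAUnitTest.*          # run all tests of that fixture
./my_tests --gtest_filter=ClassAUnitTest.testCase1  # run a single test
./my_tests --gtest_filter=-ClassAUnitTest.*         # run everything except that fixture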
I have code I want to extract from a unit test to make my test method clearer:
Check check;
check.Amount = 44.00;
// unit testing on the check goes here
How should I extract this? Should I use a pointer to the check or some other structure to make sure it's still allocated when I use the object?
I don't want to use a constructor because I want to isolate my test creation logic from the production creation logic.
In a modern unit-testing framework you usually have a test case such as:
class MyTest : public ::testing::Test {
protected:
    MyTest() {}
    ~MyTest() {}

    virtual void SetUp() {
        // this will be invoked just before each unit test of the test case
        // place here any preparations or data assembly
        check.Amount = 44.00;
    }

    virtual void TearDown() {
        // this will be invoked just after each unit test of the test case
        // place here releasing of data
    }

    // any data used in tests
    Check check;
};
// single test that uses your predefined preparations and releasing
TEST_F(MyTest, IsDefaultInitializedProperly) {
ASSERT_FLOAT_EQ(44., check.Amount);
}
// and so on, SetUp and TearDown will be done from scratch for every new test
You can find such functionality e.g. in the Google Test framework (https://github.com/google/googletest/).