I am fairly new to TDD and not so seasoned at unit testing, hence the question.
I have this legacy function written in PHP
function foo() {
    $x = bar();
    $y = baz();
    if ($x > $y)
        return 'greater';
    return 'lesser';
}
If $x (the value returned by bar()) is always greater than $y (the value returned by baz()), I will never be able to test the 'lesser' return statement.
What should I do to cover both the test cases and achieve 100% code coverage?
Redefining foo() as foo($x, $y) for dependency injection hooks is not an option with legacy code.
I am assuming foo, bar and baz are all global functions. (If they are part of a class, you want to be using PHPUnit's mocking functionality).
I blogged before about how to use a pecl extension to replace a built-in function:
http://darrendev.blogspot.jp/2012/07/mock-socket-in-php.html
This article shows a very interesting alternative approach using namespaces:
http://marcelog.github.io/articles/php_mock_global_functions_for_unit_tests_with_phpunit.html
It appears you will need to wrap your legacy code in a file with a namespace declaration at the top. I don't know if that is a show-stopper for you or not.
Since bar() and baz() do not take input parameters, either they return constants (in which case you can immediately refactor foo() to just return 'greater'), or they depend on some external variable(s). In the latter case, do something like:
function testFooReturnsGreater() {
    setEnvironmentSoBarIsGreaterThanBaz();
    assert(foo() === 'greater');
}

function testFooReturnsLesser() {
    setEnvironmentSoBarIsLesserThanBaz();
    assert(foo() === 'lesser');
}
Since you say bar() > baz() unless it's Christmas, the setEnvironmentxxx() fixtures would need to change the program's notion of the current date (hopefully something you can mock, and not the actual system clock).
I have a legacy interface that has a function with a signature that looks like the following:
int provide_values(int &x, int &y)
x and y are considered output parameters in this function. Note: I'm aware of the drawbacks of using output parameters and that there are better design choices for such an interface. I'm not trying to debate the merits of this interface.
Within the implementation of this function, it first checks to see if the addresses of the two output parameters are the same, and returns an error code if they are.
if (&x == &y) {
return -1; // Error: both output parameters are the same variable
}
Is there a way at compile time to prevent callers of this function from providing the same variable for the two output parameters without having such a check within the body of the function? I'm thinking of something similar to the restrict keyword in C, but that only is a signal to the compiler for optimization, and only provides a warning when compiling code that calls such a function with the same pointer.
No, there's not. Keep in mind that the calling code could derive x and y from references returned by arbitrary black-box functions. And even otherwise, it is impossible in general for the compiler to robustly determine whether two references refer to the same object, since which objects they are bound to is determined by the execution of the program (alias analysis is undecidable in general).
If all you want to do is prevent the user from calling provide_values(xyz, xyz), you can use a macro as in the following example. However, this won't protect the user from calling provide_values(xyz, reference_to_xyz), so the whole thing is probably pointless anyway.
#include <cstring>
void provide_values(int&, int&) {}
#define PROV_VAL(x, y) if (strcmp((#x),(#y))) { provide_values(x, y); } else { throw -1; }
int main()
{
int x;
int y;
PROV_VAL(x,y);
//PROV_VAL(x,x); // this throws
int& z = x;
PROV_VAL(x,z); // this passes though!
}
I have a C++11 header which declares a const value, my_const_value, and a function called GetValue that runs complex logic using that const value and returns a value.
I want to unit test GetValue with different values of my_const_value.
I know this is not advisable, but for the purpose of the unit test I wish to exercise GetValue with different values of my_const_value. Is there some hack-ish way in C++ to change the value of a const, even though it is a const?
//MyHeader.hpp
namespace myheader {
const int my_const_value = 5;
int GetValue() {
// In the real world, let's say the line below is complex logic that needs to be tested by a unit test
return (my_const_value * 5) / 25;
}
}
#include "MyHeader.hpp"
#include <gtest/gtest.h>
TEST(MyHeaderTest, Testing_Something) {
EXPECT_EQ(1, myheader::GetValue()); // This is okay
// I want to test that if the value of my_const_value changes to something else in the future, then
// myheader::GetValue returns the expected result. But of course, I cannot change my_const_value because it is a const.
// Is there a way to hack around this for a unit test? Is there a way that I could still hack and change the value of my_const_value?
myheader::my_const_value = 25;
EXPECT_EQ(5, myheader::GetValue());
}
I know that I could const_cast my_const_value to a non-const variable, but that wouldn't help here. If there is some hack to change the value of my_const_value by using a pointer or something, that would answer my question.
No.
Changing the value of something that is declared as const invokes undefined behaviour. For illustration, consider that this code
const int x = 4;
modify_const_somehow(x,42); // "magically" assigns 42 to x
std::cout << x;
may print anything. You could see 4 on the console, or 42, but "Hey, you broke the rules, const cannot be modified" would be a valid output as well. No matter how you modify x, the code has undefined behavior. Compilers are not required to issue an error or warning; the code is simply invalid, and compilers are not mandated to do anything meaningful with it.
The only situation where you are allowed to cast constness away is when the object actually is not const. Sounds weird, no? See this example:
const int x = 42;
int y = 100;
void foo(const int& a) {
const_cast<int&>(a) = 4;
}
int main() {
    foo(x); // undefined behavior !!
    foo(y); // OK !!
}
The solution to your problem is to write testable code. For example:
int GetValue(int value = my_const_value) {
// In real world, lets say below line of code is a complex logic that needs to be tested by a unit test
return (value * 5) / 25;
}
If you like to keep the original signature you can also wrap it (as suggested in a comment):
int GetValue_impl(int value) {
return (value * 5) / 25;
}
int GetValue() {
return GetValue_impl(my_const_value);
}
Now you can test GetValue_impl while GetValue uses the constant. However, I really wonder why you want to test a case that cannot happen.
I know you are looking for a way to cast the const away, but I would probably go a different route.
You say in your comment:
Well. I have given my reason. I am testing the logic in GetValue. I have declared my_const_value as a const but that can be changed from 5 to something else in future when someone changes the value in future.
If a variable is const and participates in an expression within a function without being passed to it, then changes to it normally shouldn't happen on a regular basis and should not be expected. If you consider myheader::my_const_value a config value that might change at any time, then it should be passed to the function in which it is used.
So from the perspective of testing I agree with what idclev 463035818 suggests in the answer: split the function into two parts, one testable part with a parameter and one that uses the constant.
One test checks how the code should currently behave (i.e. which constant it should have):
TEST(MyHeaderTest, Testing_Something1) {
EXPECT_EQ(5, myheader::my_const_value);
EXPECT_EQ(1, myheader::GetValue());
}
And one for the generic test:
TEST(MyHeaderTest, Testing_Something2) {
EXPECT_EQ(1, myheader::GetValue_impl(5));
EXPECT_EQ(5, myheader::GetValue_impl(25));
// …
}
That way you have a generic test that the calculation used by GetValue works, and one that checks that, for the current version of your code, the value of myheader::GetValue() is the expected one.
Just try #define const /* myComment */ above the stubbed function; the macro defines const away, so the declaration below it is no longer const.
You asked for hackish, here it is:
// define the string const to be replaced by empty string
#define const
#include "header.hh"
#undef const
...tests...
The issues I see are that 1. all the const modifiers fall out, transitively in every other header included after the #define, which may or may not be a problem; and 2. it is kind of intrusive: as others mentioned, the compiler treats constants in a particular way, so your test is not quite testing the same code that will run in your real use case.
There is a similar trick that starts with #define private public before including the header, for accessing private fields of another class from a library. The nice thing is that it does not even break when you link against the library.
Note that none of these things are recommended; they are hackish, but leveraging the preprocessor to bias included files is fine.
Less hackish is to have a macro TEST and to put #ifdef TEST /* non-const decl */ #else /* const decl */ #endif in your header. Then you simply #define TEST before including the header in the tests, which is cleaner than redefining keywords (see the sketch below).
I have code that does something like this:
//datareader.cpp
if (populateFoo(dataReader, foo)) {
    // Nothing happens here.
}
else {
    // Do other things with the reader.
}
//foo.cpp
bool populateFoo(const DataReader &dataReader, Foo &foo)
{
if (dataReader.name() == "bar") {
foo.bar() = dataReader.value();
return true;
} // More similar checks.
return false;
}
I feel like it's misleading to have an if statement with conditions that have side-effects. However, I can't move the body of the populateFoo function into datareader.cpp. Is there a good way to restructure this code so we get rid of this misleading if statement, without duplicating the body of populateFoo()?
Do you have a strong hatred of local variables? If not:
bool populated = populateFoo(dataReader, foo);
if (populated)
{
// Do things
}
else
{
// Do other things
}
The compiler will almost certainly emit exactly the same code, so performance shouldn't be an issue. It's a readability/style choice, ultimately.
The obvious solution seems like storing the result of populateFoo and using it for determining whether populateFoo was successful:
bool fooPopulated = populateFoo(dataReader, foo);
if (!fooPopulated) {
    // Do other things with the reader.
}
However, I don't find the original difficult to understand, and it's a fairly well-established practice to both modify values and test the success of the modification in the same line. That said, I would change it to:
if (!populateFoo(dataReader, foo)) {
    // Do other things with the reader.
}
How about:
if (!populateFoo(dataReader, foo)) {
// Do other things with the reader.
}
Edit: The title of the question suggests it is the fact that the if statement is empty that bothers you, but the body suggests the side effect in the condition is the real concern. I think it's fine in C++ to have conditions in if statements that have side effects, but that won't solve your issue if you want to avoid them.
Having conditions with side-effects is quite common - think about calling a C API and checking its return code for errors.
Usually, as long as it's not buried in a complicated expression where a casual bystander might miss it, I don't bother with particular refactorings; but if you want to make it extra clear (or to document what the return value means, which is particularly useful for booleans), just assign it to a variable before the branch, or even add a short comment.
You could split the populateFoo function into two, a const check function (shouldPopulateFoo) that checks the condition, and another non-const function that performs the actual modifications (populateFoo):
//datareader.cpp
if (shouldPopulateFoo(dataReader)) {
populateFoo(dataReader, foo);
}
else {
// Do other things with the reader.
}
//foo.cpp
bool shouldPopulateFoo(const DataReader &dataReader) /* const */
{
return (dataReader.name() == "bar");
}
void populateFoo(const DataReader &dataReader, Foo &foo) /* non-const */
{
assert(shouldPopulateFoo(dataReader));
foo.bar() = dataReader.value();
}
Note that when using these functions as class methods, you could declare the check function const.
How about:
if (populateFoo(dataReader, foo) == false) {
// Do other things with the reader.
}
It is very readable; I often see code where the value returned from a function is a signal to the caller for branching. The else block with an empty if block bothers me more than the side effects inside the if (). There is a sense of reversed logic, which is always less readable.
For example, I have to ensure that a certain function in a real-time system completes in 20 ms or less. I can simply measure the time at the beginning of the function and at the end of it, then assert that the difference is satisfactory. And I do this in C++.
But this looks pretty much like a contract, except that the time check is a post-condition and the time measurement at the beginning is not a condition at all. It would be nice to express it as a contract, not only for the notation but for build reasons as well.
So I wonder: can I use D's contract capabilities to check a function's execution time?
Sort of, but not really well. The reason is that variables declared in the in{} block are not visible in the out{} block. (There has been some discussion about changing this, so it can check pre vs. post state by making a copy in the in block, but nothing has been implemented.)
So, this will not work:
void foo()
in { auto before = Clock.currTime(); }
out { assert(Clock.currTime - before < dur!"msecs"(20)); }
body { ... }
The variable from in won't carry over to out, giving you an undefined identifier error. But I say "sort of" because there is a potential workaround:
import std.datetime;
struct Foo {
SysTime test_before;
void test()
in {
test_before = Clock.currTime();
}
out {
assert(Clock.currTime - test_before < dur!"msecs"(20));
}
body {
}
}
Declaring the variable as a regular member of the struct. But this would mean a lot of otherwise useless variables for each function, wouldn't work with recursion, and just pollutes the member namespace.
Part of me is thinking you could do your own stack off to the side and have in{} push the time, then out{} pops it and checks.... but a quick test shows that it is liable to break once inheritance gets involved. If you repeat the in{} block each time, it might work. But this strikes me as awfully brittle. The rule with contract inheritance is ALL of the out{} blocks of the inheritance tree need to pass, but only any ONE of the in{} blocks needs to pass. So if you had a different in{} down the chain, it might forget to push the time, and then when out tries to pop it, your stack would underflow.
// just for experimenting.....
SysTime[] timeStack; // WARNING: use a real stack here in production, a plain array will waste a *lot* of time reallocating as you push and pop on to it
class Foo {
void test()
in {
timeStack ~= Clock.currTime();
}
out {
auto start = timeStack[$-1];
timeStack = timeStack[0 .. $-1];
assert(Clock.currTime - start < dur!"msecs"(20));
import std.stdio;
// making sure the stack length is still sane
writeln("stack length ", timeStack.length);
}
body { }
}
class Bar : Foo {
override void test()
in {
// had to repeat the in block on the child class for this to work at all
timeStack ~= Clock.currTime();
}
body {
import core.thread;
Thread.sleep(10.msecs); // bump that up to force a failure, ensuring the test is actually run
}
}
That seems to work, but I think it is more trouble than it's worth. I expect it would break somehow as the program got bigger, and if your test breaks your program, that kinda defeats the purpose.
I'd probably do it as a unittest{}, if checking with explicit tests fulfills your requirements (however, note that contracts, like most asserts in D, are removed if you compile with the -release switch, so they won't actually be checked in release versions either; if you need it to reliably fail, throw an exception rather than assert, since that will always work, in both debug and release modes).
Or you could do it with an assert in the function or a helper struct or whatever, similar to C++. I'd use a scope guard:
void test() {
auto before = Clock.currTime();
scope(exit) assert(Clock.currTime - before < dur!"msecs"(20)); // or import std.exception; and use enforce instead of assert if you want it in release builds too
/* write the rest of your function */
}
Of course, here you'll have to copy it in the subclasses too, but it seems like you'd have to do that with the in{} blocks anyway, so meh, and at least the before variable is local.
Bottom line, I'd say you're probably best off doing it more or less the same way you have been in C++.
So I ran across this (IMHO) very nice idea of using a composite structure of a return value and an exception - Expected<T>. It overcomes many shortcomings of the traditional methods of error handling (exceptions, error codes).
See Andrei Alexandrescu's talk (Systematic Error Handling in C++) and its slides.
Exceptions and error codes have basically the same usage scenarios, both for functions that return something and for those that don't. Expected<T>, on the other hand, seems to be targeted only at functions that return values.
So, my questions are:
Have any of you tried Expected<T> in practice?
How would you apply this idiom to functions returning nothing (that is, void functions)?
Update:
I guess I should clarify my question. The Expected<void> specialization makes sense, but I'm more interested in how it would be used - the consistent usage idiom. The implementation itself is secondary (and easy).
For example, Alexandrescu gives this example (a bit edited):
string s = readline();
auto x = parseInt(s).get(); // throw on error
auto y = parseInt(s); // won’t throw
if (!y.valid()) {
// ...
}
This code is "clean" in a way that it just flows naturally. We need the value - we get it. However, with expected<void> one would have to capture the returned variable and perform some operation on it (like .throwIfError() or something), which is not as elegant. And obviously, .get() doesn't make sense with void.
So, what would your code look like if you had another function, say toUpper(s), which modifies the string in-place and has no return value?
Have any of you tried Expected<T> in practice?
It's quite natural, I used it even before I saw this talk.
How would you apply this idiom to functions returning nothing (that is, void functions)?
The form presented in the slides has some subtle implications:
The exception is bound to the value.
It's ok to handle the exception as you wish.
If the value is ignored for some reason, the exception is suppressed.
This does not hold if you have expected<void>, because since nobody is interested in the void value, the exception is always ignored. I would enforce this the same way I would enforce reading from expected<T> in Alexandrescu's class: with assertions and an explicit suppress member function. Rethrowing the exception from the destructor is not allowed for good reasons, so it has to be done with assertions.
#include <atomic>
#include <cassert>
#include <exception>
#include <stdexcept>
#include <utility>

template <typename T> struct expected;
#ifdef NDEBUG // no asserts
template <> class expected<void> {
std::exception_ptr spam;
public:
template <typename E>
expected(E const& e) : spam(std::make_exception_ptr(e)) {}
expected(expected&& o) : spam(std::move(o.spam)) {}
expected() : spam() {}
bool valid() const { return !spam; }
void get() const { if (!valid()) std::rethrow_exception(spam); }
void suppress() {}
};
#else // with asserts, check if return value is checked
// if all assertions do succeed, the other code is also correct
// note: do NOT write "assert(expected.valid());"
template <> class expected<void> {
std::exception_ptr spam;
mutable std::atomic_bool read; // threadsafe
public:
template <typename E>
expected(E const& e) : spam(std::make_exception_ptr(e)), read(false) {}
expected(expected&& o) : spam(std::move(o.spam)), read(o.read.load()) {}
expected() : spam(), read(false) {}
bool valid() const { read=true; return !spam; }
void get() const { if (!valid()) std::rethrow_exception(spam); }
void suppress() { read=true; }
~expected() { assert(read); }
};
#endif
expected<void> calculate(int i)
{
if (!i) return std::invalid_argument("i must be non-null");
return {};
}
int main()
{
calculate(0).suppress(); // suppressing must be explicit
if (!calculate(1).valid())
return 1;
calculate(5); // assert fails
}
Even though it might appear new for someone focused solely on C-ish languages, to those of us who had a taste of languages supporting sum-types, it's not.
For example, in Haskell you have:
data Maybe a = Nothing | Just a
data Either a b = Left a | Right b
Where the | reads "or" and the leading word (Nothing, Just, Left, Right) is just a "tag". Essentially, sum types are just discriminated unions.
Here, you would have Expected<T> be something like: Either T Exception with a specialization for Expected<void> which is akin to Maybe Exception.
Like Matthieu M. said, this is something relatively new to C++, but nothing new for many functional languages.
I would like to add my 2 cents here: part of the difficulties and differences can be found, in my opinion, in the "procedural vs. functional" approach. I would like to use Scala (because I am familiar with both Scala and C++, and it has a facility, Option, which is close to Expected<T>) to illustrate this distinction.
In Scala you have Option[T], which is either Some(t) or None.
In particular, it is also possible to have Option[Unit], which is morally equivalent to Expected<void>.
In Scala, the usage pattern is very similar and built around two functions: isDefined() and get(). But it also has a map() function.
I like to think of "map" as the functional equivalent of "isDefined + get":
if (opt.isDefined)
opt.get.doSomething
becomes
val res = opt.map(t => t.doSomething)
"propagating" the option to the result
I think that here, in this functional style of using and composing options, lies the answer to your question:
So, what would your code look like if you had another function, say toUpper(s), which modifies the string in-place and has no return value?
Personally, I would NOT modify the string in place, or at least I would not return nothing. I see Expected<T> as a "functional" concept that needs a functional pattern to work well: toUpper(s) would need to either return a new string, or return the string itself after modification:
auto res = toUpper(s);
res.get(); ...
or, with a Scala-like map
val finalS = toUpper(s).map(upperS => upperS.someOtherManipulation)
if you don't want to follow a functional route, you can just use isDefined/valid and write your code in a more procedural way:
auto res = toUpper(s);
if (res.valid())
....
If you follow this route (maybe because you need to), there is a "void vs. unit" point to make: historically, void was not considered a type but "no type" (void foo() was considered akin to a Pascal procedure). Unit (as used in functional languages) is seen more as a type meaning "a computation". So returning an Option[Unit] makes more sense, being seen as "a computation that optionally did something". And in Expected<void>, void assumes a similar meaning: a computation that, when it works as intended (when there are no exceptional cases), just ends (returning nothing). At least, IMO!
So, using Expected<void> or Option[Unit] could be seen as a computation that may or may not have produced a result. Chaining such computations proves difficult:
auto c1 = doSomething(s); //do something on s, either succeed or fail
if (c1.valid()) {
auto c2 = doSomethingElse(s); //do something on s, either succeed or fail
if (c2.valid()) {
...
Not very clean.
Map in Scala makes it a little bit cleaner
doSomething(s) //do something on s, either succeed or fail
.map(_ => doSomethingElse(s) //do something on s, either succeed or fail
.map(_ => ...)
Which is better, but still far from ideal. Here, the Maybe monad clearly wins... but that's another story..
I've been pondering the same question since I watched this video, and so far I haven't found any convincing argument for having Expected<void>; to me it looks ridiculous and works against clarity and cleanness. I have come up with the following so far:
Expected<T> is good since it holds either a value or an exception, so we are not forced to use try{}catch() around every function that can throw. So use it for every throwing function which has a return value.
Every function that doesn't throw should be marked with noexcept. Every.
Every function that returns nothing and is not marked noexcept should be wrapped in try{}catch{} at the call site.
If those statements hold, then we have self-documented, easy-to-use interfaces (sketched below), with only one drawback: we don't know what exceptions could be thrown without peeking into implementation details.
Expected<T> imposes some overhead on the code, since if an exception occurs in the guts of your class implementation (e.g. deep inside private methods) then you have to catch it in your interface method and return an Expected. While I think that is quite tolerable for methods which have a notion of returning something, I believe it brings mess and clutter to methods which by design have no return value. Besides, to me it is quite unnatural to return a thing from something that is not supposed to return anything.
It should be handled with compiler diagnostics. Many compilers already emit warning diagnostics based on expected usages of certain standard library constructs. They should issue a warning for ignoring an expected<void>.