I have been tasked with writing unit tests for some embedded software. Since the tests are being written after the software, the only viable option is to run the tests on the board and print the results to the serial output.
I have written a function that accepts a Boolean expression and logs the result of the test, but I am wondering whether functions such as EXPECT_EQ (from GoogleTest) or CHECK_EQUAL (from CppUTest) return the result as a string or as something else that can be logged.
Does either of these functions (or perhaps one from another testing framework) return the result of the test so that it can be printed?
Essentially, when a test fails, I want to be able to print something like "FAILED: expected x, but got y".
When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is, when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail until I write actual test code. Anyway.
So my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test can "pass" even when it contains no test code.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is that any std::cout calls I make appear out of sync with the test results printed at the end.
Is there a verbose option for Google Test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD (Test-Driven Development): https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => test passes
Rinse and repeat: express each requirement as a test that initially fails, then write code to make that test pass.
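The same cycle in miniature, sketched here with Python's unittest purely for illustration (the names add and AddTest are made up; the idea is identical with GoogleTest fixtures). Running the test before the function exists fails loudly, which is also how you know the assertion really executes:

import unittest

class AddTest(unittest.TestCase):
    def test_add_two_numbers(self):
        # written first: this fails until add() exists and behaves correctly
        self.assertEqual(add(2, 3), 5)

# written second, with just enough code to make the failing test pass
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()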
I have a program that receives command line arguments (in my case, it's a Scala program that uses Argot). A simplified use case would be something like:
sbt "run -n 300 -n 50"
And imagine that the app should only accept (and print) numbers between 0 and 100, meaning it should discard 300 and print only 50.
What's the best approach to testing this? Is unit testing appropriate here? Instead of processing the arguments in main, should I refactor the logic into a separate function and test that function?
If you want to test the very act of passing arguments to your program, then you'll be required to perform a black-box integration/system test (not a unit test):
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. See this.
Otherwise, do as Chris said and factor out the code to test the underlying logic without testing the act of receiving an argument.
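To make the "factor out" suggestion concrete, here is a small sketch (in Python rather than Scala, purely to show the shape; accepted_numbers is a made-up name). The filtering logic lives in a plain function that a unit test can exercise directly, while the act of receiving real command-line arguments is left to an integration test:

import unittest

def accepted_numbers(raw_args):
    # pure logic, easy to unit test: keep only values in the range [0, 100]
    return [n for n in (int(a) for a in raw_args) if 0 <= n <= 100]

class AcceptedNumbersTest(unittest.TestCase):
    def test_out_of_range_values_are_discarded(self):
        self.assertEqual(accepted_numbers(["300", "50"]), [50])

if __name__ == "__main__":
    unittest.main()

In production, main (or the Argot callback) would simply call accepted_numbers with the parsed values and print the result.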
I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So how do I make it explicit that the read test failed because of the open? Do I test the open inside the read test?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on the open functionality: if the latter is broken, there's no reason to test the former, as we already know the result. Moreover, reporting the read failure would only add irrelevant noise to your test results.
What can (and should) be reported for read in this case is "test skipped" or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependent on testOne. And this is what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on
relevant failing tests. This is why PHPUnit skips the execution of a
test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file fails to open; the error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
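A small sketch of both points in Python (the FileReader class here is a made-up stand-in for the class under test): open raises if the path is wrong, so a failing read test carries a clear error message, and the file itself is created inside the test so there is no dependency on a pre-existing file:

import os
import tempfile
import unittest

class FileReader:
    # hypothetical stand-in for the class under test
    def __init__(self, path):
        self.path = path
        self.handle = None

    def open(self):
        # raises FileNotFoundError (with the path) if the file cannot be opened
        self.handle = open(self.path, "r")

    def read(self):
        return self.handle.read()

class FileReaderTest(unittest.TestCase):
    def setUp(self):
        # create the file in the test itself, so the test controls its contents
        fd, self.path = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.write("hello")
        self.reader = FileReader(self.path)

    def tearDown(self):
        if self.reader.handle:
            self.reader.handle.close()
        os.remove(self.path)

    def test_read_returns_file_contents(self):
        self.reader.open()
        self.assertEqual(self.reader.read(), "hello")

if __name__ == "__main__":
    unittest.main()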
Generally speaking, you wouldn't find yourself unit testing the ability to read from a file. You will usually end up using a file-manipulation library of some kind, and you can usually safely assume that the maintainers of that library already have the appropriate unit tests in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
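As a rough illustration of that idea in Python (Parser and its reader dependency are made-up names), the dependency is replaced by a mock that always behaves in a defined manner, so the test focuses squarely on the second object:

import unittest
from unittest import mock

class Parser:
    # hypothetical class under test: depends on a reader object, not on a real file
    def __init__(self, reader):
        self.reader = reader

    def first_word(self):
        return self.reader.read().split()[0]

class ParserTest(unittest.TestCase):
    def test_first_word(self):
        fake_reader = mock.Mock()
        fake_reader.read.return_value = "hello world"  # canned, always succeeds
        self.assertEqual(Parser(fake_reader).first_word(), "hello")

if __name__ == "__main__":
    unittest.main()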
I'm testing a set of classes and my unit tests so far are along the lines
1. read in some data from file X
2. create new object Y
3. sanity assert some basic properties of Y
4. assert advanced properties of Y
There are about 30 of these tests, which differ in the input data and in which properties of Y are checked. However, in the project's current state, the code sometimes crashes at #2 or already fails at #3. It should never crash at #1. For the time being, I'm accepting all failures at #4.
I'd like, for example, to see a list of unit tests that fail at #3, while for now ignoring all those that only fail at #4. What's the standard approach/terminology for this? I'm using JUnit for Java with Eclipse.
You need reporting/filtering on your unit test results.
JUnit itself wants your tests to pass, fail, or not run - nothing in between.
However, it doesn't care much about how those results are tied to passing/failing the build, or reported.
Using tools like Maven (the Surefire plugin) and some custom code, you can categorize your tests to distinguish between 'hard failures', 'bad, but let's go on', etc. But that's build validation or reporting based on test results, rather than testing itself.
(Currently, our build process relies on annotations such as @Category(WorkInProgress.class) on each test method to decide what's critical and what's not.)
One thing I could think of would be to create assert methods that check a system property to decide whether to execute the assertion:
public static void assertTrue(boolean assertion, int assertionLevel){
    // read the configured assertion level from a system property (left abstract here)
    int configuredLevel = getSystemProperty(...);
    // only run the assertion if the configured level covers it
    if (configuredLevel >= assertionLevel){
        Assert.assertTrue(assertion);
    }
}
How can I simulate user input in the middle of a function called by a unit test (using Python 3's unittest)? For example, I have a function foo() whose output I'm testing. In the foo() function, it asks for user input:
x = input(msg)
And the output is based on the input:
print("input: {0}".format(x))
I would like my unit test to run foo(), enter an input and compare the result with the expected result.
I have this problem regularly when I'm trying to test code that brings up dialogs for user input, and the same solution should work for both. You need to provide a new function bound to the name input in your test scope, with the same signature as the standard input function, that simply returns a test value without actually prompting the user. Depending on how your tests and code are set up, this injection can be done in a number of ways, so I'll leave that as an exercise for the reader, but your replacement method will be something simple like:
def my_test_input(message):
    return 7
Obviously you could also switch on the contents of message if that were relevant, and of course return the datatype of your choice. You can also do something more flexible and general that allows the same replacement method to be reused in a number of situations:
def my_test_input(message, retval):
    # message comes first so that retval can be pre-bound with functools.partial
    return retval
and then you would inject a partial function into input:
import functools
test_input_a = functools.partial(my_test_input, retval=7)
test_input_b = functools.partial(my_test_input, retval="Foo")
This leaves test_input_a and test_input_b as functions that take a single message argument, with the retval argument already bound.
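As one concrete (and deliberately simplistic) way to do the injection, assuming the code under test calls the built-in input, a test can swap builtins.input for the fake and restore it afterwards. Everything below (foo, the test class) is made up for illustration:

import builtins
import functools
import unittest

def my_test_input(message, retval):
    # fake input(): ignore the prompt and return a canned value
    return retval

def foo():
    # stand-in for the function under test, which prompts via input()
    x = input("enter a value: ")
    return "input: {0}".format(x)

class FooTest(unittest.TestCase):
    def test_foo_uses_injected_input(self):
        fake_input = functools.partial(my_test_input, retval="7")
        original_input = builtins.input
        builtins.input = fake_input          # inject the fake for this test
        try:
            result = foo()
        finally:
            builtins.input = original_input  # always restore the real input()
        self.assertEqual(result, "input: 7")

if __name__ == "__main__":
    unittest.main()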
Having difficulty testing some component because of its dependencies is usually a sign of bad design. Your foo function should not depend on the global input function, but rather on a parameter. Then, when you run the program in a production environment, you wire things up so that foo is called with the result of what input returns. So foo should read:
def foo(input):
    # do something with input
This will make testing much easier. And by the way, if your tests have I/O dependencies they're no longer unit tests, but rather functional tests. Have a look at Misko Hevery's blog for more insights into testing.
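A minimal sketch of that dependency-injection style (the body of foo and the parameter name prompt are invented here; the parameter is renamed from input to prompt only to avoid shadowing the built-in):

import unittest

def foo(prompt):
    # 'prompt' is injected: pass the built-in input in production,
    # and any callable returning a canned value in tests
    x = prompt("enter a value: ")
    return "input: {0}".format(x)

class FooTest(unittest.TestCase):
    def test_foo_with_canned_prompt(self):
        result = foo(lambda message: "42")  # fake prompt, never blocks on a user
        self.assertEqual(result, "input: 42")

if __name__ == "__main__":
    unittest.main()

In production the wiring is simply foo(input).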
I think the best approach is to wrap the input function in a custom function and mock the latter, just as described here: python mocking raw input in unittests
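On Python 3 you can also patch the built-in directly with unittest.mock, with or without a wrapper; a minimal sketch (ask_number is a made-up function under test):

import unittest
from unittest import mock

def ask_number():
    # hypothetical function under test
    return int(input("enter a number: "))

class AskNumberTest(unittest.TestCase):
    def test_ask_number_parses_the_typed_value(self):
        # replace the built-in input so the test never blocks on a real prompt
        with mock.patch("builtins.input", return_value="5"):
            self.assertEqual(ask_number(), 5)

if __name__ == "__main__":
    unittest.main()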