How to test command-line argument processing - unit-testing

I have a program that receives command line arguments (in my case, it's a Scala program that uses Argot). A simplified use case would be something like:
sbt "run -n 300 -n 50"
And imagine that the app should only accept (and print) numbers between 0 and 100, meaning it should discard 300 and print only 50.
What's the best approach to testing it? Is unit testing appropriate? Instead of processing the arguments in main, maybe I should refactor the logic into a function and test that function?

If you want to test the very act of passing arguments to your program, then you'll need to perform a black-box integration/system test (not a unit test):
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. See this.
Otherwise, do as Chris said and factor out the code to test the underlying logic without testing the act of receiving an argument.
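For example, here is a minimal sketch of that refactoring. It is written in Python purely to illustrate the shape (the names filter_valid and main are made up, and the same split applies to the Scala/Argot program): the range check lives in a pure function, so an ordinary unit test can exercise it without ever touching the command line.
def filter_valid(numbers, low=0, high=100):
    """Keep only the numbers inside the accepted range."""
    return [n for n in numbers if low <= n <= high]

def main(argv):
    # The entry point stays thin: convert the raw arguments and delegate.
    numbers = [int(arg) for arg in argv]
    for n in filter_valid(numbers):
        print(n)

# The unit test targets the pure function, not the act of passing arguments.
def test_filter_valid_discards_out_of_range_numbers():
    assert filter_valid([300, 50]) == [50]
Verifying that sbt "run -n 300 -n 50" actually delivers those values to main is then the job of the black-box integration/system test mentioned above.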

Related

How to print C++ unit test result during runtime

I have been tasked with writing unit tests for some embedded software, and since the tests are being written after the software, the only viable option is to run the tests on the board and print the results to the serial output.
I have written a function that accepts a Boolean statement and logs the result of the test, but I am wondering whether functions such as EXPECT_EQ (from GoogleTest) or CHECK_EQUAL (from CppUTest) return the result as a string or as something else that can be logged.
Does either of these functions (or perhaps a function from another testing framework) return the result of the test so that it can be printed?
Essentially, when a test fails, I want to be able to print something like "FAILED: expected x, but got y".

How to assert which coroutine dispatcher is used in unit test?

I have a class to unit test that has two dispatchers injected: one that is Dispatchers.Main and one that is Dispatchers.IO. My code switches between the two via withContext() at some point and then switches back via .collect(). I would like my unit test to assert which dispatcher the code is supposed to be running on. My test code currently just uses two instances of TestCoroutineDispatcher.
Does anyone know how I can do this?
One workaround I have tried is modifying my production code to use launch(mainDispatcher + CoroutineName("someUniqueName")) { ... }, so that my unit test can assert that Thread.currentThread().name contains that substring, but modifying my production code just for that feels hacky and far from ideal.

Running test case for a duration of time in Robot Framework

I want to run a Robot test for a set duration, say 1 hour, regardless of whether the execution of all test cases in the test suite has completed. It should keep repeating the test cases until the given time is reached.
I tried to use --prerunmodifier and wrote my own module using robot.api and robot.running.context, overriding the existing end_suite() method, but it was not successful. :(
Try the 'Repeat Keyword' keyword. It takes as an argument how long it should repeat the given keyword, but in this case all your test cases have to go into one keyword.
Use 'Run Keyword And Ignore Error' inside it so that errors are ignored.
E.g.:
Repeat Keyword    2h    Keyword With All Test Cases
A second option would be writing a listener, which has similar functionality to a pre-run modifier but is executed during the tests, not before them.
As per the Robot Framework user guide, the test suite will be executed for a maximum time of 120 minutes, i.e. 2 hours. We can override this timeout by explicitly setting the test execution time in the test file's Settings table, as below:
*** Settings ***
Test Timeout    60 minutes
Furthermore, you can set a test-case-specific timeout using the [Timeout] option, as below:
*** Test Cases ***
Sample Test Case
    [Timeout]    5 minutes
    Do some testing
    Validate some results
Feel free to try and ask further questions.

Two assert in the same unit test method, how to make?

I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So, how do I make it explicit that the read failed because of the open? Should I test the open inside the read test?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on the open functionality: if the latter is broken, there's no reason to test the former, as we already know the result. Moreover, reporting a read failure will only add irrelevant noise to your test results.
What can (and should be) reported for read in this case is test skipped or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependant on testOne. And that's what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on relevant failing tests. This is why PHPUnit skips the execution of a test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file failed to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
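As a rough sketch of both suggestions (the read_numbers function is made up for illustration, and Python's tempfile stands in for whatever file-creation facility your language offers): the test creates its own input file, and a failed open surfaces as a clear exception instead of a confusing read failure.
import os
import tempfile
import unittest

def read_numbers(path):
    # open() raises an exception itself if the file cannot be opened,
    # so a failed open is reported clearly rather than as a broken read.
    with open(path) as handle:
        return [int(line) for line in handle]

class ReadNumbersTest(unittest.TestCase):
    def test_reads_numbers_from_a_file_created_by_the_test(self):
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
            tmp.write("1\n2\n3\n")
        self.addCleanup(os.remove, tmp.name)
        self.assertEqual(read_numbers(tmp.name), [1, 2, 3])

if __name__ == "__main__":
    unittest.main()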
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
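As a sketch of that idea in Python with unittest.mock (the FileReport class and its opener parameter are hypothetical, just to show the substitution; any mock framework in your language works the same way): the mock stands in for the "open the file" dependency, so the test exercises only the reading logic.
import unittest
from unittest import mock

class FileReport:
    """Hypothetical class under test: it receives its file opener as a dependency."""
    def __init__(self, opener):
        self._opener = opener

    def count_lines(self, path):
        with self._opener(path) as handle:
            return len(handle.read().splitlines())

class FileReportTest(unittest.TestCase):
    def test_counting_lines_without_touching_the_filesystem(self):
        # The mock always "opens" successfully and returns canned contents,
        # so a missing or unreadable file can no longer break this test.
        fake_opener = mock.mock_open(read_data="first\nsecond\n")
        report = FileReport(fake_opener)
        self.assertEqual(report.count_lines("ignored.txt"), 2)

if __name__ == "__main__":
    unittest.main()
Because the opener is injected, the test never depends on the state of the filesystem, which is exactly the isolation the earlier answer asks for.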

Python 3 unittest simulate user input

How can I simulate user input in the middle of a function called by a unit test (using Python 3's unittest)? For example, I have a function foo() whose output I'm testing. Inside foo(), it asks for user input:
x = input(msg)
And the output is based on the input:
print("input: {0}".format(x))
I would like my unit test to run foo(), enter an input and compare the result with the expected result.
I have this problem regularly when I'm trying to test code that brings up dialogs for user input, and the same solution should work for both. You need to provide, in your test scope, a new function bound to the name input with the same signature as the standard input function, which simply returns a test value without actually prompting the user. Depending on how your tests and code are set up, this injection can be done in a number of ways, so I'll leave that as an exercise for the reader, but your replacement function will be something simple like:
def my_test_input(message):
    return 7
Obviously you could also switch on the contents of message if that were relevant, and return the datatype of your choice. You can also do something more flexible and general that allows reusing the same replacement function in a number of situations:
def my_test_input(message, retval):
    return retval
and then you would inject a partial function into input:
import functools
test_input_a = functools.partial(my_test_input, retval=7)
test_input_b = functools.partial(my_test_input, retval="Foo")
This leaves test_input_a and test_input_b as functions that take a single message argument, with the retval argument already bound.
Having difficulty testing a component because of its dependencies is usually a sign of bad design. Your foo function should not depend on the global input function, but rather on a parameter. Then, when you run the program in a production environment, you wire things up so that foo is called with the result of what input returns. So foo should read:
def foo(input):
    # do something with input
    print("input: {0}".format(input))
This will make testing much easier. And by the way, if your tests have IO dependencies, they're no longer unit tests, but rather functional tests. Have a look at Misko Hevery's blog for more insights into testing.
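A short sketch of that wiring (the names are illustrative, and this variation goes one step further by returning the string instead of printing it, so the test needs no IO at all): the real input() is only touched at the outermost layer, and the unit test simply passes a canned value.
def foo(value):
    # Pure logic: build the message from whatever value was passed in.
    return "input: {0}".format(value)

def main():
    # Production wiring: the real input() and print() live only here, at the edge.
    print(foo(input("Enter something: ")))

def test_foo_formats_the_given_value():
    assert foo("42") == "input: 42"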
I think the best approach is to wrap the input function in a custom function and mock the latter, just as described here: python mocking raw input in unittests
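If you go that route, here is a minimal sketch with unittest.mock. It patches the built-in input directly to keep the example short (patching your own wrapper function works the same way), and it captures stdout so the printed result can be compared with the expected output:
import io
import unittest
from unittest import mock

def foo():
    x = input("Enter something: ")
    print("input: {0}".format(x))

class FooInputTest(unittest.TestCase):
    @mock.patch("builtins.input", return_value="hello")
    def test_foo_echoes_the_simulated_user_input(self, fake_input):
        # Capture stdout so the printed output can be asserted on.
        with mock.patch("sys.stdout", new=io.StringIO()) as fake_out:
            foo()
        self.assertEqual(fake_out.getvalue(), "input: hello\n")

if __name__ == "__main__":
    unittest.main()
Asserting on the captured output gives exactly the "run foo(), provide an input, compare the result" flow the question describes.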