Python 3 unittest simulate user input - unit-testing

How can I simulate user input in the middle of a function called by a unit test (using Python 3's unittest)? For example, I have a function foo() whose output I'm testing. In the foo() function, it asks for user input:
x = input(msg)
And the output is based on the input:
print("input: {0}".format(x))
I would like my unit test to run foo(), enter an input and compare the result with the expected result.

I have this problem regularly when I'm trying to test code that brings up dialogs for user input, and the same solution should work for both. You need to provide a new function bound to the name input in your test scope, with the same signature as the standard input function, which simply returns a test value without actually prompting the user. Depending on how your tests and code are set up, this injection can be done in a number of ways, so I'll leave that as an exercise for the reader, but your replacement method will be something simple like:
def my_test_input(message):
    return 7
Obviously you could also switch on the contents of message if that were relevant, and return the datatype of your choice. You can also do something more flexible and general that allows reusing the same replacement method in a number of situations:
def my_test_input(message, retval):
    return retval
and then you would inject a partial function into input:
import functools
test_input_a = functools.partial(my_test_input, retval=7)
test_input_b = functools.partial(my_test_input, retval="Foo")
This leaves test_input_a and test_input_b as functions that take a single message argument, with the retval argument already bound.

Having difficulties testing some component because of its dependencies is usually a sign of bad design. Your foo function should not depend on the global input function, but rather on a parameter. Then, when you run the program in a production environment, you wire things up so that foo is called with the result of what input returns. So foo should read:
def foo(input):
    # do something with input
This will make testing much easier. And by the way, if your tests have IO dependencies they're no longer unit tests, but rather functional tests. Have a look at Misko Hevery's blog for more insights into testing.

I think the best approach is to wrap the input function in a custom function and mock the latter, just as described here: python mocking raw input in unittests
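A minimal sketch of the mocking approach, using unittest.mock from the standard library (the foo body and its prompt here are assumptions for illustration, not code from the question):

```python
import unittest
from unittest import mock

def foo():
    """Function under test: prompt the user and echo the input."""
    x = input("Enter a value: ")
    return "input: {0}".format(x)

class FooTest(unittest.TestCase):
    def test_foo_echoes_input(self):
        # Replace the built-in input() for the duration of this test only;
        # foo() receives "7" without ever blocking on a real prompt.
        with mock.patch("builtins.input", return_value="7"):
            self.assertEqual(foo(), "input: 7")
```

Run it with python -m unittest; the patch is automatically undone when the with block exits, so other tests still see the real input.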

Related

Is there a way to pass in complex arguments into Mocked Dart Services using Mockito?

I was looking at the documentation at https://pub.dartlang.org/packages/mockito and was trying to understand it better. It seems that in the examples the function stubs were accepting strings, but I was confused as to how I would implement my mocked services.
I was curious how I would do it. The services I have are pretty simple and straightforward.
class Group {}
class GroupService {}
class MockGroupService extends Mock implements GroupService {}
final mockProviders = [new Provider(MockGroupService, useExisting: GroupService)];
So you can see I am using AngularDart.
I was creating a sample group in my test file.
group("service tests", () {
  MockGroupService _mock;
  testBed.addProviders([mockProviders]);
  setUp(() async {
    fixture = await testBed.create();
    _mock = new MockGroupService();
    // This is where I was going to create some stubs for the methods.
    when(_mock.add()).thenReturn(() {
      return null; // return the object
    });
    // Create additional when statements for edit, delete, etc.
  });
});
So what I was thinking is that there would be an argument (or two) passed into add. How would I properly code that in the when statement, and how do those arguments reflect in the then statement?
Essentially, I want to do a test with a complex class and pass it into add. Then it would just process it accordingly and return it.
Do I pass into the arguments something akin to (pseudocode):
when(_mock.add(argThat(hasType(Group)))).thenReturn((Group arg)=> arg);
or something similar? hasType isn't a function, so I'm not 100% sure how to approach this design. Ideally, I'm trying to create the Group in the test and then pass it into the add function accordingly. It just seems that the examples were only showing Strings.
Yes, Mockito allows objects to be passed; you can see examples in its tests.
It is a bit hard to follow, but you can see here that it uses deep equality to check whether arguments are equal when no matchers are specified.
The second part of your question is a bit more complex. If you want to use the values that were passed into your mock as part of your response then you need to use thenAnswer. It provides you with an Invocation of what was just called. From that object you can get and return any arguments that were used in the method call.
So for your add example, if you know what is being passed in and have complete access to it, I would write:
Group a = new Group();
when(_mock.add(a)).thenReturn(a);
If the Group object is being created by something else I would write:
when(_mock.add(argThat(new isInstanceOf<Group>())))
    .thenAnswer((invocation) => invocation.positionalArguments[0]);
Or if you don't really care about checking for the type (depending on what checks your test uses, the type might already be checked for you):
when(_mock.add(any)).thenAnswer(
(invocation)=>invocation.positionalArguments[0]);
Or if you are using Dart 2.0:
when(_mock.add(typed(any))).thenAnswer(
(invocation)=>invocation.positionalArguments[0]);

Mockito Matchers not working

I have these two lines at different points of my code:
Message<T1> reply = (Message<T1>) template.sendAndReceive(channel1, message);
Message<T2> reply = (Message<T2>) template.sendAndReceive(channel2, message);
I am doing some unit testing and the test covers both statements. When I try to mock the behaviour, I define some behaviour like this:
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T1>>any() )).thenReturn(instance1);
Mockito.when(template.sendAndReceive(Mockito.any(MessageChannel.class), Matchers.<GenericMessage<T2>>any() )).thenReturn(null);
When I execute the unit tests and do some debugging, the first statement returns null.
Do you have any idea why the matchers seem not to work? It always takes the last definition of the mock. I am using Mockito 1.1.10.
When I execute the unit tests and do some debugging, the first statement returns null
This happens because you stubbed the same method invocation twice with thenReturn(..), and the last stubbing (the one returning null) won.
The proper way to achieve your goal is to provide a list of consecutive return values to be returned when the method is called:
Mockito.when(template.sendAndReceive(Matchers.any(MessageChannel.class), Matchers.any(GenericMessage.class)))
.thenReturn(instance1, null);
In this case, the returned value for the first invocation will be instance1, and all subsequent invocations will return null. See an example here.
Another option, as Ashley Frieze suggested, would be making template.sendAndReceive return different values based on arguments:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Mockito.when(template.sendAndReceive(Matchers.same(channel2), Matchers.any(GenericMessage.class)))
.thenReturn(null);
Or even shorter, we can omit the second stubbing, because the default return value for unstubbed mock method invocations is null:
Mockito.when(template.sendAndReceive(Matchers.same(channel1), Matchers.any(GenericMessage.class)))
.thenReturn(instance1);
Here we assume that channel1 and channel2 are in scope of the test class and are injected into the object under test (at least it seems so from the code snippet you provided in the question).

Tweaking an output before testing using MRUnit

I am creating an application using MapReduce 2 and testing it using MRUnit 1.1.0. In one of the tests, I am checking the output of a reducer which puts the 'current system time' in its output. That is, at the time the reducer executes, the timestamp is passed to context.write() along with the rest of the output. Even though I use the same method to obtain the system time in the test method as in the reducer, the times calculated in the two are generally a second apart, like 2016-05-31 19:10:02 and 2016-05-31 19:10:01. So the two outputs turn out to be different, for example:
test_fe01,2016-05-31 19:10:01
test_fe01,2016-05-31 19:10:02
This causes an assertion error. I wish to ignore this difference in timestamps, so that the tests pass as long as the rest of the output matches. I am looking for a way to mock the method used to return the system time, so that a hardcoded value is returned and both the reducer and the test use this mocked method during the test. Is this possible? Any help will be appreciated.
Best Regards
EDIT: I have already tried Mockito's spy functionality in my test:
MClass mc = Mockito.spy(new MClass());
Mockito.when(mc.getSysTime()).thenReturn("FakeTimestamp");
However, this gives a runtime error:
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
Also, this error might show up because:
1. you stub either of: final/private/equals()/hashCode() methods.
Those methods *cannot* be stubbed/verified.
2. inside when() you don't call method on mock but on some other object.
3. the parent of the mocked class is not public.
It is a limitation of the mock engine.
The method getSysTime() is public and static and class MClass is public and doesn't have any parent classes.
Assuming I understand your question, you could pass a time into the Reducer using the Configuration object. In your reducer you could check whether this configuration value is set and use it; otherwise, use the system time.
This way you can pass in a known value for testing and assert you get that same value back.
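The same injection idea can be sketched outside Hadoop; here is a minimal Python analogue (the function and names are assumptions, not MRUnit or Hadoop API): the code under test accepts an optional fixed timestamp and only falls back to the system clock when none is supplied.

```python
import datetime

def format_record(key, fixed_time=None):
    """Build the reducer-style output line. When fixed_time is supplied
    (as in a test), use it verbatim; otherwise fall back to the system
    clock, as production code would."""
    if fixed_time is None:
        fixed_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return "{0},{1}".format(key, fixed_time)
```

A test then passes a known value and asserts on a fully deterministic output, so the one-second skew can never cause a failure.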

How to test command line arguments processing

I have a program that receives command line arguments (in my case, it's a Scala program that uses Argot). A simplified use case would be something like:
sbt "run -n 300 -n 50"
And imagine that the app should only accept (and print) numbers between 0 and 100, meaning it should discard 300 and print only 50.
What's the best approach to test it? Is unit testing appropriate? Instead of processing the arguments in main, maybe I should refactor that into a function and test the function?
If you want to test the very act of passing arguments to your program, then you'll need to perform a black-box integration/system test (not a unit test):
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. See this.
Otherwise, do as Chris said and factor out the code to test the underlying logic without testing the act of receiving an argument.
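As a sketch of that refactoring (in Python rather than Scala/Argot, with made-up names): keep the filtering logic in a plain function so the unit test never has to launch the program.

```python
def keep_in_range(values, low=0, high=100):
    """Pure filtering logic, factored out of main so it can be unit
    tested directly: keep only the numbers within [low, high]."""
    return [v for v in values if low <= v <= high]

def main(argv):
    # Thin shell around the logic: pull the value that follows each
    # -n flag, delegate to keep_in_range, and print the survivors.
    numbers = [int(val) for flag, val in zip(argv, argv[1:]) if flag == "-n"]
    for n in keep_in_range(numbers):
        print(n)

# main(["-n", "300", "-n", "50"]) prints only 50; the unit tests target
# keep_in_range and never spawn the program.
```

The shell left in main is deliberately too thin to be worth unit testing; the black-box test mentioned above covers it.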

Best practices in unit-test writing

I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value, or should I change the entryValues variable's value and call the .assert() method (doing it for the whole range of possible values)?
Thank you for your help.
Best regards,
Pedro Magueija
edited: I'm using .NET. Visual Studio 2010 with VB.
If one is having to write many tests which vary only in initial input and final output, one should use a data-driven test. This allows you to define the test once, along with a mapping between inputs and outputs. The unit testing framework will then interpret it as one test per case. How to actually do this depends on which framework you are using.
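For illustration only (the question is about .NET, so treat the names here as assumptions), Python's unittest expresses the same data-driven idea with subTest:

```python
import unittest

def double(x):
    return 2 * x  # stand-in for the method under test

class DoubleTest(unittest.TestCase):
    def test_double(self):
        # One test definition, driven by a table of (input, expected)
        # pairs; subTest reports each failing case separately.
        for value, expected in [(0, 0), (1, 2), (-3, -6)]:
            with self.subTest(value=value):
                self.assertEqual(double(value), expected)
```

NUnit's TestCase/TestCaseSource attributes and MbUnit's RowTest play the same role in .NET.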
It's better to have separate unit tests for each input/output sets covering the full spectrum of possible values for the method you are trying to test (or at least for those input/output sets that you want to unit test).
Smaller tests are easier to read.
The name is part of the documentation of the test.
Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
    // setup1
    assert()
    // setup2
    assert()
    // setup3
    assert()
}
In my experience this gets very big very quickly, and so becomes hard to read and understand, so I would do:
void testDivideByZero() {
    // setup
    assert()
}
void testUnderflow() {
    // setup
    assert()
}
void testOverflow() {
    // setup
    assert()
}
Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
If you have one code path typically you do not test all possible inputs. What you usually want to test are "interesting" inputs that make good exemplars of the data you will get.
For example, if I have a function
define add_one(num) {
    return num + 1;
}
I can't write a test for all possible values, so I may use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set, because they are good representatives of the interesting values I might get.
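A sketch of that representative-value test set in Python (using sys.maxsize to stand in for the fixed-width integer extremes, since Python's own integers are unbounded):

```python
import sys

def add_one(num):
    return num + 1

def test_add_one_representative_values():
    # Interesting exemplars rather than the full input space: the
    # extremes, the sign boundary, and its immediate neighbours.
    assert add_one(-sys.maxsize - 1) == -sys.maxsize
    assert add_one(-1) == 0
    assert add_one(0) == 1
    assert add_one(1) == 2
    assert add_one(sys.maxsize) == sys.maxsize + 1
```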
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser.
define execute(directive) {
    if (directive == 'quit') { exit; }
    elsif (directive == 'help') { print help; }
    elsif (directive == 'connect') { initialize_connection(); }
    else { warn("unknown directive"); }
}
For the purpose of clarity I used elsifs rather than a dispatch table. I think this makes it clear that each unique value that comes in has a different behavior, and therefore you would need to test every possible value.
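A runnable sketch of the same dispatcher in Python (the names and return strings are made up for illustration), with one assertion per code path, including the fallback branch:

```python
def execute(directive):
    """Toy command parser: each input value selects a distinct code path."""
    if directive == "quit":
        return "exiting"
    elif directive == "help":
        return "printing help"
    elif directive == "connect":
        return "initializing connection"
    return "unknown directive: {0}".format(directive)

def test_execute_every_path():
    # Every branch gets its own assertion, so a broken path is
    # pinpointed immediately.
    assert execute("quit") == "exiting"
    assert execute("help") == "printing help"
    assert execute("connect") == "initializing connection"
    assert execute("bogus") == "unknown directive: bogus"
```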
Are you talking about this difference?
- (void)testSomething
{
    [foo callBarWithValue:x];
    assert…
}

- (void)testSomething2
{
    [foo callBarWithValue:y];
    assert…
}
vs.
- (void)testSomething
{
    [foo callBarWithValue:x];
    assert…
    [foo callBarWithValue:y];
    assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately, and I only choose the latter when the test values really belong together and form a coherent unit.
You have two options really; you don't mention which test framework or language you are using, so one may not be applicable.
1) If your test framework supports it, use a RowTest; MbUnit and NUnit support this. If you're using .NET, this allows you to put multiple attributes on your method, and each line is executed as a separate test.
2) If not, write a test per condition and give it a meaningful name, so that if (when) the test fails you can find the problem easily and the name means something to you.
EDIT
It's called TestCase in NUnit: NUnit TestCase explanation