Unit testing a method with many possible outcomes - unit-testing

I've built a simple-ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:
address
address port
address port path
address path
address action
address path action
address port action
address port path action
address action params
address path action params
address port action params
address port path action params
and so forth. The basic approach would be to write one unit test for each of these possible outcomes, each unit test passing the address and some combination of the optional parameters to the method and checking the result against the expected output.
However, I wonder, is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this?
(rant) I only now realize that I learned to write unit tests a few years ago but never really feel like I've advanced in the area: every unit test is a repeat of building parameters and an expected outcome, filling mock objects, calling a method, and testing the result against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kind of tedious, you know. Advice on that matter is always welcome. (/rant)
(note) Christmas weekend approaching, so I probably won't reply to suggestions until next week. (/note)

Since you would normally expect only one unique outcome no matter which parameters are given, I suggest one test method for all the possibilities. My code sample uses the NUnit testing framework, so it is up to you to work out the equivalent test in your own framework.
[TestCase("http://www.url.com")]
[TestCase("http://www.url.com", 21)]
[TestCase("http://www.url.com", 24, #"c:\path")]
public void TestingMethod(string address, params object[] address) {
// Do your tests accordingly here...
}
So the TestCaseAttribute (in NUnit) is the right tool for the job.
You will, of course, need to determine which parameter value sits at which index of the parameter array. I made it an object[] because I assume the different parameters have different data types as well; since the right order cannot be determined up front, you will have to work it out yourself, for instance by inspecting the runtime type of each element.
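As a rough sketch of how that can look for the URL builder in the question - note that UrlBuilder.Build, the 0/"" conventions for omitted parts, and the expected URLs are all assumptions of mine, not taken from the question - each case can carry its expected output so the test body stays a single assertion:
[TestCase("http://www.url.com", 0, "", "", "http://www.url.com/")]
[TestCase("http://www.url.com", 21, "", "", "http://www.url.com:21/")]
[TestCase("http://www.url.com", 21, "files", "list", "http://www.url.com:21/files/list")]
public void Build_ReturnsExpectedUrl(string address, int port, string path, string action, string expected)
{
    // UrlBuilder.Build is a stand-in for the method under test; 0 and "" mean "part not supplied".
    string actual = UrlBuilder.Build(address, port, path, action);
    Assert.AreEqual(expected, actual);
}
Using explicitly typed parameters instead of params object[] also sidesteps the index bookkeeping mentioned above.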

Related

Get id of nonexistent object for tests - django

I'm using DjangoModelFactory to create custom objects needed for tests. The issue is that I would like to write a test for the situation where there is a request for a nonexistent object. Is there any elegant way to automatically generate such an ID? One idea I had was to use a really big number, as the test suite should never reach such a number. Another idea is to create an object, store its ID, and then delete the object. Both of those solutions are somewhat hacky - the first rests on the rather risky assumption that the big number will never be assigned as an ID during the tests, the other adds extra logic which makes the tests more dependent on unrelated behavior. Is there any simple, out-of-the-box solution to get an ID which is not assigned to any object?

TDD and "honesty" of test

I have a concern with the "honesty" of a test when doing TDD. TDD is:
Write a red test
Write just enough code to make it green
Refactor and keep the test green
So far so good. Now here is an example of applying the principle above; I've met this kind of example both in tutorials and in real life:
I want to check that the current user email is displayed on the default page of my webapp.
Write a red test: "example@user.com" is displayed inside default_page.html
Write just enough code to make it green: hardcode "example@user.com" inside default_page.html
Refactor by implementing get_current_user() and some other code in other layers, etc., keeping the test green.
I'm "shocked" by step 2. There is something wrong here : the test is green even if nothing is actually working. There a test smell here, it means that maybe at some point someone could break the production code without breaking the test suite.
What I am missing here ?
Your assertion that "nothing is working" is false. The code functions correctly for the case where the email address is example@user.com. And you do not need that final refactoring. Your next failing test might be for the case where the user has a different email address.
I would say that what you have is only partially complete. You said:
I want to check that the current user email is displayed on the default page of my webapp.
The test doesn't check the current user's email address on the default page; it checks that the fixed email address "example@user.com" is in the page.
To address this you either need to provide more examples (i.e. have multiple tests with different email addresses) or randomly generate the email address in the test setup.
So I would say what you have is something like this in pseudo-code:
Given current user has email address "example@user.com"
When they visit the default page
The page should contain the email address "example@user.com"
This is the first test you can write in TDD, and you can indeed hardcode the result to avoid implementing unnecessary stuff. You can now add another test which will force you to implement the correct behavior:
Given current user has email address "example2@user.com"
When they visit the default page
The page should contain the email address "example2@user.com"
Now you have to remove the hardcoding, as you cannot satisfy both of these tests with a hardcoded solution. This forces you to get the actual email address from the current user and display it.
Often it makes sense to end up with 3 examples in your tests. These don't need to be 3 separate tests; you can use data-driven tests to reuse the same test method with different values. You don't say which test framework you are using, so I can't give a framework-specific example.
This approach is common in TDD and is called triangulation.
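Purely as an illustration - the question doesn't name a framework, and the CurrentUser and DefaultPage.RenderFor names below are invented - the two triangulating examples could share one NUnit test method like this:
[TestCase("example@user.com")]
[TestCase("example2@user.com")]
public void DefaultPage_ShowsCurrentUsersEmail(string email)
{
    // CurrentUser and DefaultPage.RenderFor stand in for your own user setup and page-rendering call.
    var user = new CurrentUser(email);
    string html = DefaultPage.RenderFor(user);
    StringAssert.Contains(email, html);
}
Either way, the point is the same: more than one concrete example is what forces the hardcoding out.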
You are correct about
step 2. There is something wrong here
but it's not in the TDD approach; IMHO it's in the test logic. After all, this (step 2) validates that the test harness is working correctly: that the new test does not mistakenly pass without requiring any new code, and that the required feature does not already exist.
What am I missing here?
This step also tests the test itself, in the negative: it rules out the possibility that the new test always passes and is therefore worthless. The new test should also fail for the expected reason. It's vital that this step increases the developer's confidence that the test is checking the right thing and passes only in the intended cases.

Unit Tests: How to Assert? Asserting results returned or that a method was called on a mock?

I am trying to find out the best way to assert: should I create an object with what I expect to be returned and check that it is equal to the actual result?
Or should I run the method against a mock to ensure that the method was actually called?
I have seen this done both ways, so I wondered if anyone has best practices for this.
Of course, it's quicker and easier to write a unit test that asserts a method was called on the mock, but quicker and easier is not always the best way - although sometimes it can be.
What does everyone assert on: that a method has been called, or the results that were returned?
Of course it's not best practice to do more than one assert in a unit test, so maybe the answer is to assert the results and also that the method was called? Then I would create two unit tests: one to check the results and one to check that the method was called.
But thinking about this now, maybe that is going too far; if I get a result then I suppose I can assume that my mock method was called.
In between testing that a method has been called and testing the value that it returns is another possibly more important test: that it was called with the correct parameters.
A very common case these days would be a method you're writing that uses some HTTP library to retrieve data from a REST API. You don't want to actually make HTTP requests in your tests, so you mock the HTTP client's get() method. On the one hand your mock might just return some canned JSON response like (using RSpec in Ruby as an example):
http_mock.stub(:get).and_return('{result: 5}')
You then test that you can properly parse this and return some correct value based on the response.
You could also test that the HTTP get() method is called, but it's more important to test that it's called with the correct parameters. In the case of an API, your method probably has to format a URL with query parameters and you need to test that it does that correctly. The assertion would look something like (again with RSpec):
http_mock.should_receive(:get).with('http://example.com/some_endpoint?some_param=x')
This test is really a prerequisite to the previous one; simply testing that get() was called wouldn't confirm much. The parameter assertion, for instance, will tell you if you are formatting the URL incorrectly, which a bare "was it called" check would not.
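For comparison, the same distinction in .NET might look like the following Moq sketch (IHttpClient, ApiClient and FetchResult are made-up names; only the Moq calls are real API), placed inside a test method:
var http = new Mock<IHttpClient>();                          // hypothetical abstraction with a Get(string url) method
http.Setup(h => h.Get(It.IsAny<string>())).Returns("{\"result\": 5}");
var client = new ApiClient(http.Object);                     // hypothetical class under test
client.FetchResult("x");
// Verifying the exact URL catches formatting mistakes that a bare "Get was called" check would miss.
http.Verify(h => h.Get("http://example.com/some_endpoint?some_param=x"), Times.Once());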

Web Service Unit Testing

This is an interesting question I am sure a lot of people will benefit from knowing.
A typical web service will return a serialized complex data type, for example:
<orgUnits>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
</orgUnits>
The VS2008 unit testing seems to want to assert an exact match of the return object, that is, whether the objects (target and actual) are identical in terms of both structure and content.
What I would like to do instead is assert only that the structure is fine and that no errors exist.
To perhaps simplify the matter, in the web service method, if any error occurs I throw a SOAPException.
1. Is there a way to test based just on the return status?
2. The best-case scenario would be to compare the document trees of target and actual for structural integrity, and assert based on the structure being sound rather than on the content.
Thanks in advance :)
I think that this is a duplicate of WSDL Testing
In that answer I suggested SoapUI as a good tool to use
An answer specific to your requirement would be to compare the serialized versions (as XML) of the objects instead of the objects themselves.
The approach in your test case:
Let's say you are expecting a return like expected.xml from your web service.
Invoke the service and get the actual object. Serialize it to actual.xml.
Compare actual.xml and expected.xml using a library like xmldiff (which compares structural/value changes at the XML level).
Based on the output of xmldiff, determine whether the web service passed the test.
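If you decide you only care about the element structure and not the values, a small hand-rolled comparison with LINQ to XML is another option. A minimal sketch (assuming using System.Linq and System.Xml.Linq; it ignores attributes entirely):
static bool SameStructure(XElement expected, XElement actual)
{
    // Same element name and same shape of child elements; text content is deliberately ignored.
    if (expected.Name != actual.Name)
        return false;
    var expectedChildren = expected.Elements().ToList();
    var actualChildren = actual.Elements().ToList();
    if (expectedChildren.Count != actualChildren.Count)
        return false;
    return expectedChildren.Zip(actualChildren, SameStructure).All(match => match);
}
// In the test, after serializing the service result to actual.xml:
var expectedDoc = XDocument.Load("expected.xml");
var actualDoc = XDocument.Load("actual.xml");
Assert.IsTrue(SameStructure(expectedDoc.Root, actualDoc.Root));
A dedicated diff library like xmldiff still gives you finer control (ignoring child order, producing a diffgram, and so on); the hand-rolled check is only worthwhile when "same tree shape" is really all you need.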

How can I best write unit test cases for a Parser?

I am writing a parser which generates the 32-bit opcode for each command. For example, for the following statement:
set lcl_var = 2
my parser generates the following opcodes:
// load immdshort 2 (loads the value 2)
0x10000010
// strlocal lclvar (lcl_var is converted to an index to identify the var)
0x01000002
Please note that lcl_var can be anything, i.e., any variable can be given. How can I write the unit test cases for this? Can we avoid hard-coding the values? Is there a way to make it generic?
It depends on how you structured your parser. A Unit-Test tests a single UNIT.
So, if you want to test your entire parser as a single unit, you can give it a list of commands and verify it produces the correct opcodes (which you checked manually when you wrote the test). You can write tests for each command, and test the normal usage, edge-case usage, just-beyond-edge-case usage. For example, test that:
set lcl_var = 2
results in:
0x10000010
0x01000002
And the same for 0, -1, MAX_INT-1, MAX_INT+1, ...
You know the correct result for these values. Same goes for different variables.
If your question is "How do I run the same test with different inputs and expected values without writing one xUnit test per input-output combination?"
Then the answer to that would be to use something like the RowTest NUnit extension. I wrote a quick bootup post on my blog recently.
An example of this would be
[TestFixture]
public class TestExpression
{
[RowTest]
[Row(" 2 + 3 ", "2 3 +")]
[Row(" 2 + (30 + 50 ) ", "2 30 50 + +")]
[Row(" ( (10+20) + 30 ) * 20-8/4 ", "10 20 + 30 + 20 * 8 4 / -")]
[Row("0-12000-(16*4)-20", "0 12000 - 16 4 * - 20 -")]
public void TestConvertInfixToPostfix(string sInfixExpr, string sExpectedPostfixExpr)
{
Expression converter = new Expression();
List<object> postfixExpr = converter.ConvertInfixToPostfix(sInfixExpr);
StringBuilder sb = new StringBuilder();
foreach(object term in postfixExpr)
{
sb.AppendFormat("{0} ", term.ToString());
}
Assert.AreEqual(sExpectedPostfixExpr, sb.ToString().Trim());
}
}
Applied to your parser, the assertions inside such a test might look like:
int[] opcodes = Parser.GetOpcodes("set lcl_var = 2");
Assert.AreEqual(2, opcodes.Length);
Assert.AreEqual(0x10000010, opcodes[0]);
Assert.AreEqual(0x01000002, opcodes[1]);
You don't specify what language you're writing the parser in, so I'm going to assume for the sake of argument that you're using an object-oriented language.
If this is the case, then dependency injection could help you out here. If the destination of the emitted opcodes is an instance of a class (like File, for instance), try giving your emitter class a constructor that takes an object of that type to use as the destination for emitted code. Then, from a unit test, you can pass in a mock object that's an instance of a subclass of your destination class, capture the emitted opcodes for specific statements, and assert that they are correct.
If your destination class isn't easily extensible, you may want to create an interface based on it that both the destination class and your mock class can implement.
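A minimal sketch of that idea - every name here (IOpcodeSink, CapturingSink, the Parser constructor and Parse method) is invented for illustration, and your emitter's real seam may look different:
public interface IOpcodeSink
{
    void Emit(int opcode);
}
// Test double that captures emitted opcodes in memory instead of writing them to a file.
class CapturingSink : IOpcodeSink
{
    public List<int> Opcodes { get; } = new List<int>();
    public void Emit(int opcode) { Opcodes.Add(opcode); }
}
[Test]
public void SetLocalVariable_EmitsLoadAndStore()
{
    var sink = new CapturingSink();
    var parser = new Parser(sink);               // destination injected through the constructor
    parser.Parse("set lcl_var = 2");
    CollectionAssert.AreEqual(new[] { 0x10000010, 0x01000002 }, sink.Opcodes);
}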
As I understand it, you would first write a test for your specific example, i.e. where the input to your parser is:
set lcl_var = 2
and the output is:
0x10000010 // load immdshort 2
0x01000002 // strlocal lclvar
When you have implemented the production code to pass that test, and refactored it, then if you are not satisfied that it could handle any local variable, write another test with a different local variable and see whether it passes, e.g. a new test with input:
set lcl_var2 = 2
And write your new test to expect the different output that you want. Keep doing this until you are satisfied that your production code is robust enough.
It's not clear if you are looking for a methodology or a specific technology to use for your testing.
As far as methodology goes, maybe you don't want to do extensive unit testing. Perhaps a better approach would be to write some programs in your domain-specific language and then execute the opcodes to produce a result; the test programs would then check this result. This way you can exercise a bunch of code but check only one result at the end, instead of checking the generated opcodes each time. Start with simple programs to flush out obvious bugs and then move to harder ones.
Another approach is to automatically generate programs in your domain-specific language along with the expected opcodes. This can be as simple as writing a Perl script that produces a set of programs like:
set lcl_var = 2
set lcl_var = 3
Once you have a suite of test programs in your language with known correct output, you can go backwards and generate unit tests that check each opcode. Since you already have the opcodes, it becomes a matter of inspecting the output of the parser for correctness, much like reviewing its code.
While I've not used cppunit, I've used an in-house tool that was very much like it, and it was easy to implement unit tests that way.
What do you want to test? Do you want to know whether the correct "store" instruction is created? Whether the right variable is picked up? Make up your mind what you want to know and the test will be obvious. As long as you don't know what you want to achieve, you will not know how to test the unknown.
In the meantime, just write a simple test. Tomorrow or some later day, you will come to this place again because something broke. At that time, you will know more about what you want to do, and it might be simpler to design a test.
Today, don't try to be the person you will be tomorrow.