Output additional information when tests fail

One of my tests, containing Assert.Equal(2, list.Count);, fails on AppVeyor, a continuous integration server, but I cannot reproduce the failure on my local machine.
I hope to get more information from the error message, but I don't know how to do that.
The authors of xUnit.net insist they should not allow users to specify custom error messages; see https://github.com/xunit/xunit/issues/350 . That's why there is no API allowing me to write e.g. Assert.Equal(2, list.Count, "The content of the list is " + ...);
I also looked at Fluent Assertions. If I write list.Should().HaveCount(3, "the content of the list is " + ...);, the output reads as
Expected collection to contain 3 item(s) because the content of the list is ..., but found 2.
The "because" clause doesn't make grammatical sense in English. The because parameter seems to be meant for describing expected behavior, not actual behavior.
Considering that xUnit.net and Fluent Assertions both discourage providing additional information about a failure, is outputting additional information when tests fail a good way to debug remote errors?
What's the best way to output additional information?

If you want to see the actual contents of the list for debugging purposes and you're using Fluent Assertions, you can do this:
using (new AssertionScope(Formatter.ToString(list)))
{
    list.Should().HaveCount(3);
}
The assertion scope will replace the collection part of the message with something else. It's not pretty, but it will work.
Alternatively, you could exploit the because parameter like this:
list.Should().HaveCount(3, "because I expected list {0} to contain that many items", list);
FA will format every placeholder in the because phrase using that same Formatter.ToString.
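For context, a complete test using this trick might look like the following sketch (xUnit.net is assumed, and the list contents are made up):

using System.Collections.Generic;
using FluentAssertions;
using Xunit;

public class ListCountTests
{
    [Fact]
    public void List_should_contain_three_items()
    {
        // Hypothetical data; on CI this would come from the code under test.
        var list = new List<string> { "a", "b" };

        // On failure, {0} is replaced by the formatted list contents, so the
        // CI log shows what the collection actually held.
        list.Should().HaveCount(3, "because I expected list {0} to contain that many items", list);
    }
}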


SpecFlow wrongly maps identifiers with a number

I am implementing testing with SpecFlow and I have this annoying situation: the system name is System33, and whenever I make a reference to the system name, SpecFlow tries to bind "33" as a parameter. For example:
Given I am a valid System33 user logged in the system
Is bound to this step:
[Given(#"I am a valid System(.*) user logged in the system")]
public void GivenIAmAValidSystemUserLoggedInTheSystem(int p0)
This is quite annoying because I received the specs from another department and they constantly mention "System33".
I have tried to find a way to override this standard behavior but the documentation is frankly poor.
http://specflow.org/documentation/Using-Gherkin-Language-in-SpecFlow/
Does anybody know a way to tell SpecFlow that a number at the end of a word is NOT a parameter value?
It's pretty simple to explain: your step
[Given(@"I am a valid System(.*) user logged in the system")]
will result in a regex whose () marks a capture group. This group is what is passed into the args of your step binding.
However a Regex of
"I am a valid (System.*) user logged in the system"
will instead return a group with System33. You can see this by using a Regex checker, like http://derekslager.com/blog/posts/2007/09/a-better-dotnet-regular-expression-tester.ashx
Where a Source of Given I am a valid System33 user logged in the system and a Pattern I am a valid (System.*) user logged in the system
gives results of
Result
Found 1 match:
I am a valid System33 user logged in the system has 1 group:
System33
String literals for use in programs:
C#
#"I am a valid (System.*) user logged in the system"
The code is generated by the skeleton code generator, which is part of the SpecFlow VS integration. It tries to guess what could be parameters in a step, and since numbers are parameters in steps most of the time, it always treats numbers as parameters.
You can't configure the behavior of the generator, but you can simply change the string. It is a regex and can be changed to whatever you want. The generator only produces a suggestion and a starting point for you.
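For example, a manually adjusted binding might look like this sketch (the string parameter name is just an illustration):

[Given(@"I am a valid (System.*) user logged in the system")]
public void GivenIAmAValidSystemUserLoggedInTheSystem(string systemName)
{
    // systemName now receives "System33" as a whole, instead of
    // SpecFlow binding only the trailing "33" as an int.
}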

TDD and "honesty" of test

I have a concern with the "honesty" of tests when doing TDD. TDD is:
Write red test
Write just enough code to make it green
Refactor and let the test green
So far so good. Now here is an example of applying the principle above; I have already come across this kind of example in tutorials and in real life:
I want to check that the current user email is displayed on the default page of my webapp.
Write a red test: "example@user.com" is displayed inside default_page.html
Write just enough code to make it green: hardcode "example@user.com" inside default_page.html
Refactor by implementing get_current_user() and some other code in other layers, keeping the test green.
I'm "shocked" by step 2. There is something wrong here: the test is green even though nothing is actually working. There is a test smell here; it means that at some point someone could break the production code without breaking the test suite.
What am I missing here?
Your assertion that "nothing is working" is false. The code functions correctly for the case where the email address is example@user.com. And you do not need that final refactoring. Your next failing test might be to make it fail for the case where the user has a different email address.
I would say that what you have is only partially complete. You said:
I want to check that the current user email is displayed on the default page of my webapp.
The test doesn't check the current user's email address on the default page; it checks that the fixed email address "example@user.com" is in the page.
To address this you either need to provide more examples (i.e. have multiple tests with different email addresses) or randomly generate the email address in the test setup.
So I would say what you have is something like this pseudocode:
Given current user has email address "example@user.com"
When they visit the default page
The page should contain the email address "example@user.com"
This is the first test you can write in TDD, and you can indeed hardcode the result to avoid implementing unnecessary stuff. You can now add another test which will force you to implement the correct behavior:
Given current user has email address "example2@user.com"
When they visit the default page
The page should contain the email address "example2@user.com"
Now you have to remove the hardcoding, as you cannot satisfy both of these tests with a hardcoded solution. This will force you to get the actual email address from the current user and display it.
Often it makes sense to end up with three examples in your tests. These don't need to be three separate tests; you can use data-driven tests to reuse the same test method with different values. You don't say which test framework you are using, so I can't give a specific example.
This approach is common in TDD and is called triangulation.
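As an illustration only, since the question doesn't name a framework, here is a sketch assuming xUnit.net; the page-rendering helper is a hypothetical stand-in for the real application code:

using Xunit;

public class DefaultPageTests
{
    [Theory]
    [InlineData("example@user.com")]
    [InlineData("example2@user.com")]
    [InlineData("third@user.com")]
    public void Default_page_shows_current_user_email(string email)
    {
        string page = RenderDefaultPageFor(email);

        // A hardcoded page cannot satisfy all three cases at once,
        // which is exactly what triangulation relies on.
        Assert.Contains(email, page);
    }

    // Hypothetical stand-in: in a real suite this would log the user in
    // and render the actual default page.
    private static string RenderDefaultPageFor(string email) =>
        "<html><body>Signed in as " + email + "</body></html>";
}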
You are correct about
step 2. There is something wrong here
but it's not in the TDD approach. IMHO it's in the test logic. After all, this (step 2) validates that the test harness is working correctly: that the new test does not mistakenly pass without requiring any new code, and that the required feature does not already exist.
What am I missing here?
This step also tests the test itself, in the negative: it rules out the possibility that the new test always passes and is therefore worthless. The new test should also fail for the expected reason. It's vital that this step increases the developer's confidence that it is testing the right thing and passes only in the intended cases.

National Weather Service (NWS) Valid Time Event Code (VTEC) Parser Regular Expression (Regex)

The National Weather Service (NWS) embeds machine readable components in its text bulletins and syndicated format feeds, called Valid Time Event Code (VTEC).
More information on VTEC http://www.nws.noaa.gov/os/vtec/
Example of Text Bulletins: http://www.nws.noaa.gov/view/national.php?prodtype=allwarnings
I am developing a parser to interpret a sequence of VTECs embedded within an NWS bulletin, and I have a regular expression to capture the logic, which I am happy to share (see below), but I am not 100% sure I am doing this right.
Specifically,
1. Is there any specification of how many VTECs may be embedded in any one NWS message (or its update)? I usually see just one, but if there are multiple, what is the hierarchy, if any? Does the last one cancel the previous, or do all the VTECs carry the same weight?
2. If a Hydrological VTEC (H-VTEC) is issued, does it always immediately follow a P-VTEC?
3. Is there a "parent-child" relationship, in the XML document sense, between an H-VTEC element and a P-VTEC element?
4. Can the VTEC be used as a unique identifier for a message or its update? If not, what would be the "primary key" in the database sense? Could perhaps a hash of the VTEC along with the bulletin update date be used? Or is some other combination of fields recommended?
The following regular expression is able to pick up the VTEC, assuming any number of P-VTECs may be released and that, if there is an H-VTEC, it is always preceded by a "parent" P-VTEC.
[/][OTEX][.](NEW|CON|EXT|EXA|EXB|UPG|CAN|EXP|COR|ROU)[.][\w]{4}[.][A-Z][A-Z][.][WAYSFON][.][0-9]{4}[.][0-9]{6}[T][0-9]{4}[Z][-][0-9]{6}[T][0-9]{4}[Z][/]([^/]*[/][\w]{5}[.][[N0-3U]][.][A-Z][A-Z][.][0-9]{6}[T][0-9]{4}[Z][.][0-9]{6}[T][0-9]{4}[Z][.][0-9]{6}[T][0-9]{4}[Z][.](NO|NR|UU|OO)[/])?
The VTEC is described in more detail at: http://www.nws.noaa.gov/directives/sym/pd01017003curr.pdf
In case the link expires, this may also be found by drilling down as follows:
NWS directives. http://www.nws.noaa.gov/directives/
(Click on) Operations and Services.
(Scroll Down) Dissemination.
After reading the document, the answers to #2 and #3 are a resounding YES: an H-VTEC is always supplemental to an immediately preceding P-VTEC. Regarding #1, multiple P-VTECs are possible, and the logic is probably more complex than a regex can weed out. Regarding #4, the answer is almost certainly NO, mainly because the VTEC could be missing from an NWS bulletin, so it does not qualify as a primary key.
So the regex needed to parse out a VTEC string, thanks to Suamere, is most likely:
/[OTEX]\.(NEW|CON|EXT|EXA|EXB|UPG|CAN|EXP|COR|ROU)\.\w{4}\.[A-Z]{2}\.[WAYSFON]\.\d{4}\.\d{6}T\d{4}Z-\d{6}T\d{4}Z/([^/]*/\w{5}\.[N0-3U]\.[A-Z]{2}\.\d{6}T\d{4}Z\.\d{6}T\d{4}Z\.\d{6}T\d{4}Z\.(NO|NR|UU|OO)/)?
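As a sanity check, a minimal C# sketch applying this pattern might look like the following (the sample bulletin line is made up, but follows the documented P-VTEC format):

using System;
using System.Text.RegularExpressions;

class VtecParserSketch
{
    // The cleaned-up pattern from above, as a C# verbatim string.
    const string VtecPattern =
        @"/[OTEX]\.(NEW|CON|EXT|EXA|EXB|UPG|CAN|EXP|COR|ROU)\.\w{4}\.[A-Z]{2}\.[WAYSFON]\.\d{4}\.\d{6}T\d{4}Z-\d{6}T\d{4}Z/([^/]*/\w{5}\.[N0-3U]\.[A-Z]{2}\.\d{6}T\d{4}Z\.\d{6}T\d{4}Z\.\d{6}T\d{4}Z\.(NO|NR|UU|OO)/)?";

    static void Main()
    {
        // Hypothetical bulletin fragment containing a single P-VTEC.
        string bulletin = "/O.NEW.KJAN.TO.W.0050.120417T0228Z-120417T0315Z/";

        foreach (Match m in Regex.Matches(bulletin, VtecPattern))
        {
            Console.WriteLine("VTEC found: " + m.Value);
        }
    }
}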

What is the correct name for something in a unit test that must "provoke" functions into errors?

By 'something' I mean input data that can potentially lead a function to unexpected behavior, and other things like that whose purpose is to test 'negative' conditions for a function.
P.S. By the way, what name is used for 'positive' things?
I'm not sure whether there's a specific, "official" name for data like this, other than something rather generic like "test case." You could potentially get more specific with something like "positive test case" and "negative test case."
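For example, assuming xUnit.net and a hypothetical ParseAge function, a positive and a negative test case could look like this:

using System;
using Xunit;

public class ParseAgeTests
{
    // Positive test case: well-formed input exercising the happy path.
    [Fact]
    public void Parses_valid_age()
    {
        Assert.Equal(42, ParseAge("42"));
    }

    // Negative test case: input chosen to provoke the failure path.
    [Fact]
    public void Rejects_non_numeric_age()
    {
        Assert.Throws<FormatException>(() => ParseAge("forty-two"));
    }

    // Hypothetical function under test.
    private static int ParseAge(string text) => int.Parse(text);
}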
However, I once worked on a team that handled a lot of email messages, and boy, is email data messy! Our system would periodically receive an email message that would somehow bring the entire system down, so we started saving these messages in our test database with the label "messages of death." We would run all known messages of death through our code during testing to make sure the system would stay up in the face of these malformed inputs.

Web Service Unit Testing

This is an interesting question that I am sure a lot of people will benefit from.
A typical web service will return a serialized complex data type, for example:
<orgUnits>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
</orgUnits>
The VS2008 unit testing framework seems to want to assert an exact match of the returned object, that is, that the objects (target and actual) are identical in terms of structure and content.
What I would like to do instead is assert only that the structure is fine and no errors exist.
To perhaps simplify the matter: in the web service method, if any error occurs I throw a SOAPException.
1. Is there a way to test based just on the return status?
2. The best-case scenario would be to compare the document trees of target and actual for structural integrity, and assert based on the structure being sound, not the content.
Thanks in advance :)
I think that this is a duplicate of WSDL Testing.
In that answer I suggested SoapUI as a good tool to use.
An answer specific to your requirement would be to compare the serialized versions (to XML) of the objects instead of the objects themselves.
Approach in your test case:
Let's say you are expecting a return like expected.xml from your web service.
1. Invoke the service and get the actual object. Serialize it to actual.xml.
2. Compare actual.xml and expected.xml using a library like xmldiff, which compares structural/value changes at the XML level.
3. Based on the output of xmldiff, determine whether the web service passed the test.
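If the goal is to assert on structure while ignoring content, a small hand-rolled comparison over the XML trees is another option. Below is a minimal C# sketch under that assumption (the class and method names are illustrative, and attributes are ignored for brevity):

using System;
using System.Linq;
using System.Xml.Linq;

static class XmlShape
{
    // Recursively compares element names and child structure, deliberately
    // ignoring text content, so <name>foo</name> matches <name>bar</name>.
    public static bool SameShape(XElement expected, XElement actual)
    {
        if (expected.Name != actual.Name)
            return false;

        var expectedChildren = expected.Elements().ToList();
        var actualChildren = actual.Elements().ToList();
        if (expectedChildren.Count != actualChildren.Count)
            return false;

        for (int i = 0; i < expectedChildren.Count; i++)
        {
            if (!SameShape(expectedChildren[i], actualChildren[i]))
                return false;
        }
        return true;
    }

    static void Main()
    {
        var expected = XElement.Parse(
            "<orgUnits><orgUnit><name>friendly name</name></orgUnit></orgUnits>");
        var actual = XElement.Parse(
            "<orgUnits><orgUnit><name>another name</name></orgUnit></orgUnits>");

        // Prints True: same element structure, different text content.
        Console.WriteLine(SameShape(expected, actual));
    }
}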