Unit test lack of internet connectivity - unit-testing

Background
I would like to use FetchMock and Chai/Mocha to write a unit test for a feature I've written.
I have a wrapper around fetch that causes it to return a response with a (specific) failed code if there's a network failure, instead of rejecting.
The code itself works. I can test it by hand, bringing down the wifi on my machine while the code is running.
I have reason to expect this code will be refactored, someday, by someone. So I would like some unit tests around it.
The Question
How do I use Chai/Mocha, and any other tools (like fetchMock which I'm currently using) to create a test around that scenario?
I can't figure out how to fake a network failure from within a unit test.

Related

Restart appdomain for each test

I realize it may sound like an odd request, and it certainly will not do wonders for test performance, but it's critical that I get a new AppDomain for the start of each unit test.
Currently I'm using xUnit and Resharper as the test runner. But I'm willing to change if there's a different framework that would yield the behaviour that I need.
The xUnit ReSharper runner doesn't have this kind of functionality, and I don't know of any test framework that does this out of the box. If you need each test to run in a new AppDomain, I'd write it so that each test creates a new AppDomain and runs some custom code in there.
You could probably use some of xUnit's features to make this a little easier - the BeforeAfterTestAttribute allows you to run code before and after each test, or you could pass in a fixture that provides functionality to set up and tear down the AppDomain.
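As a rough illustration of the fixture idea (a sketch only, not a drop-in solution: it assumes the full .NET Framework, since AppDomain.CreateDomain is not available on .NET Core, and the type and method names are made up), xUnit constructs a new instance of the test class for every test, so the constructor/Dispose pair gives you a per-test AppDomain:
using System;
using Xunit;

public class IsolatedTests : IDisposable
{
    private readonly AppDomain _domain;

    public IsolatedTests()
    {
        // xUnit creates a fresh instance of this class for every test,
        // so a new AppDomain is created before each test...
        _domain = AppDomain.CreateDomain("test-" + Guid.NewGuid());
    }

    public void Dispose()
    {
        // ...and unloaded after it.
        AppDomain.Unload(_domain);
    }

    [Fact]
    public void RunsInAFreshAppDomain()
    {
        // The callback is a static method because it is marshalled into the other domain.
        _domain.DoCallBack(CodeUnderTest);
    }

    private static void CodeUnderTest()
    {
        // ... exercise the code that needs a clean AppDomain here ...
    }
}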

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. That means it should not matter whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental part of the code to work; this might be your own code or a part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and guarantee that it provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
You can read more here.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
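As a small illustration (NUnit flavour shown here; the MSTest equivalents are [TestInitialize] and [TestCleanup], while xUnit.net uses the constructor and Dispose instead; the test class itself is invented purely for illustration):
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ShoppingCartTests
{
    private List<string> _cart;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test: start each test from a known, empty state.
        _cart = new List<string>();
    }

    [TearDown]
    public void TearDown()
    {
        // Runs after every test: clean up whatever the test created.
        _cart.Clear();
    }

    [Test]
    public void Add_PutsItemInCart()
    {
        _cart.Add("apple");
        Assert.AreEqual(1, _cart.Count);
    }
}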
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a rough sketch in C# follows the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config like the one you expect from the read method and test whether the verify method accepts it
at this point, you should create multiple mock configs that cover all the scenarios you can think of, check that the code handles each of them, and fix it accordingly; this also improves your code coverage
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly
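A rough C#/NUnit sketch of that plan (every type and member name here is invented for illustration; the shape of the tests is the point, not the API):
using NUnit.Framework;

// Hypothetical types for the walkthrough above; all names are made up.
public interface IConfigSource
{
    string Read();
}

public class ConfigService
{
    private readonly IConfigSource _source;

    public ConfigService(IConfigSource source) { _source = source; }

    // getter/setter for the user's settings
    public string UserTheme { get; set; }

    // read the config and verify it
    public bool TryLoad(out string raw)
    {
        raw = _source.Read();
        return !string.IsNullOrWhiteSpace(raw) && raw.Contains("=");
    }
}

[TestFixture]
public class ConfigServiceTests
{
    // The "mock config" source: a hand-rolled fake standing in for the real file or HTTP call.
    private class FakeConfigSource : IConfigSource
    {
        public string Raw = "theme=dark";
        public string Read() { return Raw; }
    }

    [Test]
    public void TryLoad_AcceptsAWellFormedConfig()
    {
        var service = new ConfigService(new FakeConfigSource { Raw = "theme=dark" });
        string raw;
        Assert.IsTrue(service.TryLoad(out raw));
    }

    [Test]
    public void TryLoad_RejectsAnEmptyConfig()
    {
        var service = new ConfigService(new FakeConfigSource { Raw = "" });
        string raw;
        Assert.IsFalse(service.TryLoad(out raw));
    }

    [Test]
    public void Setter_UpdatesTheUsersTheme()
    {
        var service = new ConfigService(new FakeConfigSource()) { UserTheme = "dark" };
        Assert.AreEqual("dark", service.UserTheme);
    }
}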
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example end-to-end (E2E) tests, aren't necessarily needed; I use them only to make sure the whole application flow works and to catch errors easily (e.g. an HTTP connection error).

unit test a servlet with an embedded Jetty

How can we unit test a servlet with an embedded Jetty server?
For example, how to test the servlet method below?
protected void doGet(HttpServletRequest request,
                     HttpServletResponse response) throws ServletException, IOException {
    // any logic inside
}
I vastly prefer testing servlets with an embedded instance of jetty using something like junit to bootstrap it.
http://git.eclipse.org/c/jetty/org.eclipse.jetty.project.git/tree/examples/embedded/src/main/java/org/eclipse/jetty/embedded/MinimalServlets.java
that is the minimal example of how to do it.
This is also how we test the vast majority of jetty itself, starting it up and running it through its paces.
For a specific servlet or handler we often use the jetty-client or a SimpleRequest in our jetty-test-helper artifact. A URLConnection works as well.
http://git.eclipse.org/c/jetty/org.eclipse.jetty.toolchain.git/tree/jetty-test-helper/src/main/java/org/eclipse/jetty/toolchain/test/SimpleRequest.java
Here is a test in the jetty-client; it is for Jetty 9, so if you want 7 or 8 then look under the corresponding tag - it was refactored quite a bit in Jetty 9.
http://git.eclipse.org/c/jetty/org.eclipse.jetty.project.git/tree/jetty-client/src/test/java/org/eclipse/jetty/client/HttpClientTest.java
Note: I recommend you pass 0 as the port for Jetty to start up with; that will give you a random open port, which you can then pull out of Jetty for testing purposes. This avoids port conflicts when multiple builds are running on CI or when builds run in parallel.
You don't need Jetty to test the servlet; you need a unit testing framework such as JUnit, plus a mocking library such as Mockito or JMock.
Generally speaking, you don't want to use a servlet container when you do unit testing because you want to focus your test on the actual method being tested, having jetty in the way means that you're also testing jetty behavior. After you've done all your unit tests you can move on to integration tests and system tests, and that part can involve external systems such as jetty (using automation frameworks such as Selenium.)
I use Mockito and PowerMock to do my unit testing, you can check out this code for a working example of a real online service (which you can find here).
I wrote a tutorial about this service and what it contains, this can be found here.
[Added after getting downvotes from time to time on this answer]: And at the risk of getting even more downvotes, all you downvoters need to read the definition of UNIT TESTING before you click the -1 button. You just don't know what you're talking about.

Inconsistent unit tests - failing in test suite, passing separated

I have a unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both of the tests pass when they are run as separate tests.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (the user module).
How do I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug this. How can I dump the objects between the tests in the suite? setUpBeforeClass/AfterClass are static, so there isn't much data about the object instances there.
I know this is a kind of guess-what question. It's hard to provide reliable data here because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in: Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've found the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cache only kicked in when the tests ran in the same suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I've been stuck with this for a day and I hoped someone else had experienced a similar kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');

Should you display what's happening in the unit test as it runs?

As I am coding my unit tests, I tend to find that I insert the following lines:
Console.WriteLine("Starting InteropApplication, with runInBackground set to true...");
try
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
    Console.WriteLine("Application started correctly");
}
catch (Exception e)
{
    Assert.Fail(string.Format("InteropApplication failed to start: {0}", e.ToString()));
}
//test code continues ...
All of my tests are pretty much the same thing. They display information about why they failed, or they display information about what they are doing. I haven't come across any formal guidance on how unit tests should be coded. Should they display information about what they are doing? Or should the tests be silent, not display anything about what they are doing, and only display failure messages?
NOTE: The language is C#, but I don't care about a language specific answer.
I'm not sure why you would do that - if your unit test is named well, you already know what it's doing. If it fails, you know what test failed (and what assert failed). If it didn't fail you know that it succeeded.
This seems completely subjective, but to me this seems like completely redundant information that just adds noise.
I personally would recommend that you output only errors and a summary of the number of tests run and how many passed. This is a completely subjective view though. Display what suits your needs.
I recommend against it - I think that the unit testing should work on the Unix tools philosophy - don't say anything when things are going well.
I find that constructing tests to give meaningful information when they fail is best - that way you get nice short output when things work and it's easy to see what went wrong when there are problems - errors aren't lost to scroll blindness.
I would actually suggest against it (though not militantly). It couples the user interface of your tests with the test implementation (what if the tests are run through a GUI viewer?). As an alternative, I would suggest one of the following:
I'm not familiar with NUnit, but PyUnit allows you to add a description of the test and when tests are run with the verbose option the description is printed. I would look into the NUnit documentation to see if this is something you can do.
Extend the TestCase class that you're inheriting from to add a function you can call to log what the test is trying to do. That way, different implementations can handle the messages in different ways (a rough sketch follows).
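A rough C# sketch of that second suggestion (the base class and all names are invented; nothing here is tied to a particular framework's API):
using System;

public abstract class LoggingTestBase
{
    // Swap this delegate per environment: console runner, GUI runner, CI log, ...
    // The default sink simply discards messages.
    public static Action<string> MessageSink { get; set; } = delegate { };

    protected void Log(string message)
    {
        MessageSink(GetType().Name + ": " + message);
    }
}

public class InteropApplicationTests : LoggingTestBase
{
    public void StartsInBackgroundMode()
    {
        Log("Starting InteropApplication with runInBackground set to true...");
        // ... arrange / act / assert as usual ...
        Log("Application started correctly");
    }
}
A console runner could then set LoggingTestBase.MessageSink = Console.WriteLine; while a GUI runner could route the same messages to its own log window.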
I'd say you should output whatever suits your needs, but showing too much can dilute the output from the test runner.
BTW, your example code hardly looks like a unit test; it's more of an integration/system test.
I like to buffer the verbose log (about last 20 lines or so), but I don't display it until it gets to some error. When the error happens, it's nice to have some context.
OTOH, unit tests should be small pieces of unrelated code with specific input and output requirements. In most cases, displaying input that caused the error (i.e. wrong output) is enough to trace the problem to its roots.
This might be a bit too language specific, but when I'm writing NUnit tests I tend to do this, only I use the System.Diagnostics.Trace library instead of the console; that way the information is only shown if I decide to watch the tracing.
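For illustration, a minimal sketch of what that looks like (the test body is just a placeholder):
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class InteropApplicationTraceTests
{
    [Test]
    public void StartsWithRunInBackgroundSetToTrue()
    {
        // Goes to the attached trace listeners (the debugger output by default),
        // not to the console, so it only shows up when you choose to watch it.
        Trace.WriteLine("Starting InteropApplication with runInBackground set to true...");

        // A file listener can be attached somewhere central if needed, e.g.:
        // Trace.Listeners.Add(new TextWriterTraceListener("test-trace.log"));

        Assert.IsTrue(true); // placeholder assertion for the sketch
    }
}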
You don't need to; if the tests run silently, that means there was no error. There is usually no reason for tests to give any output other than when a test fails. If a test passes, the test runner indicates that it passed, i.e. it is "green". If you run the test (together with many tests that write console output) through a test runner in an IDE, you'll be spamming the console log with messages nobody cares about.
The test you've written is not a unit test, but looks more like an integration/system test because you seem to be running an application as a whole. A unit test will test a public method in a class, preferably keeping the class as isolated as possible.
Using console I/O kind of defeats the whole purpose of a unit testing framework; you might as well code the whole test manually. If you are using a unit testing framework, your tests should be very malleable and tied to as few things as possible.
Displaying information can be useful; if you're trying to find out why a test failed, it can be useful to be able to see more than just a stack trace, and what happened before the program reached the point where it failed.
However, in the "normal" case where everything succeeds, these messages are unnecessary clutter that distract from what you're really trying to do - ie. looking at an overview of which tests succeeded and failed.
I'd suggest redirecting your debugging messages to a log file. You can either do this by making all your logging code call a special "log print" function, or, if you're writing a console program, you should be able to redirect stdout to a different file (I know for a fact that you can do this on both Unix and Windows). This way you get the high-level overview, but the details are there if you need them.
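A minimal sketch of the redirection idea in C# (shown as a tiny console program for clarity; the file name is arbitrary):
using System;
using System.IO;

class RedirectedTestOutput
{
    static void Main()
    {
        // Keep the original console writer so it can be restored afterwards.
        TextWriter original = Console.Out;

        using (var logFile = new StreamWriter("unit-test.log") { AutoFlush = true })
        {
            Console.SetOut(logFile); // everything written via Console now lands in the file
            Console.WriteLine("Starting InteropApplication with runInBackground set to true...");
            // ... run the tests / code under test here ...
            Console.SetOut(original); // restore normal console output
        }

        Console.WriteLine("Detailed output written to unit-test.log");
    }
}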
I would avoid putting extra try/catch statements in unit tests. First of all, an unhandled exception in a unit test will already cause the test to fail; that is the default behavior of NUnit. Essentially, the test harness already wraps each call to your test functions with that code. Also, by just using e.ToString() to display what happened, I believe you are losing a lot of information. By default, I believe NUnit will display not just the exception type but also the call stack, which I don't believe you're seeing with your method.
Secondly, there are times when it's necessary. For instance, you can use the [ExpectedException] attribute to declare which exception you expect a test to throw. Just be sure that when you test non-exception-related asserts (for instance, asserting that a list count is > 0) you put a good description in as the message argument to the assert. That is useful.
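For example (a sketch only: the attribute is NUnit 2.x syntax, which NUnit 3 dropped in favour of Assert.Throws, also shown below; all the test names are made up):
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ExceptionAndMessageExamples
{
    // NUnit 2.x: the attribute declares the exception the test expects.
    [Test, ExpectedException(typeof(InvalidOperationException))]
    public void Start_WithoutConfiguration_Throws()
    {
        throw new InvalidOperationException("not configured");
    }

    // NUnit 3 equivalent (also available from NUnit 2.5 onwards).
    [Test]
    public void Start_WithoutConfiguration_Throws_AssertThrows()
    {
        Assert.Throws<InvalidOperationException>(
            () => { throw new InvalidOperationException("not configured"); });
    }

    // For ordinary asserts, the message argument documents the intent when the test fails.
    [Test]
    public void List_HasAtLeastOneItem()
    {
        var items = new List<int> { 1 };
        Assert.IsTrue(items.Count > 0, "expected at least one item after initialisation");
    }
}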
Everything else is generally not needed. If your unit tests are so large that you start putting in WriteLines saying what "step" of the test you're on, that is generally a sign that your test should really be broken up into multiple smaller tests. In other words, you're not writing a unit test, but rather an integration test.
Have you looked at the xUnit style of unit test frameworks?
See Ron Jeffries' site for a rather large list.
One of the principles of these frameworks is that they produce little or no output during the test run, only an indication of success at the end. In the case of failures it's possible to get a more descriptive output of the reason for the failure.
The reason for this approach is that while everything is OK you don't want to be bothered by extra output, and if there is a failure you certainly don't want to miss it in the noise of other output.
Well, you should only know when a test failed and why it failed. It's no use knowing what's going on unless, for example, you have a loop and you want to know exactly where in the loop the test died.
I think you're making far more work for yourself. The tests either pass or fail; the failure should hopefully be the exception to the rule, and you should let the unit test runner handle and report the exception. What you're doing is adding cruft; the exception logged by the test runner will tell you the same thing.
The only time I would display what's happening is if there was some aspect of it that would be easier to test non-automatically. For example, if you've got code that takes a little while to run, and might get stuck in an infinite loop, you might want to print out a message every so often to indicate that it is still making progress.
Always make sure failure messages clearly stand out from other output, however.
You could have written the test method like this. It's up to your code-nose which style of test you prefer. I prefer not writing extra try-catches and Console.WriteLines.
public void TestApplicationStart()
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
}
Test frameworks that I have worked with would interpret any unhandled (and unexpected) exception as a failed test.
Think about the time you took to gold-plate this test and how many more meaningful tests you could have written with that time.