I'm trying to write unit tests for a piece of code that imports dart:html, and I ended up with a test class that uses useHtmlConfiguration();
Do I really have to do this? It seems that every time I run my tests they run in a browser, dart2js gets called, and it takes much longer than testing on the Dart VM. I tried it with Dartium and it also recompiles.
In fact, the only reason my code uses dart:html is that it uses HttpRequest in the package. In the end I might just put an interface in front of the class doing the HTTP request and mock it, but I was wondering whether there is an efficient way to get a good (read: quick) feedback loop without having to call dart2js every time I want to run my tests?
If your code imports dart:html, then that code, and any tests that import it, can only run in a browser.
I don't know why dart2js is called. You can run tests in Dartium or in content_shell --dump-render-tree (headless Dartium) as Dart code, without compiling to JS first.
You might prefer the http package, which provides an abstraction over HttpRequest that should work on both client and server (I haven't tested it this way myself yet).
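As a rough illustration of that abstraction, here is a minimal sketch using the current package:http API; the fetchGreeting function and the URL are made up for this example. Because the code depends only on http.Client, a test can inject a MockClient from package:http/testing.dart and run on the plain Dart VM:

import 'package:http/http.dart' as http;
import 'package:http/testing.dart';

// Code under test: depends on the abstract http.Client, not on dart:html.
Future<String> fetchGreeting(http.Client client, Uri url) async {
  final response = await client.get(url);
  return response.body;
}

void main() async {
  // In a unit test, substitute a MockClient so no browser or network is needed.
  final client = MockClient((request) async => http.Response('hello', 200));
  print(await fetchGreeting(client, Uri.parse('https://example.com/greeting')));
}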
I am writing a browser extension for Firefox and Chrome. I use the browser.* APIs and Mozilla's 'webextension-polyfill' module to make browser work in Chrome. I have a file called browser.ts that contains a single line:
export const browser = require('webextension-polyfill');
which then gets used in each file like so
import {browser} from '../path/to/browser.ts'.
I want to be able to write unit tests, but the line require('webextension-polyfill') is causing me a headache, as any test that touches browser throws "This script should only be loaded in a browser extension" and points at that require statement.
I have tried using rewire, proxyquire, and Jest mocks to keep the require statement from being called in the unit tests, but I have not been able to override it successfully. The only way I have been able to avoid this error is with a try-catch that returns a mock object on the exception, but that seems hacky and messy.
What is the best way to get around this and fix it? I see packages like mockzilla which look helpful for mocking, but I think they only become useful once I can get past the require of webextension-polyfill in the first place.
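For reference, the kind of Jest mock the question refers to usually looks something like the sketch below; the module factory and the mocked API surface are assumptions, not a known-good configuration for this extension:

// example.test.ts (hypothetical test file)
import { browser } from '../path/to/browser';

// jest.mock calls are hoisted above imports, so this factory replaces
// webextension-polyfill before browser.ts ever requires it.
jest.mock('webextension-polyfill', () => ({
  runtime: { sendMessage: jest.fn() },
  storage: { local: { get: jest.fn(), set: jest.fn() } },
}));

test('talks to the mocked browser API', async () => {
  await browser.runtime.sendMessage({ ping: true });
  expect(browser.runtime.sendMessage).toHaveBeenCalledWith({ ping: true });
});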
I realize it may sound like an odd request, and it certainly will not do wonders for test performance, but it's critical that I get a new AppDomain at the start of each unit test.
Currently I'm using xUnit with ReSharper as the test runner, but I'm willing to change if there's a different framework that would give me the behaviour I need.
The xUnit ReSharper runner doesn't have this kind of functionality, and I don't know of any test framework that does this out of the box. If you need each test to run in a new AppDomain, I'd write it so that each test creates a new AppDomain and runs some custom code in there.
You could probably use some of xUnit's features to make this a little easier - the BeforeAfterTestAttribute allows you to run code before and after each test, or you could pass in a fixture that provides functionality to set up and tear down the AppDomain.
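A minimal sketch of the BeforeAfterTestAttribute route, assuming xUnit.net 2.x on the .NET Framework (where AppDomain.CreateDomain is available); the attribute and its name are made up for illustration, and the test body would still have to marshal its work into the new domain:

using System;
using System.Reflection;
using Xunit.Sdk;

public class FreshAppDomainAttribute : BeforeAfterTestAttribute
{
    private AppDomain _domain;

    public override void Before(MethodInfo methodUnderTest)
    {
        // Create an isolated AppDomain before each decorated test runs.
        _domain = AppDomain.CreateDomain("test-" + methodUnderTest.Name);
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // Unload it afterwards so the next test starts from a clean domain.
        AppDomain.Unload(_domain);
    }
}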
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case has run or will work as expected. This means it should never matter in what order, or whether in parallel, the tests execute.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
When the project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is setup and teardown methods. The naming convention differs from framework to framework: setup might be called beforeEach or TestInitialize, and teardown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
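In the MSTest-based C++ framework the question uses, per-test setup and teardown look roughly like this sketch (the class and method names are illustrative only):

#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(ConfigTests)
{
public:
    TEST_METHOD_INITIALIZE(SetUp)      // runs before every TEST_METHOD
    {
        // build fresh fixtures here instead of sharing state between tests
    }

    TEST_METHOD_CLEANUP(TearDown)      // runs after every TEST_METHOD
    {
        // release fixtures so no state leaks into the next test
    }

    TEST_METHOD(ReadsConfig)
    {
        Assert::IsTrue(true);
    }
};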
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read config and second one to verify it
have a getter/setter for user's settings
test read method if it returns desired result (object, string or however you've designed it)
create mock config which you're expecting from read method and test if method accepts it
at this point, you should create multiple mock configs covering all the scenarios you expect, check that the method handles each of them, and fix it accordingly. This is what your code coverage measures.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly (see the sketch after this list)
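Here is a minimal sketch of the read/verify steps in the same C++ framework; ConfigReader, its methods and the sample config string are assumptions made up for this example:

#include "CppUnitTest.h"
#include <string>
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

// Minimal stand-in for the class under test.
struct ConfigReader
{
    std::string Read(const std::string& raw) const { return raw; }
    bool IsValid(const std::string& config) const { return !config.empty(); }
};

TEST_CLASS(ConfigReaderTests)
{
public:
    TEST_METHOD(ReadReturnsMockConfig)
    {
        ConfigReader reader;
        // Mock config: no file system involved, just the content we expect.
        std::string mockConfig = "{\"theme\":\"dark\"}";
        Assert::AreEqual(mockConfig, reader.Read(mockConfig));
    }

    TEST_METHOD(EmptyConfigIsRejected)
    {
        ConfigReader reader;
        Assert::IsFalse(reader.IsValid(""));
    }
};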
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, then all these parts, connected together, should work as intended. Additional tests, for example end-to-end (E2E) tests, aren't strictly necessary; I use them only to check that the whole application flow works and to catch errors quickly (e.g. an HTTP connection error).
How can I tell JUnit to skip certain lines of source code?
Context: I'm programming a WebService which uses the weblogic.logging.LoggingHelper class to create log entries.
Calls to this class are only useful if the code runs on a WebLogic server, but I want to test the code locally without having to comment the logging statements in and out all the time.
In order to avoid calling the LoggingHelper, you should use a mocking framework like Mockito, with which you can mock the weblogic.logging.LoggingHelper class and avoid calling the real method.
LoggingHelper lh = Mockito.mock(LoggingHelper.class);
when(lh.log(anyString())).thenReturn(...);
Here is the link to the framework:
https://code.google.com/p/mockito/
No, you cannot. You either have to use a mocking framework, as ashoka suggested, or rewrite your production code so that you can easily exchange the LoggingHelper.
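A minimal sketch of the "exchange the LoggingHelper" route: the production code depends on a small logging interface, and only one implementation touches WebLogic. The AppLogger/WebLogicLogger/NoOpLogger names are made up for this example (each type would live in its own file); LoggingHelper.getServerLogger() is the WebLogic call from the question:

public interface AppLogger {
    void info(String message);
}

// WebLogic-backed implementation, used only when deployed on the server.
public class WebLogicLogger implements AppLogger {
    private final java.util.logging.Logger logger =
            weblogic.logging.LoggingHelper.getServerLogger();

    @Override
    public void info(String message) {
        logger.info(message);
    }
}

// Injected in local unit tests instead of the WebLogic implementation.
public class NoOpLogger implements AppLogger {
    @Override
    public void info(String message) { /* intentionally empty */ }
}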
I have a unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both tests pass when they are run separately.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request (the user module) is dispatched again.
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug it. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't expose much about the object instances.
I know this is kind of a "guess what" question. It's hard to provide reliable data here because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less like described in: Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've found the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cache only bites when the tests run in a suite, because the static value persists from one test to the next. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I'd been stuck on this for a day and I hoped someone else had run into a similar kind of Heisenbug with unit tests in general.)
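For illustration, the kind of static cache described above looks roughly like this sketch; the helper class, method names and rules are made up, not taken from the actual application:

class My_Helper_Acl extends Zend_Controller_Action_Helper_Abstract
{
    private static $_acl = null;

    public static function getAcl($moduleName)
    {
        // The static cache persists across tests in the same suite, so the
        // second test silently reuses the ACL built for the first module.
        if (self::$_acl === null) {
            self::$_acl = self::_buildAclFor($moduleName);
        }
        return self::$_acl;
    }

    private static function _buildAclFor($moduleName)
    {
        $acl = new Zend_Acl();
        // ... module-specific rules would be added here ...
        return $acl;
    }
}

// In tests, this cache has to be cleared between test cases (e.g. in setUp()),
// or the helper rewritten so the ACL is not stored in a static.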
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
     ->resetResponse()
     ->dispatch('/news/index/index');