I am writing a browser extension for Firefox and Chrome. I use the browser.* APIs and Mozilla's 'webextension-polyfill' module to make browser work in Chrome. I have a file called browser.ts that contains a single line:
export const browser = require('webextension-polyfill');
which is then used in each file like so:
import {browser} from '../path/to/browser.ts';
I want to be able to write unit tests, but the require('webextension-polyfill') line is causing me a headache: any test that touches browser throws "This script should only be loaded in a browser extension" and points back at that require statement.
I have tried using rewire, proxyquire, and jest mocks to keep the require statement from being called in the unit tests, but I have not been able to override it successfully. The only way I have been able to avoid this error is with a try-catch that returns a mock object on the exception, but that seems hacky and messy.
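For reference, the try-catch workaround looks roughly like this (the shape of the fallback mock is just an assumption for illustration):

// browser.ts -- the try-catch workaround described above.
// In the extension the require resolves to the polyfill; in unit tests it
// throws, and a minimal stand-in object is returned instead.
let browserApi: any;
try {
  browserApi = require('webextension-polyfill');
} catch (e) {
  // Hypothetical fallback -- stub only the APIs the tests actually touch.
  browserApi = {
    storage: { local: { get: async () => ({}), set: async () => undefined } },
  };
}
export const browser = browserApi;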
What is the best way to get around this and fix it? I see packages like mockzilla which look helpful for mocking, but I think I could only use them once the require of webextension-polyfill itself works in the tests.
Related
Does jest automatically restore mocked modules between test files? For example, if I call jest.mock('some_module') in one file, do I need to ensure I call jest.unmock('some_module') after all the tests are run in that file?
From the documentation it's not clear to me whether that happens.
You don't have to reset the mocks: the tests are run in parallel, and every test file runs in its own sandbox. Even mocking JavaScript globals like Date or Math.random only affects the test file it happens in.
The only problem we have had so far was mocking process.env.NODE_ENV, which affected other tests running at the same time. Resetting it after the test run solved the problem.
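For example, something along these lines (a minimal sketch; how you capture and restore the original value depends on your setup):

// In the test file that overrides process.env.NODE_ENV:
const originalNodeEnv = process.env.NODE_ENV;

afterEach(() => {
  // Restore the original value so tests running at the same time
  // (or later in this file) are not affected by the override.
  process.env.NODE_ENV = originalNodeEnv;
});

it('behaves differently in production mode', () => {
  process.env.NODE_ENV = 'production';
  // ...assert against the production code path here...
});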
I'm trying to write unit tests for a piece of code that imports dart:html, and I ended up with a test class that uses useHtmlConfiguration();
Do I really have to do this? It seems that every time I run my tests they run in a browser, dart2js gets called, and it takes much longer than if I were testing with the Dart VM. I tried it with Dartium and it also recompiles.
In fact, the only reason my code uses dart:html is that it uses HttpRequest from that package. In the end I might just put an interface in front of the class doing the HTTP request and mock it, but I was wondering whether there is an efficient way to get a good (read: quick) feedback loop without having to call dart2js every time I want to run my tests.
If your code imports dart:html, then that code, and any tests importing it, can only be run in the browser.
I don't know why dart2js is called. You can run tests in Dartium or content_shell --dump-render-tree (headless Dartium) as Dart code without transpiling to JS first.
You might prefer the http package, which provides some abstraction over HttpRequest and should work on both client and server (I haven't tested it that way myself yet).
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other, and I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will run as expected. This means that it should never be a concern whether the tests execute sequentially or concurrently.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you are working with. In those cases, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
You can read more here.
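For instance, in Jest/TypeScript terms (UserService, fetchUser, and the returned shape are hypothetical names used only for illustration):

// A service that would normally make an HTTP request.
interface UserService {
  fetchUser(id: number): Promise<{ id: number; name: string }>;
}

// The mock mimics the real service and returns exactly what we expect,
// so the test does not depend on a slow or unreliable network call.
const mockUserService: UserService = {
  fetchUser: jest.fn().mockResolvedValue({ id: 1, name: 'Alice' }),
};

it('returns the stubbed user', async () => {
  const user = await mockUserService.fetchUser(1);
  expect(user.name).toBe('Alice');
});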
When a project becomes complex, the setup takes a fair number of lines and code starts to duplicate. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names used by NUnit, MSTest, and xUnit.net can be found on the xUnit.net CodePlex page.
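As a minimal illustration of the Setup/TearDown idea, here is a sketch in Jest/TypeScript terms (FakeDatabase is a hypothetical fixture; TestInitialize/TestCleanup in MSTest play the same role as beforeEach/afterEach):

// A tiny in-memory stand-in used only for illustration.
class FakeDatabase {
  private rows: Array<{ id: number; name: string }> = [];
  seed(rows: Array<{ id: number; name: string }>) { this.rows = [...rows]; }
  find(id: number) { return this.rows.find((r) => r.id === id); }
  close() { this.rows = []; }
}

let db: FakeDatabase;

// Setup: runs before every test, so each test starts from the same state.
beforeEach(() => {
  db = new FakeDatabase();
  db.seed([{ id: 1, name: 'Alice' }]);
});

// TearDown: runs after every test to release the fixture.
afterEach(() => {
  db.close();
});

it('finds the seeded row', () => {
  expect(db.find(1)?.name).toBe('Alice');
});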
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a rough sketch follows the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (object, string, or however you've designed it)
create a mock config of the kind you expect from the read method and test whether the verify method accepts it
at this point you should create multiple mock configs covering all the scenarios you can think of, check that the code handles each of them, and fix it accordingly (this is where code coverage comes in)
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check whether it was set correctly
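A rough sketch of the first few steps, in Jest/TypeScript terms (readConfig, isValidConfig, and the config shape are hypothetical names, not part of any particular framework):

// Hypothetical config module under test.
interface AppConfig { theme: string; }

function readConfig(raw: string): AppConfig {
  return JSON.parse(raw) as AppConfig;
}

function isValidConfig(config: AppConfig): boolean {
  return typeof config.theme === 'string' && config.theme.length > 0;
}

// Test the read method against a mock config string...
it('reads a config file into an object', () => {
  expect(readConfig('{"theme":"dark"}')).toEqual({ theme: 'dark' });
});

// ...then feed several mock configs through the verify method to cover
// the scenarios that can occur.
it('accepts a valid config and rejects an invalid one', () => {
  expect(isValidConfig({ theme: 'dark' })).toBe(true);
  expect(isValidConfig({ theme: '' })).toBe(false);
});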
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts connected together should work correctly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to ensure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
I am unit testing a StateMachineWorkflow, and I create my test methods by clicking in my test project and choosing Add - UnitTest. In the project window I select the workflow that I want to test and all the methods in it.
Visual Studio generated a Test Reference folder in my test project with an accessor to the workflow. It also generated all the TestMethod()s necessary for the testing. All test methods use a MyWorkflow_Accessor target = new MyWorkflow_Accessor(). When I need to call a function I just do something like target.SendEmail().
Everything works fine except for one thing: I can't use the workflow's WorkflowInstanceId. When the code reaches a line that uses it, the workflow throws an exception: "This is an invalid design time operation. You can only perform the operation at runtime."
Is it possible to inject the workflow ID in code? Is there any workaround for this situation? I use the WorkflowInstanceId in a lot of functions, and changing the workflow code to match my test doesn't seem like a good idea, because I believe the problem is in the test and not in the workflow.
It's not clear from your question if you're using WF 3.5 or WF4 with the state machine update. For the latter, you can use Microsoft.Activities.UnitTesting to test workflows.
It sounds like you're using WF 3.5, though. If this is new development, I would seriously consider moving to WF4. Microsoft basically rewrote WF, and the sooner you switch, the easier your migration path will be.
Otherwise, there is some information on testing with WF 3.5 on MSDN.
I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both of the tests pass when they are run as separate tests.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug it. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so there isn't much data about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've found the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cache only kicked in when the tests were run as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I was stuck on this for a day and hoped someone else had experienced a similar kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');