I have been working on creating unit tests which run local versions of a workflow. I followed this guide for an initial setup. With that setup, I have been able to successfully execute and test a workflow. The issue arises when I attempt to unit test an activity implementation that is marked with @ManualActivityCompletion. It appears that manual completion activities return normally within unit tests (without waiting for a completion/failure call).
I'm wondering if it is even possible to unit test manual completion activities in this way. My guess is that it is not, since I have seen no mention of it and I don't see any way to create a test ManualActivityCompletionClient. In that case, I'm wondering if anyone has any suggestions on how to unit test manual completion activities in a local workflow. I have attempted to create workarounds by using different threads and sync points, but it is useful to test against the actual behavior of completing/failing activities (the exceptions that are thrown, etc.). It may be worth mentioning that I have been able to write successful integration tests for manual completion activities.
Any help is greatly appreciated.
To test workflow logic that invokes an activity marked with @ManualActivityCompletion, just mock the client-side interface of that activity directly. As the client executes in the asynchronous context of the workflow, you can use Promises and the WorkflowClock to implement your tests.
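For illustration, here is a minimal sketch of that idea, assuming the AWS Flow Framework for Java (where @ManualActivityCompletion, Promise, and WorkflowClock live); the activity and client names below are invented. The test supplies its own client-side implementation whose method returns a Promise that the test itself completes, so the completion path can be driven explicitly instead of waiting for a real ManualActivityCompletionClient call:
import com.amazonaws.services.simpleworkflow.flow.core.Promise;
import com.amazonaws.services.simpleworkflow.flow.core.Settable;

// Hypothetical client-side interface generated for the manual-completion activity.
interface ApprovalActivitiesClient {
    Promise<String> waitForApproval(Promise<String> request);
}

// Test double handed to the workflow implementation instead of the real client.
class ApprovalActivitiesClientMock implements ApprovalActivitiesClient {

    private final Settable<String> result = new Settable<String>();

    @Override
    public Promise<String> waitForApproval(Promise<String> request) {
        // As with the real manual-completion activity, nothing becomes ready here
        // on its own; the test decides when the "activity" completes.
        return result;
    }

    // Called from the test body to simulate what ManualActivityCompletionClient.complete(...)
    // would do in production.
    void completeFromTest(String value) {
        result.set(value);
    }
}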
Related
I have problems with acceptance tests (ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once they fail, I think because of some async problems (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with the example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing; I run them all again and they fail. That blows my mind.
Common reasons for failures like this are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you are using a timer instead of a more reliable way to wait for completion (see the sketch below).
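For the second point, here is a minimal JUnit sketch (the asynchronous operation is a stand-in) of waiting on an explicit completion signal with a bounded timeout rather than a fixed sleep:
import static org.junit.Assert.assertTrue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.junit.Test;

public class AsyncCompletionTest {

    @Test
    public void waitsForCompletionSignalInsteadOfSleeping() throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        // Hypothetical code under test: the callback fires when the async work finishes.
        startAsyncWork(() -> done.countDown());

        // Bounded wait on the actual completion event; fails with a clear message
        // instead of hoping a fixed Thread.sleep(...) was long enough.
        assertTrue("async work did not finish in time", done.await(5, TimeUnit.SECONDS));
    }

    // Stand-in for the real asynchronous operation.
    private void startAsyncWork(Runnable onDone) {
        new Thread(onDone).start();
    }
}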
I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never be a concern whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or a part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
You can read more here.
When a project becomes complex, the setup takes a fair number of lines and code starts to duplicate. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest, and xUnit.net can be found on the xUnit.net CodePlex page.
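To make both ideas concrete, here is a compact JUnit/Mockito sketch (this question is about C++/MSTest, so treat this purely as an analogy; every name below is invented). The slow HTTP-backed dependency is replaced by a mock that returns a canned value, and the duplicated arrangement code lives in the Setup method:
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Before;
import org.junit.Test;

public class PriceServiceTest {

    // In production this would make a slow HTTP request.
    interface RateClient {
        double currentRate(String currency);
    }

    // Class under test.
    static class PriceService {
        private final RateClient rates;
        PriceService(RateClient rates) { this.rates = rates; }
        double inCurrency(double amount, String currency) {
            return amount * rates.currentRate(currency);
        }
    }

    private RateClient rates;
    private PriceService service;

    @Before  // Setup: runs before every test, so the arrangement code is written once.
    public void setUp() {
        rates = mock(RateClient.class);
        service = new PriceService(rates);
    }

    @Test
    public void convertsUsingTheMockedRate() {
        when(rates.currentRate("EUR")).thenReturn(2.0);  // no real HTTP call is made
        assertEquals(20.0, service.inCurrency(10.0, "EUR"), 1e-9);
    }
}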
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a sketch follows after the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (object, string, or however you've designed it)
create a mock config of the kind you're expecting from the read method and test whether the verification accepts it
at this point, you should create multiple mock configs that cover all possible scenarios, check that the code handles each of them, and fix it accordingly. This is also what drives your code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example end-to-end (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
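A rough JUnit sketch of those steps (everything here is hypothetical, and the tiny ConfigService stands in for the real reader/validator/updater so the example stays self-contained):
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class ConfigServiceTest {

    // Minimal stand-in for the code under test: read, verify, update.
    static class ConfigService {
        final Map<String, String> userSettings = new HashMap<>();

        Map<String, String> read(String raw) {
            Map<String, String> config = new HashMap<>();
            for (String line : raw.split("\n")) {
                String[] parts = line.split("=", 2);
                if (parts.length == 2) {
                    config.put(parts[0].trim(), parts[1].trim());
                }
            }
            return config;
        }

        boolean isValid(Map<String, String> config) {
            return config.containsKey("theme") && config.containsKey("language");
        }

        void updateUserSettings(Map<String, String> config) {
            if (!isValid(config)) {
                throw new IllegalArgumentException("invalid config");
            }
            userSettings.putAll(config);
        }
    }

    @Test
    public void readReturnsParsedKeyValuePairs() {
        Map<String, String> config = new ConfigService().read("theme=dark\nlanguage=en");
        assertEquals("dark", config.get("theme"));
    }

    @Test
    public void validationRejectsIncompleteConfig() {
        // One "mock" config per scenario; more scenarios mean better coverage.
        Map<String, String> missingLanguage = new HashMap<>();
        missingLanguage.put("theme", "dark");
        assertFalse(new ConfigService().isValid(missingLanguage));
    }

    @Test
    public void acceptedConfigUpdatesUserSettings() {
        ConfigService service = new ConfigService();
        service.updateUserSettings(service.read("theme=dark\nlanguage=en"));
        assertEquals("dark", service.userSettings.get("theme"));
    }
}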
We are in a situation where the database used in our test environment must be kept clean. This means that every test has a cleanup method which runs after each execution and deletes from the db all the data that was needed for the test.
We use SpecFlow, and keeping the db clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the execution is halted, so the generated data in the db is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method will be called? Is it possible to customize it?
SpecFlow uses the MSTest framework, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that when you stop execution in VS it won't impact the next run of the same test, as any remaining data will be cleaned up when the test runs again (see the sketch below).
The second is more complicated to set up (and slower when it runs) but means that you can run your tests in parallel (so is good if you use tools like NCrunch), and they won't interfere with each other.
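A minimal sketch of the first option in JUnit terms (the original setup is SpecFlow/MSTest, and the TestDatabase helper here is made up): the same cleanup runs both before and after every test, so data left behind by a halted run is removed the next time the test starts:
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class OrderScenarioTest {

    private final TestDatabase db = new TestDatabase();

    @Before
    public void cleanBeforeTest() {
        db.deleteTestData();  // removes leftovers from a previously halted run
    }

    @After
    public void cleanAfterTest() {
        db.deleteTestData();  // normal cleanup when the test completes
    }

    @Test
    public void createsAnOrder() {
        db.insertTestOrder("order-42");
        // ... exercise the system under test and assert on the result ...
    }

    // Minimal stand-in so the sketch is self-contained.
    static class TestDatabase {
        void deleteTestData() { /* delete the rows this test is responsible for */ }
        void insertTestOrder(String id) { /* insert fixture data */ }
    }
}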
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time and then switch to the real DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can switch the IDbSet<T> for some other implementation backed by an in-memory IQueryable<> implementation.
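The same idea sketched in Java terms (the answer above refers to EF6's IDbSet<T>, so this is only an analogy): the tests depend on a small repository interface, and an in-memory implementation stands in for the database-backed one most of the time:
import java.util.ArrayList;
import java.util.List;

// Abstraction the tests (and the production code) talk to.
interface CustomerRepository {
    void add(String customer);
    List<String> findAll();
}

// Fast implementation used by the bulk of the unit tests.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<String> customers = new ArrayList<>();

    public void add(String customer) {
        customers.add(customer);
    }

    public List<String> findAll() {
        return new ArrayList<>(customers);
    }
}

// A database-backed implementation of the same interface is swapped in only for the
// occasional run that checks the real reading and writing; its body would wrap the
// actual data access code.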
I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both of the tests pass when they are run as separate tests.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug this. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't give much data about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here, because it would take too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in Testing Zend Framework MVC Applications (phly, boy, phly) or Testing Zend Framework Controllers (Federico Cargnelutti).
Solution:
I've determined the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. This cached value only came into play when the tests were run as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I've been stuck with this for a day and I hoped someone else had experienced a similar kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');
I am developing unit test cases for an application using the Boost.Test library. There are certain APIs which can be tested directly.
But there are APIs which require interaction between test machines. For example, executing a certain API on machine 1 should trigger an API on test machine 2, and its response needs to be used again on machine 1 for successful completion.
How can I synchronize this? Does Boost provide other libraries for this interaction? If there are any other approaches, kindly suggest them.
Thanks in advance for your time and help.
There are two kinds of tests you can write for this interaction:
Unit test - using mocks/fakes you can fake the calls from the first component and fake the calls from the second component back. This way you can test the internal logic of the first component - for example, make sure that a time-out exception is raised if no response is returned (see the sketch after this list).
Integration/acceptance test - create both components as part of the test, configure them, and raise the call from component one.
In both kinds of tests you might be required to use events and WaitForSingleObject to make sure that the test won't end before the response has returned.
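As a sketch of the first kind (written in Java/JUnit purely for illustration, since the original question is about C++ and Boost.Test; all names are invented): the remote component on machine 2 is replaced by a fake that never answers, and the test verifies that machine 1's side raises a timeout:
import java.util.concurrent.TimeoutException;
import org.junit.Test;

public class RemoteCallTimeoutTest {

    // Hypothetical interface through which machine 1 talks to machine 2.
    interface RemoteEndpoint {
        String call(String request) throws TimeoutException;
    }

    // Stand-in for the machine 1 component under test.
    static class Machine1Client {
        private final RemoteEndpoint endpoint;

        Machine1Client(RemoteEndpoint endpoint) {
            this.endpoint = endpoint;
        }

        String execute(String request) throws TimeoutException {
            return endpoint.call(request);
        }
    }

    @Test(expected = TimeoutException.class)
    public void raisesTimeoutWhenPeerNeverResponds() throws Exception {
        // Fake machine 2 that simulates "no response within the deadline".
        RemoteEndpoint silentPeer = request -> {
            throw new TimeoutException("no response from machine 2");
        };
        new Machine1Client(silentPeer).execute("ping");
    }
}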