I have created a test project for a Windows Store app. There is a method, CreateDB, which creates a SQLite database. In my test project I have written a test method which calls CreateDB and checks that the database is created.
When I execute this test method everything goes well, but as soon as the test execution ends, the local storage gets deleted.
How do I prevent this?
I'm pretty sure the test framework is designed to create and then remove the data.
What I usually do in this case is serialize the object out, or just convert it to a byte array; then I put a breakpoint just before the end of the method, debug, break there, and copy the value out to a file.
I want to delete every new entry made by the tests so that they can be run again (both on the CI build and manually) without having to manually delete the database entries left by the previous run. I have found tearDown() and tearDownAfterClass(), but they seem to be useful only for connecting/disconnecting the database link. Can I use them to delete the entries made by the tests?
In set-up, create an online snapshot of the database, and in tear-down reset it to the snapshot point.
Whatever your tests have done to the data or the structure of the database is not part of the snapshot taken before the test, and is therefore gone after the reset.
Consult your DBA about the options for the database system in use, and you might want to bring in a system administrator as well: it may be that this is most easily done with a snapshot feature of the file system, and you would want to integrate all of this with your test suite.
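As a minimal sketch of the idea (in Python for illustration, assuming a file-based database such as SQLite; the paths are made up, and a real server DBMS would use its own snapshot tooling):

import shutil
import unittest

DB_PATH = "app.db"            # hypothetical test database file
SNAPSHOT_PATH = "app.db.bak"  # snapshot taken before each test

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        # Snapshot: copy the database file before the test touches it.
        shutil.copyfile(DB_PATH, SNAPSHOT_PATH)

    def tearDown(self):
        # Reset: anything the test changed, in data or structure,
        # disappears when the pre-test snapshot is restored.
        shutil.copyfile(SNAPSHOT_PATH, DB_PATH)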
Yes, you can do it via the methods you mentioned.
The tearDown() method runs after every single test inside the class.
The tearDownAfterClass() method runs once, after all tests inside the class have finished.
I would suggest you use a rollback pattern:
in the setUp() method, begin a transaction:
protected function setUp(): void
{
    // ...boilerplate to establish the connection
    // Everything the test changes from here on happens inside
    // this transaction.
    $this->connection->beginTransaction();
}
and in the tearDown() method, roll back the changes made by the test:
protected function tearDown(): void
{
    // Undo every change the test made to the database.
    $this->connection->rollback();
}
I am running some tests in Django, but they depend on a response from an outside service. For instance,
I might create a customer and wish to confirm it has been created in the outside service.
Once I am done with testing I want to remove any test customers from the outside service.
Ideally, there would be a method similar to setUp() that runs after all tests have completed.
Does anything like this exist?
You can make use of either unittest.TestCase.tearDown or unittest.TestCase.tearDownClass.
tearDown(...) is the method called immediately after each test method has run and the result has been recorded.
tearDownClass(...), on the other hand, is called after all tests in an individual class have run; that is, once per test class.
IMO, using the tearDownClass(...) method is more appropriate here, since you may not need to check/acknowledge the external service after each test case of the same class.
So Django's testing framework uses a Python standard library module, unittest. This is where the setUp() method comes from.
This library contains another method, tearDown(), that is called immediately after each test is run. More info can be found in the Python unittest documentation.
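A minimal sketch of the idea; the customer calls are stand-ins for the real external API (faked here with an in-memory dict so the example runs standalone):

import unittest

# Stand-in for the outside service; in real code these calls would go
# to the external API (hypothetical names, for illustration only).
service = {}

def create_customer(name):
    customer_id = len(service) + 1
    service[customer_id] = name
    return customer_id

def delete_customer(customer_id):
    service.pop(customer_id, None)

class CustomerTests(unittest.TestCase):
    created_ids = []

    def test_create_customer(self):
        customer_id = create_customer("test-customer")
        self.created_ids.append(customer_id)
        self.assertIn(customer_id, service)  # confirmed by the service

    @classmethod
    def tearDownClass(cls):
        # Runs once, after every test in this class has finished:
        # remove all customers the tests created.
        for customer_id in cls.created_ids:
            delete_customer(customer_id)

if __name__ == "__main__":
    unittest.main()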
I am working in Visual Studio 2013, and most of my unit tests will fail if I select "Run All", but if I select a failed test and then run it by itself, it passes. Additionally, if I select some of these tests and run them, the first test to run will pass and the others will fail. However, each will pass if run alone.
I've noticed that most of the failed tests throw a "System.NullReferenceException: Object reference not set to an instance of an object.", but again, this only appears if I run all tests.
I could run these tests one at a time, but I would very much like to avoid that. If anyone has encountered this problem before, how did you fix it?
Context: running this in Visual Studio 2013 with .NET 4.6.2 installed.
Update: There is a test initializer running before every test that sets the state for the test environment. There is also a dispose method that runs after every test to clean that environment up.
Now, what I see happening is that there's one object in particular that does not exist before the first pass of the test initializer, but does exist afterwards. In the cleanup after the first test, most other objects are deleted, but this one just becomes null. On the second (and all later) passes through the test initializer - so just before any actual test beyond the first - that object remains null rather than getting a file path like it did on the first pass.
Then, whenever any of the other tests tries to call that object, it gets a null value and throws that exception.
Check if you are setting a class-level object to null and then using it in a later test.
Without seeing your code, I can only take a guess, so here it is.
You are initializing your objects inside the test class's constructor instead of in the setup method. This means that multiple tests are using the same objects, and those objects can be in any state the other tests have put them in.
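The original is MSTest/C#, but the trap looks the same in any xUnit-style framework; a Python illustration:

import unittest

class LeakyTests(unittest.TestCase):
    # Shared by every test in the class, like a field initialized in
    # the test class's constructor: state leaks between tests.
    items = []

    def test_first(self):
        self.items.append(1)
        self.assertEqual(len(self.items), 1)

    def test_second(self):
        self.items.append(1)
        self.assertEqual(len(self.items), 1)  # fails: sees test_first's item

class IsolatedTests(unittest.TestCase):
    def setUp(self):
        # Rebuilt before every test, so no state can leak.
        self.items = []

    def test_first(self):
        self.items.append(1)
        self.assertEqual(len(self.items), 1)

    def test_second(self):
        self.items.append(1)
        self.assertEqual(len(self.items), 1)  # passes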
I ended up building a static method to dispose of that object after each test. I made the method thread-safe by checking whether the object was in use, taking a lock, and then checking again (double-checked locking).
Now, whenever the test initializer runs again, that object gets created and pointed at the proper path, rather than simply remaining null.
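For illustration, that check-lock-check sequence has this shape in Python (the shared resource is hypothetical; the original was a static C# method):

import threading

_lock = threading.Lock()
_resource = None  # stand-in for the shared object the tests fight over

def dispose_resource():
    global _resource
    # First check without the lock: cheap exit if there is nothing to do.
    if _resource is not None:
        with _lock:
            # Check again under the lock: another thread may have
            # disposed the resource while we were waiting for the lock.
            if _resource is not None:
                _resource.close()
                _resource = None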
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the initialization for Test1 runs first, then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
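A small sketch in Python with unittest.mock, assuming a hypothetical client with a get() method: the mock stands in for the slow external service, so the test makes no real network call.

import unittest
from unittest import mock

def fetch_greeting(client):
    # Code under test: depends on an external service via `client`.
    return client.get("/greeting").upper()

class GreetingTests(unittest.TestCase):
    def test_fetch_greeting(self):
        client = mock.Mock()
        client.get.return_value = "hello"  # the canned response we expect
        self.assertEqual(fetch_greeting(client), "HELLO")
        client.get.assert_called_once_with("/greeting")

if __name__ == "__main__":
    unittest.main()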
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a sketch in code follows the list):
have a method to read config and second one to verify it
have a getter/setter for user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config of what you expect from the read method and test whether the verify method accepts it
at this point, you should create multiple mock configs that exercise all possible scenarios, and fix the code accordingly. This is where code coverage comes into play.
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
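Here is a compact Python sketch of that plan; the JSON format and the required "username" key are assumptions made up for illustration:

import json
import unittest

def read_config(text):
    # Read: parse raw config text into an object (JSON here).
    return json.loads(text)

def verify_config(config):
    # Verify: a config is valid when it carries the keys we require.
    return isinstance(config, dict) and "username" in config

class UserSettings:
    def __init__(self):
        self._config = None

    def set_config(self, config):  # setter for the user's settings
        if not verify_config(config):
            raise ValueError("invalid config")
        self._config = config

    def get_config(self):          # getter for the user's settings
        return self._config

class ConfigTests(unittest.TestCase):
    def test_read_returns_expected_object(self):
        self.assertEqual(read_config('{"username": "alice"}'),
                         {"username": "alice"})

    def test_verify_rejects_invalid_mock_config(self):
        self.assertFalse(verify_config({"theme": "dark"}))

    def test_setter_updates_user_config(self):
        mock_config = {"username": "alice"}  # an accepted mock config
        settings = UserSettings()
        settings.set_config(mock_config)
        self.assertEqual(settings.get_config(), mock_config)

if __name__ == "__main__":
    unittest.main()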
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all of these parts, connected together, should work correctly. Additional tests, for example end-to-end (E2E) tests, aren't necessarily needed; I use them only to ensure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
We are in a situation where the database used as our test environment must be kept clean. This means that every test has a cleanup method which runs after each execution and deletes from the db all the data created for the test.
We use SpecFlow, and keeping the db clean works with this tool as long as the test execution is not halted. But while developing the test cases, it happens that the execution is halted, so the generated data in the db is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method is called? Is it possible to customize it?
SpecFlow uses the MSTest framework here, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest, and ensures that stopping execution in VS won't impact the next run of the same test, since any remaining data is cleaned up when the test starts.
The second is more complicated to set up (and slower to run), but means that you can run your tests in parallel (so it's good if you use tools like NCrunch) and they won't interfere with each other.
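A minimal sketch of the first option in Python with SQLite (the original context is SpecFlow/MSTest, and the table and naming convention are mine): because the cleanup also runs at the start, a halted previous run cannot leave data that breaks this one.

import sqlite3
import unittest

DB_PATH = "test_environment.db"  # hypothetical shared test database

def delete_test_data(connection):
    # Remove every row the tests create (including leftovers from a
    # previous run that was halted before its cleanup could run).
    connection.execute("DELETE FROM customers WHERE name LIKE 'test-%'")
    connection.commit()

class CustomerDbTests(unittest.TestCase):
    def setUp(self):
        self.connection = sqlite3.connect(DB_PATH)
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS customers (name TEXT)")
        delete_test_data(self.connection)   # clean at the start...

    def tearDown(self):
        delete_test_data(self.connection)   # ...and again at the end
        self.connection.close()

    def test_insert_customer(self):
        self.connection.execute(
            "INSERT INTO customers VALUES ('test-alice')")
        self.connection.commit()
        count = self.connection.execute(
            "SELECT COUNT(*) FROM customers WHERE name = 'test-alice'"
        ).fetchone()[0]
        self.assertEqual(count, 1)

if __name__ == "__main__":
    unittest.main()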
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time and then switch to the real DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can switch the IDbSet<T> for some other implementation backed by an in-memory IQueryable<T>.
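The suggestion above is C#/EF6-specific; a rough Python analogue of the same shape, with the repository interface made explicit (all names here are made up):

import sqlite3

class InMemoryItemStore:
    """Fast stand-in used for most test runs."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def all(self):
        return list(self._items)

class SqliteItemStore:
    """Real DB-backed implementation, exercised once in a while to
    check that actual reading and writing isn't broken."""
    def __init__(self, connection):
        self._connection = connection
        connection.execute("CREATE TABLE IF NOT EXISTS items (value TEXT)")

    def add(self, item):
        self._connection.execute("INSERT INTO items VALUES (?)", (item,))

    def all(self):
        return [row[0] for row in
                self._connection.execute("SELECT value FROM items")]

def count_items(store):
    # Code under test only depends on the shared add()/all() interface,
    # so either implementation can be plugged in.
    return len(store.all())

# Most tests: in-memory. Occasionally: SqliteItemStore(sqlite3.connect(...)).
store = InMemoryItemStore()
store.add("widget")
assert count_items(store) == 1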