I want to delete every new entry made by the tests, in order to make it possible to run them again (both on the CI build and manually) without having to manually delete the database entries made by the previous run. I have found tearDown() and tearDownAfterClass(), but they seem to be useful only for connecting/disconnecting the link with the database. Can I use them to delete the entries made by the tests?
In setUp() create an online snapshot of the database, and in tearDown() reset it to the snapshot point.
Whatever your tests have done to the data in the database or to its structure is not part of the snapshot taken before the test, and is therefore gone after the reset.
Please consult your DBA about the options for the database system in use, and you might want to bring in a system administrator as well: it may well be that this is most easily done with a snapshot feature of the file system, in which case you would want to integrate that with your test suite.
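One way to approximate this pattern, sketched in Python's unittest and assuming a PostgreSQL database with the standard pg_dump/pg_restore tools on the PATH (the database name and dump path are placeholders):

import subprocess
import unittest

DB_NAME = "app_test"             # placeholder database name
SNAPSHOT = "/tmp/app_test.dump"  # placeholder dump location

class SnapshotTestCase(unittest.TestCase):
    def setUp(self):
        # Take an online snapshot of the database in pg_dump's custom format.
        subprocess.run(["pg_dump", "-Fc", "-f", SNAPSHOT, DB_NAME], check=True)

    def tearDown(self):
        # Restore the snapshot, dropping whatever the test created or altered.
        subprocess.run(
            ["pg_restore", "--clean", "--if-exists", "-d", DB_NAME, SNAPSHOT],
            check=True,
        )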
Yes, you can do it via the methods you mentioned.
The tearDown() method runs after every single test inside the class.
The tearDownAfterClass() method runs once, at the end of all tests inside the class.
I would suggest you use a rollback pattern, so:
in the setUp() method you would start a transaction
protected function setUp(): void
{
    // ...some boilerplate to establish the connection
    $this->connection->beginTransaction();
}
and in the tearDown() method you would roll back the changes made in the tests
protected function tearDown(): void
{
    // Undo everything the test wrote inside the transaction.
    $this->connection->rollback();
}
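One caveat with the rollback approach: it only cleans up what the database can actually roll back. On MySQL, for example, DDL statements such as CREATE TABLE or ALTER TABLE cause an implicit commit, so schema changes made inside a test would survive the rollback.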
I am running some tests in Django, but they depend on a response from an outside service. For instance,
I might create a customer and wish to verify that it has been created in the outside service.
Once I am done with testing I want to remove any test customers from the outside service.
Ideally, there would be a method similar to setUp() that runs after all tests have completed.
Does anything like this exist?
You can make use of either unittest.TestCase.tearDown or unittest.TestCase.tearDownClass
tearDown(...) is the method that gets called immediately after each test method has been called and the result recorded.
tearDownClass(...), on the other hand, gets called after all the tests in an individual class have run. That is, once per test class.
IMO, using the tearDownClass(...) method is more appropriate here, since you may not need to check/acknowledge the external service after each test case of the same class.
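A minimal sketch of that in Python's unittest, with a fake stand-in for the outside service (FakeExternalService and its methods are placeholders for whatever client you actually use):

import unittest

class FakeExternalService:
    # Stand-in for the real outside-service client (a placeholder).
    def __init__(self):
        self.customers = {}
        self._next_id = 0

    def create_customer(self, name):
        self._next_id += 1
        self.customers[self._next_id] = name
        return self._next_id

    def customer_exists(self, customer_id):
        return customer_id in self.customers

    def delete_customer(self, customer_id):
        self.customers.pop(customer_id, None)

service = FakeExternalService()

class CustomerTests(unittest.TestCase):
    created_ids = []  # ids of customers created during the tests

    def test_create_customer(self):
        customer_id = service.create_customer("test-customer")
        self.created_ids.append(customer_id)
        self.assertTrue(service.customer_exists(customer_id))

    @classmethod
    def tearDownClass(cls):
        # Called once, after all tests in this class have run.
        for customer_id in cls.created_ids:
            service.delete_customer(customer_id)

if __name__ == "__main__":
    unittest.main()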
So Django's testing framework uses a Python standard library module, unittest. This is where the setUp() method comes from.
This library contains another method, tearDown(), that is called immediately after each test method runs. More info can be found here
Basically I created a new test file in a particular package with some bare bones test structure - no actual tests...just an empty struct type that embeds suite.Suite, and a function that takes in a *testing.T object and calls suite.Run() on said struct. This immediately caused all our other tests to start failing indeterministically.
The nature of the failures were associated with database unique key integrity violations on inserts and deletes into a single Postgres DB. This is leading me to believe that the tests were being run concurrently without calling our setup methods to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my use is that "go test" runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests are run in parallel with the other packages'. Definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works, and has nothing to do with testify. Our tests were being run on ./... This causes the underlying go test tool to run the tests of each package in parallel, as justinas pointed out. After digging around more on StackOverflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag to limit the number of packages to be run in parallel.
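For example, to run everything with at most one test binary at a time:

go test -p=1 ./...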
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that the packages/test files happened to be ordered and run in such a manner that concurrency wasn't an issue before the new packages/files were added.
We are in the situation where the database used as our test environment must be kept clean. This means that every test has a cleanup method, run after each execution, which deletes from the db all the data that was needed for the test.
We use SpecFlow, and keeping the db clean is achievable with this tool as long as the test execution is not halted. But while developing the test cases it happens that the execution is halted, so the data generated in the db is not cleaned up.
The question came up: what happens when I press "Halt execution" in VS 2013? How does VS stop the execution? What method will be called? Is it possible to customize it?
SpecFlow uses the MSTest framework here, and there is no option to change that.
I don't know how practical this is going to be for you, but as I see it you have a couple of options:
Run the cleanup code at the start and end of the test
Create a new database for every test
The first is the simplest and will ensure that when you stop execution in VS it won't impact the next test run (of the same test) as any remaining data will be cleaned up when the test runs.
The second is more complicated to set up (and slower when it runs) but means that you can run your tests in parallel (so is good if you use tools like NCrunch), and they won't interfere with each other.
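The first option, sketched here in Python's unittest for brevity (the table, file name, and the "test-" naming convention are placeholders; in SpecFlow the same logic would live in BeforeScenario/AfterScenario hooks):

import sqlite3
import unittest

class CustomerDbTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect("test.db")  # placeholder connection
        self.conn.execute("CREATE TABLE IF NOT EXISTS customers (name TEXT)")
        # Clean at the start as well, so leftovers from a halted run
        # cannot poison this run.
        self._cleanup()

    def tearDown(self):
        self._cleanup()
        self.conn.close()

    def _cleanup(self):
        self.conn.execute("DELETE FROM customers WHERE name LIKE 'test-%'")
        self.conn.commit()

    def test_insert_customer(self):
        self.conn.execute("INSERT INTO customers (name) VALUES ('test-alice')")
        self.conn.commit()
        count = self.conn.execute(
            "SELECT COUNT(*) FROM customers WHERE name = 'test-alice'"
        ).fetchone()[0]
        self.assertEqual(count, 1)

if __name__ == "__main__":
    unittest.main()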
What I have done in the past is make the DB layer switchable, so you can run the tests against in-memory data most of the time, and then switch to the DB once in a while to check that the actual reading and writing isn't broken.
This isn't too onerous if you use EF6 and can swap the IDbSet<T> for some other implementation backed by an in-memory IQueryable<T> implementation.
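The same switchable-layer idea, sketched in Python rather than EF6 (the repository interface and class names are illustrative, not any library's API):

from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    # Minimal interface the application code depends on.
    @abstractmethod
    def add(self, customer): ...

    @abstractmethod
    def all(self): ...

class InMemoryCustomerRepository(CustomerRepository):
    # Fast in-memory implementation used for most test runs.
    def __init__(self):
        self._customers = []

    def add(self, customer):
        self._customers.append(customer)

    def all(self):
        return list(self._customers)

# A DB-backed implementation would also subclass CustomerRepository;
# the test setup then picks one implementation, e.g. via a config flag.
repo = InMemoryCustomerRepository()
repo.add("alice")
assert repo.all() == ["alice"]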
I have created a test project for Windows Store App. There is a method CreateDB which creates a SQLite database. In my test project I have written a test method which calls the CreateDB method and checks if the database is created.
When I execute this test method everything goes well, but as soon as the test execution ends the Local Storage gets deleted.
How do I prevent this?
I'm pretty sure the test framework is designed to create and then remove the data.
What I usually do in this case is serialize out the object or just convert it to a byte array, then put a breakpoint just before the end of the method, debug, break there, and copy the value out to a file.
In order to launch a test based on a fake application using an in-memory database, Play Framework 2 promotes doing it this way:
FakeApplication(additionalConfiguration = inMemoryDatabase())
Is it necessary to specify additionalConfiguration = inMemoryDatabase() if the application.conf already declares an in-memory database (h2) dedicated to tests?
I guess this additional configuration forces a clean in-memory database to be redeclared for each fake application, rather than sharing the same one across the whole test suite. That would give full isolation for each test and spare us from having to define setUp() and tearDown() methods to manage it.
What is it useful for?