I am testing my GraphQL endpoint, which queries the DB through Sequelize, with Jest. But sometimes my test files seem to have a race condition. What I mean is, for every test file I have a DB setup block such as
beforeAll(async () => {
  await database.sequelize.sync({ force: true })
  await database.User.create(user)
})
so that in each test file I can easily predict what is in the DB. Sometimes this causes an issue where every test file tries to do .sync() at the same time: one test file is creating tables while another is dropping them.
Even though I have used await throughout my tests, this doesn't seem to guarantee that one test file will wait for another to finish.
What would be the best approach here to make sure every test file can predict what is in the DB by starting from a clean DB, while at the same time not conflicting with other tests? Is it actually a good idea for a test to wait for others to finish? It seems suboptimal to me, as it will take more time to run the whole test suite.
How many workers are you using? Tests run serially within the same test file, but not between different test files.
There are a number of options to try:
1 - Creating and using a unique DB for every test file (see the sketch after this list).
2 - Making them all run serially with the --runInBand flag (--maxWorkers=1 also limits Jest to a single worker, while --runInBand additionally runs the tests in the parent process).
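For option 1, one common variant is a unique database per Jest worker rather than per file, since files assigned to the same worker already run serially and can safely share one. A minimal sketch, assuming Postgres and that the myapp_test_N databases already exist; the module and names here are illustrative, not your actual setup:

// database.js (illustrative): pick a database based on the Jest worker
const { Sequelize } = require('sequelize');

// Jest sets JEST_WORKER_ID to "1", "2", ... for each parallel worker
const workerId = process.env.JEST_WORKER_ID || '1';
const dbName = `myapp_test_${workerId}`;

const sequelize = new Sequelize(dbName, 'postgres', 'postgres', {
  host: 'localhost',
  dialect: 'postgres',
  logging: false,
});

module.exports = { sequelize };

Each test file then runs sync({ force: true }) only against its own worker's database in beforeAll, so files running in parallel never drop each other's tables.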
Why, when creating instances as class attributes:
class ModelTestsProducts(TestCase):
    # Create example objects in the database
    product_category = models.Product_category.objects.create(
        name='Spring'
    )
    product_segment = models.Product_segment.objects.create(
        name='PS1',
        product_category=product_category
    )
    product_group = models.Product_group.objects.create(
        name='PG1',
        product_segment=product_segment
    )

    def test_product_category(self):
        self.assertEqual(str(self.product_category), self.product_category.name)

    def test_product_segment(self):
        self.assertEqual(str(self.product_segment), self.product_segment.name)

    def test_product_group(self):
        self.assertEqual(str(self.product_group), self.product_group.name)
I am getting the following error when running the test for the 2nd time:
django.db.utils.IntegrityError: duplicate key value violates unique constraint "products_product_category_name_key"
DETAIL: Key (name)=(dadsad) already exists.
When I use a setUp method and create the objects inside this setUp method it works fine, but I can't understand why the above approach somehow creates every object multiple times and thus violates the unique constraint set in the model.
Is it because the Django test suite somehow calls this class every time a test function is run, so every attribute is assigned multiple times?
But then if I move the object assignment outside the class (still in the test file), I also get this duplicate error, so that would mean the whole test file is being executed multiple times every time the tests are run.
One more thing: I use Docker to run this Django app, and I run the Django tests from a docker-compose command.
If you're using Django's TestCase, Django will run each test in a separate transaction that is rolled back after the test finishes, which means there will be no changes in your database and everything you try to create will only exist inside your test. This is done to make sure your tests are not affecting each other.
The setUp function is also executed inside this transaction, and it is invoked before every test in your test class. But everything you run outside of that function, in your case in the class body, will not be wrapped in such a transaction, so it won't be rolled back between your tests. If this code is reached twice (which may happen under the hood in the test runner), it will try to create data in the database that already exists, and it will fail.
If you want to optimise how your tests are executed, you may use setUpTestData, so your test data is initialised only once for all tests in a single test class. It will be wrapped in an outer transaction and rolled back after all tests from that test case have run.
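A minimal sketch of what that looks like for the models in the question (model names taken from the question; the import path is assumed):

from django.test import TestCase

from myapp import models  # assumed import path


class ModelTestsProducts(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once per test class, inside a transaction that is rolled
        # back after all tests in the class have finished.
        cls.product_category = models.Product_category.objects.create(
            name='Spring'
        )
        cls.product_segment = models.Product_segment.objects.create(
            name='PS1',
            product_category=cls.product_category
        )
        cls.product_group = models.Product_group.objects.create(
            name='PG1',
            product_segment=cls.product_segment
        )

    def test_product_category(self):
        self.assertEqual(str(self.product_category), self.product_category.name)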
I am having an issue with tests sporadically failing when using NUnit 3 with parallel test running.
We have a number of tests that are currently structured as follows:
[TestFixture]
public class CalculateShipFromStoreShippingCost
{
    private IService _service;
    private IClient _client;

    [SetUp]
    public void SetUp()
    {
        _service = Substitute.For<IService>();
        _client = new Client(_service);
    }

    [Test]
    public async Task WhenScenario1()
    {
        _service.Apply(Arg.Any<int>()).Returns(1);
        var result = await _client.DoTheThing();
        Assert.AreEqual(1, result);
    }

    [Test]
    public async Task WhenScenario2()
    {
        _service.Apply(Arg.Any<int>()).Returns(2);
        var result = await _client.DoTheThing();
        Assert.AreEqual(2, result);
    }
}
Sometimes the tests fail because one of the substitutes is returning the value set up for the other test.
How should this test be structured so that it runs reliably with NUnit when run in parallel?
You haven't shown any Parallelizable attributes in your example, so I assume you are using the attribute at a higher level, most likely on the assembly. Otherwise, no parallel execution would occur. Further, since you say the test cases are running in parallel, you have apparently specified ParallelScope.Children.
The two test cases shown in your fixture cannot run in parallel. You should bear in mind that the SetUp method runs for each of the tests. So each of your two tests sets the value of _service, which is part of the state of the single instance of CalculateShipFromStoreShippingCost, which is shared by both tests. That is why you are seeing the "wrong" substitute being returned at times.
It is not possible for two test cases to run reliably in parallel if they both change the state of the fixture. Note that it does not matter whether the assignment to _service takes place in the test method itself or in the SetUp method - both are executed as part of the test case. So, you have to either stop running these two cases in parallel or stop changing the state.
To stop running the tests in parallel, you simply add [NonParallelizable] to each test method. If you are not using the latest framework version, use [Parallelizable(ParallelScope.None)] instead. Your other tests will continue to run in parallel, but these two will not.
Alternatively, use ParallelScope.Fixtures at the assembly level. This will cause fixtures to run in parallel by default, while the individual test cases within each of them run sequentially. When using ParallelizableAttribute at the assembly level, it is often best to take this more conservative approach, adding more parallelism within particular fixtures where it is useful.
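If you go the assembly-level route, a minimal sketch of the attributes (placed in any source file, for example AssemblyInfo.cs; the level-of-parallelism value is only an example):

using NUnit.Framework;

// Run fixtures in parallel with each other, but the test cases inside
// each fixture sequentially.
[assembly: Parallelizable(ParallelScope.Fixtures)]

// Optional: cap the number of worker threads NUnit uses.
[assembly: LevelOfParallelism(4)]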
An entirely different approach is to make your tests stateless. Eliminate the _service member and use a local value within the test method itself. Each of your tests would add two lines like...
var service = Substitute.For<IService>();
var client = new Client(service);
Given the example you've shown, I would imagine you are getting very little performance gain from running the two methods in parallel, so I would not use that last approach unless I saw a specific performance reason to do so.
As a final note... if you make your fixtures run in parallel by default (either with an assembly-level attribute or with attributes on each fixture) and place no Parallelizable attribute on your test cases, NUnit uses an optimisation whereby all the tests within a fixture run on the same thread. This saving in context switches will often make up for the performance you might otherwise have gained by running the individual cases in parallel.
Has anyone worked on unit testing ag-Grid components in Angular 2?
For me, this.gridOptions.api remains undefined when the test cases run.
Sorry to be a little late to the party, but I was looking for an answer to this just a couple of days ago, so I wanted to leave an answer for anyone else who ends up here. As mentioned by Minh above, the modern equivalent of $digest does need to be run in order for the ag-Grid APIs to be available.
This is because, after onGridReady() has run, you have access to the APIs via its parameter, as shown below. It is run automatically when a component containing a grid initialises, provided it is wired up in the grid template: (gridReady)="onGridReady($event)".
public onGridReady(params)
{
    this.gridOptions = params;
}
This means you could then access this.gridOptions.api and it would be defined. You need to re-create this in your test by running detectChanges(). Here is how I got it working for my project:
fixture = TestBed.createComponent(TestComponent);
component = fixture.componentInstance;
fixture.detectChanges(); // this ensures onGridReady() is called
This should in turn result in .api being defined when running the tests. This was on Angular 6.
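For context, a minimal sketch of the surrounding TestBed setup I would expect, with TestComponent and the module list being illustrative rather than taken from a real project:

import { ComponentFixture, TestBed } from '@angular/core/testing';
import { AgGridModule } from 'ag-grid-angular';

import { TestComponent } from './test.component'; // the component hosting the grid

describe('TestComponent', () => {
  let fixture: ComponentFixture<TestComponent>;
  let component: TestComponent;

  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [TestComponent],
      imports: [AgGridModule.withComponents([])], // older ag-grid-angular versions use withComponents
    });

    fixture = TestBed.createComponent(TestComponent);
    component = fixture.componentInstance;
    fixture.detectChanges(); // triggers onGridReady()
  });

  it('exposes the grid api after change detection', () => {
    expect(component.gridOptions.api).toBeDefined();
  });
});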
Occasionally the test may have to perform an await or a tick:
it('should test the grid', fakeAsync(async () => {
    // also try tick(ms) if a lot of data is being tested
    // try to keep test rows as short as possible

    // this line seems essential at times for onGridReady to be processed
    await fixture.whenStable();

    // perform your expects... after the await
}));
If you are using ag-Grid Enterprise, make sure to include import 'ag-grid-enterprise'; in your test file; otherwise you will see console errors and gridReady will never be called:
Row Model "Server Side" not found. Please ensure the ag-Grid Enterprise Module @ag-grid-enterprise/server-side-row-model is registered.
It remains undefined because the onGridReady event has not been invoked yet. I'm not sure about Angular 2, because I'm using AngularJS and have to call $digest in order to invoke onGridReady.
I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird things I am encountering).
My brute-force approach was to fire python manage.py test myapp in a loop using either Popen or os.system, but is there a purer way?
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...

    def test_01(self):
        ...

    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless (I was told to add an empty runTest method, but that doesn't sound right at all...).
Well, according to Python's unittest.TestCase documentation:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run my SimpleTest N times. I need to keep track of passes and failures out of N.
What option do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium: unlike a unit test, the environment is the same and there's no need for extra work to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definitive proof that you're free from race conditions; there's no guarantee it will recreate all possible combinations of execution order among processes.
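That said, if you do want to drive the same TestCase from a loop and count passes and failures yourself, a sketch along these lines should work (kept Python 2.6 compatible, since that is what the traceback shows; the test body is a placeholder):

import unittest

class SimpleTest(unittest.TestCase):
    def setUp(self):
        pass  # build the document fixture here

    def test_01(self):
        self.assertTrue(True)  # replace with the real assertions

def main(n):
    # setUp/tearDown are invoked automatically around each test;
    # they are never added to the suite themselves.
    loader = unittest.TestLoader()
    runner = unittest.TextTestRunner(verbosity=0)
    passed = failed = 0
    for _ in range(n):
        suite = loader.loadTestsFromTestCase(SimpleTest)
        result = runner.run(suite)
        if result.wasSuccessful():
            passed += 1
        else:
            failed += 1
    print "%d passed, %d failed out of %d runs" % (passed, failed, n)

if __name__ == "__main__":
    main(100)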
I'm having an integration test fail because of test pollution (tests pass or fail depending on which order they are run in).
What baffles me a bit, however, is that data I mocked in a unit test with mockDomain(Media.class, [new Movie(...)]) seems to still be present and available in other tests, even integration tests.
Is this the expected behaviour? Why doesn't the test framework clean up after itself for each test?
EDIT
Really strange, the documentation states that:
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. Grails will use an in-memory HSQLDB database for integration tests and clear out all the data from the database in between each test.
However, in my integration test I have the following code:
protected void setUp() {
    super.setUp()
    assertEquals("TEST POLLUTION!", 0, Movie.count())
    ...
}
Which gives me the output:
TEST POLLUTION! expected:<0> but was:<1>
Meaning that there is data present when there shouldn't be!
Looking at the data present in Movie.list(), I find that it corresponds to data set up in a previous test (a unit test):
protected void setUp() {
    super.setUp()
    // mock the superclass and subclasses as instances
    mockDomain(Media.class, [
        new Movie(id: 1, name: 'testMovie')
    ])
    ...
}
Any ideas why I'm experiencing these issues?
It's also possible that the pollution is in the test database. Check DataSource.groovy to see what is being used for the test environment. If you have it set to use a database where dbCreate is set to something other than "create-drop", any previous contents of the database could also be showing up.
If this is the case, the pollution has come from an entirely different source. Instead of coming from the unit tests, it has actually come from the database: when you switch to running the integration tests, you get connected to a real database with all the data it contains.
We experienced this problem, in that our test environment was set to have dbCreate as "update". Quite why this was set for integration tests puzzled me, so I switched to dbCreate "create-drop" to make sure that when running the test suites we started with a clean database (a sketch of that configuration follows).
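For reference, a minimal sketch of that test block in grails-app/conf/DataSource.groovy; the URL and database name are illustrative:

environments {
    test {
        dataSource {
            // Drop and recreate the schema for each test run so the
            // integration tests always start from an empty database.
            dbCreate = "create-drop"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}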