I have two different tests, and both fail when run alongside other tests. I'm going to show one of them here. This test checks that synonyms are working. I've got the following synonyms in my synonym.txt file:
knife, machete
bayonet, dagger, sword
The unit test looks like this:
def test_synonyms(self):
    """
    Test that synonyms are working
    """
    user = UserFactory()
    SubscriberFactory.create(user=user)
    descriptions = [
        'bayonet',
        'dagger',
        'sword',
        'knife',
        'machete'
    ]
    for desc in descriptions:
        ListingFactory.create(user=user,
                              description="Great {0} for all of your undertakings".format(desc))
    call_command('update_index', settings.LISTING_INDEX, using=[settings.LISTING_INDEX])
    self.sqs = SearchQuerySet().using(settings.LISTING_INDEX)
    self.assertEqual(self.sqs.count(), 5)
    # 3 of the 5 are in one group, 2 in the other
    self.assertEqual(self.sqs.auto_query('bayonet').count(), 3)
    self.assertEqual(self.sqs.auto_query('dagger').count(), 3)
    self.assertEqual(self.sqs.auto_query('sword').count(), 3)
    # 2 of the 5 in this group
    self.assertEqual(self.sqs.auto_query('knife').count(), 2)
    self.assertEqual(self.sqs.auto_query('machete').count(), 2)
The problem is that when I run the test in isolation with the command ./manage.py test AnalyzersTestCase.test_synonyms it works fine. But if I run it along with other tests, it fails, returning 1 result where it should return 3. If I run a raw Elasticsearch query at that point, Elasticsearch returns 1 result. So it must be something in the setup of the index... but I'm deleting the index in the setUp() method, so I don't see how it can be in a different state when run in isolation compared to when it's run alongside other tests.
Any help you can give would be great.
Figured it out...
Haystack's connections singleton needed to be cleared between tests, so:
import haystack

for key, opts in haystack.connections.connections_info.items():
    haystack.connections.reload(key)
call_command('clear_index', interactive=False, verbosity=0)
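For context, a minimal sketch of where this can live, assuming the test class name from the question (AnalyzersTestCase) and a standard django.test.TestCase:

import haystack
from django.core.management import call_command
from django.test import TestCase


class AnalyzersTestCase(TestCase):

    def setUp(self):
        # Reload every Haystack connection so index state left over from
        # earlier tests doesn't leak into this one, then wipe the index.
        for key, opts in haystack.connections.connections_info.items():
            haystack.connections.reload(key)
        call_command('clear_index', interactive=False, verbosity=0)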
Related
I have a need for a uniqueID within my Django code. I wrote a simple model like this
class UniqueIDGenerator(models.Model):
    nextID = models.PositiveIntegerField(blank=False)

    @classmethod
    def getNextID(self):
        if(self.objects.filter(id=1).exists()):
            idValue = self.objects.get(id=1).nextID
            idValue += 1
            self.objects.filter(id=1).update(nextID=idValue)
            return idValue
        tempObj = self(nextID=1)
        tempObj.save()
        return tempObj.nextID
Then I wrote a unit test like this:
class ModelWorking(TestCase):

    def setUp(self):
        return None

    def test_IDGenerator(self):
        returnValue = UniqueIDGenerator.getNextID()
        self.assertEqual(returnValue, 1)
        returnValue = UniqueIDGenerator.getNextID()
        self.assertEqual(returnValue, 2)
        return None
When I run this test by itself, it runs fine. No issues.
When I run this test as part of a suite that includes a bunch of other unit tests (which also call getNextID()), this test fails: getNextID() always returns 1. Why would that be happening?
I figured it out.
Django runs each test in a transaction to provide isolation. Doc link.
Since my other tests also call getNextID(), the row created there disappears once the first test that makes such a call completes, because its transaction is rolled back. The auto-increment counter is not reset by the rollback, though, so later rows get higher ids and subsequent tests never find id=1, which is why every subsequent call returns 1.
Even though I don't think I would face that situation in production, I went ahead and changed my code to use .first() instead of filtering on id=1, like this:
def getNextID(self):
    firstRow = self.objects.first()
    if(firstRow):
That way I believe it would better handle any future scenario when the database table might be emptied.
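A fuller sketch of what that revision could look like (a hypothetical completion that just mirrors the original logic, but keys off the first row instead of id=1):

from django.db import models


class UniqueIDGenerator(models.Model):
    nextID = models.PositiveIntegerField(blank=False)

    @classmethod
    def getNextID(cls):
        # Work from whichever row exists first instead of hard-coding id=1,
        # so the generator still behaves if the table was emptied and re-seeded.
        firstRow = cls.objects.first()
        if firstRow:
            idValue = firstRow.nextID + 1
            cls.objects.filter(pk=firstRow.pk).update(nextID=idValue)
            return idValue
        tempObj = cls(nextID=1)
        tempObj.save()
        return tempObj.nextID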
I am trying to run a TestCase on my model.
I already have a MySQL database (specifically MariaDB, managed through the HeidiSQL GUI) created and connected, with the relevant data for this project already inside it.
My test.py code is as follows:
import numpy

from django.test import TestCase

# assuming the model lives in this app's models module (import path is a guess)
from .models import ArrivalProbabilities


class TestArrivalProbabilities(TestCase):

    def test_get_queryset_test(self):
        print("Hello Steve!")
        i = 1
        self.assertEqual(i, 1)
        l = [3, 4]
        self.assertIn(4, l)

    def test_get_queryset_again(self):
        query_set = ArrivalProbabilities.objects.all()
        print(query_set)
        n = len(query_set)
        print(n)
        bin_entries = []
        bin_edges = []
        # Print each row
        for i in range(n):
            print(query_set[i])
            if query_set[i].binEntry is not None:
                bin_entries.append(query_set[i].binEntry)
            bin_edges.append(query_set[i].binEdge)
        print(bin_entries, bin_edges)
        hist = (numpy.array(bin_entries), numpy.array(bin_edges))
However, the output in the terminal is this:
(venv) C:\Users\Steve\uni-final-project>python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
<QuerySet []>
0
[] []
.Hello Steve!
.
----------------------------------------------------------------------
Ran 2 tests in 0.016s
OK
Destroying test database for alias 'default'...
I have tried to figure out why the MySQL database I built isn't being used. I read that Django creates a test 'dummy database' to use for the tests and then tears it down afterwards, but am I missing something really obvious?
I don't think it is a connection issue, as I have pip-installed mysqlclient. I have also tried creating objects as shown in https://docs.djangoproject.com/en/3.2/topics/db/queries/#creating-objects but I still get the same result.
I have read the documentation but I am struggling with certain aspects of it as I am new to software development and this course is quite a steep learning curve.
I checked to see whether this question had already been asked, but I couldn't find an answer to it. Apologies in advance if it has been answered somewhere.
Any help in the right direction or a solution is much appreciated.
Thanks.
First of all, you should read up on testing rather than Django's creating-objects documentation. When you test your code, there are a few standard approaches to handling the database. The official Django testing documentation shows how database instances are dealt with: when you start your tests, Django creates a separate test database and runs your test cases against it, which is why your existing data doesn't show up. To populate that test database you can use a setUp method.
Add this and try again, please:
class TestArrivalProbabilities(TestCase):

    def setUp(self):
        ....
        ArrivalProbabilities.objects.create(...)
        ArrivalProbabilities.objects.create(...)
        ....
After this part you will no longer see an empty queryset.
You can also use fixtures, as sketched below.
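A minimal sketch of the fixture approach, assuming a JSON fixture named arrival_probabilities.json in one of the app's fixtures/ directories (the fixture name and import path are hypothetical):

from django.test import TestCase

from .models import ArrivalProbabilities  # hypothetical import path


class TestArrivalProbabilities(TestCase):
    # Django loads these fixtures into the test database before each test.
    fixtures = ['arrival_probabilities.json']

    def test_get_queryset_again(self):
        # The queryset is no longer empty because the fixture seeded the test DB.
        self.assertTrue(ArrivalProbabilities.objects.exists())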
How do I forcibly skip a unit test in Django?
@skipIf and @skipUnless are all I found, but I just want to skip a test right now for debugging purposes while I get a few things straightened out.
Python's unittest module has a few decorators:
There is plain old @skip:
from unittest import skip

@skip("Don't want to test")
def test_something():
    ...
If you can't use @skip for some reason, @skipIf should work. Just trick it into always skipping with the argument True:
from unittest import skipIf

@skipIf(True, "I don't want to run this test yet")
def test_something():
    ...
unittest docs
Docs on skipping tests
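If the goal is to park a whole group of tests while debugging, the same @skip decorator can also be applied to an entire test class (the class and test names below are made up):

from unittest import skip

from django.test import TestCase


@skip("Temporarily disabled while debugging")
class CheckoutFlowTests(TestCase):

    def test_one(self):
        ...

    def test_two(self):
        ...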
If you are looking to simply not run certain test files, the best way is probably to use fab or another tool to run only particular tests.
Django 1.10 allows the use of tags for unit tests. You can then use the --exclude-tag=tag_name flag to exclude certain tags:
from django.test import tag

class SampleTestCase(TestCase):

    @tag('fast')
    def test_fast(self):
        ...

    @tag('slow')
    def test_slow(self):
        ...

    @tag('slow', 'core')
    def test_slow_but_core(self):
        ...
In the above example, to exclude your tests with the "slow" tag you would run:
$ ./manage.py test --exclude-tag=slow
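The inverse flag also exists: --tag (available since Django 1.10 as well) runs only the tests carrying a given tag, e.g.:
$ ./manage.py test --tag=core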
I'm writing unit tests for a celery task using django-nose. It's fairly typical; a blank test database (REUSE_DB=0) that is pre-populated via a fixture at test time.
The problem I have is that even though the TestCase is loading the fixture and I can access the objects from the test method, the same query fails when executed within an async celery task.
I've checked that settings.DATABASES["default"]["NAME"] is the same in both the test method and the task under test. I've also verified that the task under test behaves correctly when invoked as a regular method call.
And that's about where I'm out of ideas.
Here's a sample:
class MyTest(TestCase):

    fixtures = ['test_data.json']

    def setUp(self):
        settings.CELERY_ALWAYS_EAGER = True  # seems to be required; if not I get socket errors for Rabbit
        settings.CELERY_EAGER_PROPAGATES_EXCEPTIONS = True  # exposes errors in the code under test

    def test_city(self):
        self.assertIsNotNone(City.objects.get(name='brisbane'))
        myTask.delay(city_name='brisbane').get()
        # The following works fine: myTask('brisbane')

from celery.task import task

@task()
def myTask(city_name):
    c = City.objects.count()  # gives 0
    my_city = City.objects.get(name=city_name)  # raises DoesNotExist exception
    return
This sounds a lot like a bug in django-celery 2.5 that was fixed in 2.5.2: https://github.com/celery/django-celery/pull/116
The brief description of the bug is that the django-celery loader was closing the DB connection prior to executing the task, even for eager tasks. Since the tests run inside a transaction, the new connection opened for the task execution can't see the data created in setUp.
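As a side note, once on a fixed django-celery (2.5.2 or later), the eager flags can also be applied with override_settings instead of mutating settings in setUp; a rough sketch, assuming django-celery picks up CELERY_ALWAYS_EAGER from the Django settings at call time (the import paths are hypothetical):

from django.test import TestCase
from django.test.utils import override_settings

from myapp.models import City   # hypothetical import path
from myapp.tasks import myTask  # hypothetical import path


@override_settings(CELERY_ALWAYS_EAGER=True,
                   CELERY_EAGER_PROPAGATES_EXCEPTIONS=True)
class MyEagerTest(TestCase):

    fixtures = ['test_data.json']

    def test_city(self):
        self.assertIsNotNone(City.objects.get(name='brisbane'))
        # With eager mode on, .delay() runs the task synchronously in this
        # process, on the same connection, so it can see the fixture data.
        myTask.delay(city_name='brisbane').get()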
I am running Selenium unit tests after my Hudson build and want to use them to monitor my website's functionality.
When the build succeeds (which should be always, since it only contains the unit tests), Hudson will not send emails, even when some tests fail.
With email-ext, I could send emails when the build becomes unstable, but since browser unit tests are somewhat flaky, I do not want an email at the first failure; more something like 3 failures in a row, or 80% over the last x minutes/runs.
Best would be a configuration based on a ruleset keyed on the test name, or on something defined in the test itself that marks it as relevant.
What about using a script to set the mail content only for unstable/still-unstable builds?
There you can add some if-conditions that check the age of the test cases you care about.
<% if(build.testResultAction) {
       def rootUrl = hudson.model.Hudson.instance.rootUrl
       def jobName = build.parent.name
       def previousFailedTestCases = new HashSet()
       def currentFailedTestCase = new HashSet()
       startedFailing = []   // test cases that started failing in this build
       failing = []          // test cases that were already failing
       if(build.previousBuild?.testResultAction){
           build.previousBuild.testResultAction.failedTests.each {
               previousFailedTestCases << it.simpleName + "." + it.safeName
           }
       }
       testResult.failedTests.each{ tr ->
           def packageName = tr.packageName
           def className = tr.simpleName
           def testName = tr.safeName
           def displayName = className + "." + testName
           currentFailedTestCase << displayName
           def url = "$HUDSON_URL/job/$PROJECT_NAME/$BUILD_NUMBER/testReport/$packageName/$className/$testName"
           if(tr.age == 1){
               startedFailing << [displayName:displayName, url:url, age:1]
           } else{
               failing << [displayName:displayName, url:url, age:tr.age]
           }
       }
       startedPassing = previousFailedTestCases - currentFailedTestCase
       startedFailing = startedFailing.sort { it.displayName }
       failing = failing.sort { it.displayName }
       startedPassing = startedPassing.sort()
} %>
Source link: http://techkriti.wordpress.com/2008/08/30/using-groovy-with-hudson-to-send-rich-text-email/
When the build succeeds (which should be always, since it only contains the unit tests), Hudson will not send emails, even when some tests fail.
I don't know if this is something you want to fix, but if you use the argument
-Dmaven.test.failure.ignore=false
Then Hudson will fail your build if a test fails.
With email-ext, I could send emails when the build becomes unstable, but since browser unit tests are somewhat flaky, I do not want an email at the first failure; more something like 3 failures in a row, or 80% over the last x minutes/runs.
Your unit tests run for minutes? Is this more of a performance test than a unit test? If it's less a unit test and more a performance/load test, we've used JMeter (Hudson has a plugin, as does Maven) to great effect; it allows us to set percentage thresholds for when to mark the build as unstable or failed.
It sounds like you need two jobs in Hudson: one for the unit tests and one for Selenium.
You want the first job to build and run the unit tests and have Hudson report on them.
In the configuration under "Post-build Actions" you can add a "project to build" and specify the job that builds and runs Selenium and reports on those results.
This way you can make the email thresholds for the unit tests far stricter than those for your Selenium results.