I am running Selenium unit tests after my Hudson build and want to use them to monitor my website's functionality.
When the build succeeds (which it always should, since it only runs the unit tests), Hudson will not send emails, even when some tests fail.
With email-ext I could send emails when the build becomes unstable, but since browser unit tests are somewhat flaky, I do not want an email at the first failure; I want something more like 3 failures in a row, or 80% of the runs in the last x minutes.
Ideally the configuration would be rule-based, keyed on the test name or on something defined in the test that marks it as relevant.
What about using a script to set the mail content only for unstable/still-unstable builds? There you can add if conditions that check the age of the specific test cases you care about.
<% if (build.testResultAction) {
       def rootUrl = hudson.model.Hudson.instance.rootUrl
       def jobName = build.parent.name

       // Failed test cases of the previous build, used to work out which tests started passing again.
       def previousFailedTestCases = new HashSet()
       def currentFailedTestCase = new HashSet()

       // Buckets for the current build's failures.
       startedFailing = []
       failing = []

       if (build.previousBuild?.testResultAction) {
           build.previousBuild.testResultAction.failedTests.each {
               previousFailedTestCases << it.simpleName + "." + it.safeName
           }
       }

       // Walk the current failures and bucket them by age (tr.age == 1 means the test started failing in this build).
       testResult.failedTests.each { tr ->
           def packageName = tr.packageName
           def className = tr.simpleName
           def testName = tr.safeName
           def displayName = className + "." + testName
           currentFailedTestCase << displayName
           def url = "$HUDSON_URL/job/$PROJECT_NAME/$BUILD_NUMBER/testReport/$packageName/$className/$testName"
           if (tr.age == 1) {
               startedFailing << [displayName: displayName, url: url, age: 1]
           } else {
               failing << [displayName: displayName, url: url, age: tr.age]
           }
       }

       startedPassing = previousFailedTestCases - currentFailedTestCase
       startedFailing = startedFailing.sort { it.displayName }
       failing = failing.sort { it.displayName }
       startedPassing = startedPassing.sort()
   } %>
Source link : http://techkriti.wordpress.com/2008/08/30/using-groovy-with-hudson-to-send-rich-text-email/
When the build succeeds (which it always should, since it only runs the unit tests), Hudson will not send emails, even when some tests fail.
I don't know if this is something you want to fix, but if you use the argument
-Dmaven.test.failure.ignore=false
then Hudson will fail your build if a test fails.
With email-ext I could send emails when the build becomes unstable, but since browser unit tests are somewhat flaky, I do not want an email at the first failure; I want something more like 3 failures in a row, or 80% of the runs in the last x minutes.
Your unit tests take minutes to run? Is this more a performance test than a unit test? If it's less a unit test and more a performance/load test, we've used JMeter (Hudson has a plugin, as does Maven) to great effect; it lets us set percentage thresholds for marking the build as unstable or failed.
It sounds like you need two jobs in Hudson: one for the unit tests and one for Selenium.
You want the first job to build and run the unit tests and have Hudson report on them.
In the configuration, under "post build actions", you can add a "project to build" and specify the job that builds and runs Selenium and reports on those results.
This way you can make the email thresholds for the unit tests far stricter than those for your Selenium results.
I need a unique ID within my Django code. I wrote a simple model like this:
from django.db import models

class UniqueIDGenerator(models.Model):
    nextID = models.PositiveIntegerField(blank=False)

    @classmethod
    def getNextID(self):
        # The row with id=1 holds the counter: bump it if it exists, otherwise seed it.
        if self.objects.filter(id=1).exists():
            idValue = self.objects.get(id=1).nextID
            idValue += 1
            self.objects.filter(id=1).update(nextID=idValue)
            return idValue
        tempObj = self(nextID=1)
        tempObj.save()
        return tempObj.nextID
Then I wrote a unit test like this:
class ModelWorking(TestCase):
    def setUp(self):
        return None

    def test_IDGenerator(self):
        returnValue = UniqueIDGenerator.getNextID()
        self.assertEqual(returnValue, 1)
        returnValue = UniqueIDGenerator.getNextID()
        self.assertEqual(returnValue, 2)
        return None
When I run this test by itself, it runs fine. No issues.
When I run this test as part of a suite that includes a bunch of other unit tests (which also call getNextID()), this test fails: getNextID() always returns 1. Why would that be happening?
I figured it out.
Django runs each test in a transaction to provide isolation (see the Django testing docs).
Since my other tests call getNextID(), the row gets rolled back once the first test that makes such a call completes. Subsequent tests never find a row with id=1, which is why all later calls return 1.
Even though I don't think I would face that situation in production, I went ahead and changed my code to use .first() instead of filtering on id=1, like this:
@classmethod
def getNextID(self):
    firstRow = self.objects.first()
    if firstRow:
That way I believe it would better handle any future scenario when the database table might be emptied.
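For reference, here is a minimal sketch of what the complete .first()-based method might look like. This completion is mine, not part of the original answer; it simply mirrors the increment/seed logic of the id=1 version above.

from django.db import models

class UniqueIDGenerator(models.Model):
    nextID = models.PositiveIntegerField(blank=False)

    @classmethod
    def getNextID(self):
        # Grab whatever row exists, regardless of its primary key.
        firstRow = self.objects.first()
        if firstRow:
            firstRow.nextID += 1
            firstRow.save()
            return firstRow.nextID
        # Table is empty (e.g. a fresh test database): seed the counter.
        firstRow = self(nextID=1)
        firstRow.save()
        return firstRow.nextID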
Loading spacy models slows down running my unit tests. Is there a way to mock spacy models or Doc objects to speed up unit tests?
An example of a currently slow test:
import spacy

nlp = spacy.load("en_core_web_sm")

def test_entities():
    text = u"Google is a company."
    doc = nlp(text)
    assert doc.ents[0].text == u"Google"
Based on the docs, my approach is to construct the Vocab and Doc manually and set the entities as tuples:
from spacy.vocab import Vocab
from spacy.tokens import Doc

def test():
    alphanum_words = u"Google Facebook are companies".split(" ")
    labels = [u"ORG"]
    words = alphanum_words + [u"."]
    spaces = len(words) * [True]
    spaces[-1] = False
    spaces[-2] = False
    vocab = Vocab(strings=(alphanum_words + labels))
    doc = Doc(vocab, words=words, spaces=spaces)

    def get_hash(text):
        return vocab.strings[text]

    entity_tuples = tuple([(get_hash(labels[0]), 0, 1)])
    doc.ents = entity_tuples
    assert doc.ents[0].text == u"Google"
Is there a cleaner, more Pythonic solution for mocking spaCy objects in unit tests for entities?
This is a great question actually! I'd say your instinct is definitely right: If all you need is a Doc object in a given state and with given annotations, always create it manually wherever possible. And unless you're explicitly testing a statistical model, avoid loading it in your unit tests. It makes the tests slow, and it introduces too much unnecessary variance. This is also very much in line with the philosophy of unit testing: you want to be writing independent tests for one thing at a time (not one thing plus a bunch of third-party library code plus a statistical model).
Some general tips and ideas:
If possible, always construct a Doc manually. Avoid loading models or Language subclasses.
Unless your application or test specifically needs the doc.text, you do not have to set the spaces. In fact, I leave this out in about 80% of the tests I write, because it really only becomes relevant when you're putting the tokens back together.
If you need to create a lot of Doc objects in your test suite, you could consider using a utility function, similar to the get_doc helper we use in the spaCy test suite. (That function also shows you how the individual annotations are set manually, in case you need it.)
Use (session-scoped) fixtures for the shared objects, like the Vocab. Depending on what you're testing, you might want to explicitly use the English vocab. In the spaCy test suite, we do this by setting up an en_vocab fixture in the conftest.py (a minimal sketch of such a fixture follows the next example).
Instead of setting the doc.ents to a list of tuples, you can also make it a list of Span objects. This looks a bit more straightforward, is easier to read, and in spaCy v2.1+, you can also pass a string as a label:
from spacy.tokens import Doc, Span

def test_entities(en_vocab):
    doc = Doc(en_vocab, words=["Hello", "world"])
    doc.ents = [Span(doc, 0, 1, label="ORG")]
    assert doc.ents[0].text == "Hello"
If you do need to test a model (e.g. in the test suite that makes sure that your custom models load and run as expected) or a language class like English, put them in a session-scoped fixture. This means that they'll only be loaded once per session instead of once per test. Language classes are lazy-loaded and may also take some time to load, depending on the data they contain. So you only want to do this once.
# Note: You probably don't have to do any of this, unless you're testing your
# own custom models or language classes.
import pytest
import spacy

@pytest.fixture(scope="session")
def en_core_web_sm():
    return spacy.load("en_core_web_sm")

@pytest.fixture(scope="session")
def en_lang_class():
    lang_cls = spacy.util.get_lang_class("en")
    return lang_cls()

def test(en_lang_class):
    doc = en_lang_class("Hello world")
Currently I am investigating using graphene to build my Web server API. I have been using Django-Rest-Framework for quite a while and want to try something different.
I have figured out how to wire it up with my existing project, and I can test queries from the GraphiQL UI by typing something like:
{
  industry(id: 10) {
    name
    description
  }
}
Now I want to have the new API covered by unit/integration tests, and here the problem starts.
All the documentation and posts I've checked on testing query execution in graphene do something like:
result = schema.execute("{industry(id:10){name, description}}")
assertEqual(result, {"data": {"industry": {"name": "Technology", "description": "blab"}}})
My point is that the query inside execute() is just a big chunk of text, and I don't know how I can maintain it in the future. I, or another developer, will have to read that text, figure out what it means, and update it if needed.
Is that how this is supposed to be? How do you write unit tests for graphene?
I've been writing tests that do have a big block of text for the query, but I've made it easy to paste in that big block of text from GraphiQL. And I've been using RequestFactory to allow me to send a user along with the query.
from django.test import RequestFactory, TestCase
from graphene.test import Client

def execute_test_client_api_query(api_query, user=None, variable_values=None, **kwargs):
    """
    Returns the results of executing a graphQL query using the graphene test client.
    This is a helper method for our tests.
    """
    request_factory = RequestFactory()
    context_value = request_factory.get('/api/')  # or use reverse() on your API endpoint
    context_value.user = user
    client = Client(schema)  # Note: you need to import your schema
    executed = client.execute(api_query, context_value=context_value,
                              variable_values=variable_values, **kwargs)
    return executed
class APITest(TestCase):
    def test_accounts_queries(self):
        # This is the test method.
        # Let's assume that there's a user object "my_test_user" that was already set up.
        query = '''
{
  user {
    id
    firstName
  }
}
'''
        executed = execute_test_client_api_query(query, my_test_user)
        data = executed.get('data')
        self.assertEqual(data['user']['firstName'], my_test_user.first_name)
        # ...more tests etc. etc.
Everything between the set of '''s ({ user { id firstName } }) is just pasted in from GraphiQL, which makes it easy to update as needed. If I make a change that causes a test to fail, I can paste the query from my code into GraphiQL, fix the query there, and paste the corrected query back into my code. There is purposefully no extra indentation on this pasted-in query, to facilitate this repeated pasting.
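Since the helper already accepts variable_values, parameterized queries can be tested the same way. A rough sketch, assuming the schema exposes a user(id:) field and reusing my_test_user from above (neither detail is from the original answer):

query_with_vars = '''
query GetUser($id: ID!) {
  user(id: $id) {
    id
    firstName
  }
}
'''
executed = execute_test_client_api_query(
    query_with_vars,
    my_test_user,
    variable_values={"id": str(my_test_user.id)},
)
assert executed.get('data')['user']['firstName'] == my_test_user.first_name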
I have two different tests, and both are failing when run with other tests. I'm going to display one of them here. This test is for testing that synonyms are working. I've got the following synonyms in my synonym.txt file:
knife, machete
bayonet, dagger, sword
The unit test looks like this:
def test_synonyms(self):
    """
    Test that synonyms are working.
    """
    user = UserFactory()
    SubscriberFactory.create(user=user)
    descriptions = [
        'bayonet',
        'dagger',
        'sword',
        'knife',
        'machete',
    ]
    for desc in descriptions:
        ListingFactory.create(user=user,
                              description="Great {0} for all of your undertakings".format(desc))
    call_command('update_index', settings.LISTING_INDEX, using=[settings.LISTING_INDEX])
    self.sqs = SearchQuerySet().using(settings.LISTING_INDEX)
    self.assertEqual(self.sqs.count(), 5)

    # 3 of the 5 are in one group, 2 in the other
    self.assertEqual(self.sqs.auto_query('bayonet').count(), 3)
    self.assertEqual(self.sqs.auto_query('dagger').count(), 3)
    self.assertEqual(self.sqs.auto_query('sword').count(), 3)

    # 2 of the 5 in this group
    self.assertEqual(self.sqs.auto_query('knife').count(), 2)
    self.assertEqual(self.sqs.auto_query('machete').count(), 2)
The problem is that when I run the test in isolation with the command ./manage.py test AnalyzersTestCase.test_synonyms, it works fine. But if I run it along with the other tests, it fails, returning 1 result where it should return 3. If I run a raw Elasticsearch query at that point, Elasticsearch also returns 1 result. So it must be something in the setup of the index... but I'm deleting the index in the setUp() method, so I don't see how it can be in a different state when run in isolation from when it's run alongside other tests.
Any help you can give would be great.
Figured it out...
Haystack's connections singleton needed to be cleared between tests, so:
import haystack

for key, opts in haystack.connections.connections_info.items():
    haystack.connections.reload(key)
call_command('clear_index', interactive=False, verbosity=0)
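A sketch of where this reset could live, for example a shared setUp() on the search test case; the class name and the call_command import are my additions, not part of the original answer:

import haystack
from django.core.management import call_command
from django.test import TestCase

class SearchTestCase(TestCase):
    def setUp(self):
        super(SearchTestCase, self).setUp()
        # Reload every configured Haystack connection so each test starts with
        # a fresh backend, then wipe the index itself.
        for key, opts in haystack.connections.connections_info.items():
            haystack.connections.reload(key)
        call_command('clear_index', interactive=False, verbosity=0)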
I'm writing unit tests for a celery task using django-nose. It's fairly typical; a blank test database (REUSE_DB=0) that is pre-populated via a fixture at test time.
The problem I have is that even though the TestCase is loading the fixture and I can access the objects from the test method, the same query fails when executed within an async celery task.
I've checked that settings.DATABASES["default"]["NAME"] is the same both in the test method and in the task under test. I've also verified that the task under test behaves correctly when invoked as a regular method call.
And that's about where I'm out of ideas.
Here's a sample:
class MyTest(TestCase):
    fixtures = ['test_data.json']

    def setUp(self):
        settings.CELERY_ALWAYS_EAGER = True  # seems to be required; if not, I get socket errors for Rabbit
        settings.CELERY_EAGER_PROPAGATES_EXCEPTIONS = True  # exposes errors in the code under test

    def test_city(self):
        self.assertIsNotNone(City.objects.get(name='brisbane'))
        myTask.delay(city_name='brisbane').get()
        # The following works fine: myTask('brisbane')


# The task under test (defined in a separate tasks module):
from celery.task import task

@task()
def myTask(city_name):
    c = City.objects.count()  # gives 0
    my_city = City.objects.get(name=city_name)  # raises DoesNotExist exception
    return
This sounds a lot like a bug in django-celery 2.5, which was fixed in 2.5.2: https://github.com/celery/django-celery/pull/116
The brief description of the bug is that the django-celery loader was closing the DB connection prior to executing the task, even for eager tasks. Since the tests run inside a transaction, the new connection opened for the task execution can't see the data created in setUp.