Django test database not auto-flushing

I have a bunch of unit test files, all of which consist of django.test.TestCase classes.
I wrote myself a little shell script to uncomment/comment test-file imports in my __init__.py file, so I can run tests from certain test files based on the command-line arguments I give it. I am also able to run all the tests from all the test files in one go (for regression-testing purposes).
I have this one test file that has some JSON fixtures and the first test checks that a certain model/table has 3 records in it (defined by the JSON fixture).
So here is the problem: when I run this test file on its own, its tests pass with flying colours, but when I run it together with all the other tests, that particular test case I mentioned fails.
When I run all the tests, the database says there are 6 records in the table/model, but there should only be 3 (from the fixture), as when the test file is run by itself.
I also tried running that test file with a few other test files (not all of them) and it passes. So the only time it fails is when all the test files are run.
To me this seems like a bug in Django or PostgreSQL (the DB I am using), because aren't Django TestCases supposed to auto-flush/reset the database between each test method, let alone each test class?

This is likely due to the difference in how cleanup is done between TestCase and TransactionTestCase in Django. Before Django 1.5, TransactionTestCases needed to be run after TestCases (and Django's test runner ordered them that way for you). This should be fixed in 1.5, though, so try running your tests again there.
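The difference between the two cleanup strategies can be sketched with nothing but the standard library's sqlite3 module (the table name and rows below are made up for illustration; Django does all of this for you per test): TestCase wraps each test in a transaction and rolls it back, while TransactionTestCase lets the test commit and flushes the tables afterwards.

```python
# Sketch of the two cleanup strategies, stdlib sqlite3 only.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txns
conn.execute("CREATE TABLE records (id INTEGER)")
conn.execute("INSERT INTO records VALUES (1)")            # fixture row

# TestCase-style: run the test inside a transaction, then roll back.
conn.execute("BEGIN")
conn.execute("INSERT INTO records VALUES (2)")            # the test writes
conn.execute("ROLLBACK")
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
assert count == 1                                         # fixture intact

# TransactionTestCase-style: the test commits, then tables are flushed.
conn.execute("INSERT INTO records VALUES (2)")
conn.execute("DELETE FROM records")                       # the flush step
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
assert count == 0                                         # everything gone
```

If a TransactionTestCase ran before a TestCase without that flush step (the pre-1.5 ordering problem), committed rows would leak into the next test, which is exactly the "6 records instead of 3" symptom above.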

Related

Run a single groovy Junit test from command line

On a few simple Groovy classes I included the JUnit test in with the class -- if you annotate the test methods with @Test (from JUnit) and run the main-less class with "groovy MyClass.groovy", it automatically runs the unit tests.
I like this because it really requires zero overhead (no additional files or junk code, just one annotation).
The question is: can I tell it to run a single test method? I tried "groovy MyClass.groovy myMethod", but I didn't really expect that to work. I also tried -Dtest=myMethod, which I also didn't have much hope for.
Is there a trick someone knows? I suppose, since it is just a .groovy file, I could comment out the tests I don't want to run, or add a main that calls the various tests, but I'm just wondering if there is a way to leverage this automatic JUnit-test running already built into Groovy.

What is the equivalent of autotest/guard for django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, what I want is something that will:
run tests based on the files I have changed, and not the entire suite
know whether to run the test command based on whether a test run is currently taking place
work with existing tests written with unittest
work with something like factory_boy, to let me use factories instead of fixtures
I've used nose before, and pytest, and I'm comfortable using both -- but I haven't used many of pytest's extensive set of libraries.
What are my options for this?

Using the WebStorm IDE, is it possible to run only one unit test from a unit test suite?

When using WebStorm as a test runner, every unit test is run. Is there a way to specify running only one test? Even running only one test file would be better than the current situation of running all of them at once. Is there a way to do this?
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe (i.e. iit or ddescribe) and the runner will run only that test/suite. If you prefix it with x (xit, xdescribe), it will exclude it.
There is a plugin called ddescribe that gives you a gui for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps you a lot. If the slow parts of your process automatically get executed when your other modules get loaded with require, this won't solve that problem. You also need to go into the configuration options to change the pattern every time you want to run tests matching a different one, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
You can run the tests within a scope, when you have a Mocha run configuration, by using .only on either the describe or the it clauses.
I had some problems getting it to work all the time. When it went crazy and kept running all my tests, ignoring the .only or .skip, I added to the extra Mocha options the path to one of the files containing unit tests (just like in the example for the Node setup), and suddenly the .only feature started working again, regardless of which file the tests were in.

Django unit tests failing when run with other test cases

I'm getting inconsistent behavior with Django unit tests. On my development machine, using SQLite, the tests pass if I run my two apps' tests separately, but if I run manage.py test to test everything at once, I consistently start getting failures on two tests.
On my staging server, which uses Postgres, I have a particular test that works when tested individually (e.g. manage.py test MyApp.tests.MyTestCase.testSomething), but fails when running the entire test case (e.g. manage.py test MyApp.tests.MyTestCase).
Other related Stack Overflow questions seem to have two solutions:
Use Django's TestCase instead of the Python equivalent
Use TransactionTestCase to make sure the database is cleaned up properly after every test
I've tried both to no avail. Out of frustration, I also tried using django-nose instead, but I was seeing the same errors. I'm on Django 1.6.
I just spent all day debugging a similar problem. In my case, the issue was as follows.
In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...

def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when it was run with other tests in the suite. The reason it fails is that when it runs as part of the suite, other views are called beforehand, causing the views.py file to be loaded and send_mail to be imported before I get the chance to patch it. So when send_mail gets called in my view, it is the actual send_mail that gets called, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up getting imported when views.py is loaded. This situation is described in the mock documentation (the "where to patch" section), which I had read a few times before, but now understand quite well after learning the hard way.
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
...
This took me a long time to debug, so I thought I would share my solution. I hope it works for you too. You may not be using mocks, in which case this probably won't help you, but I hope it will help someone.
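The same import-binding behaviour can be reproduced with nothing but the standard library (the fake "views" module and its function names below are invented for illustration): the name you patch must be the one bound in the using module, not the one in the module where the function was defined.

```python
# Sketch of "patch where it's used", stdlib only. The fake "views"
# module simulates a `from somewhere import send_mail` done at import time.
import sys
import types
from unittest import mock

sent = []
def send_mail(subject):            # stand-in for the real send_mail
    sent.append(subject)

views = types.ModuleType("views")  # stand-in for myapp.views
views.send_mail = send_mail        # the name was imported by value
def signup():
    views.send_mail("welcome")     # looks the name up on the module
views.signup = signup
sys.modules["views"] = views       # make it importable for patch()

# Patching the origin name would not touch views.send_mail; patching
# the name bound in the *using* module intercepts the call.
with mock.patch("views.send_mail") as mocked_send_mail:
    views.signup()

assert mocked_send_mail.called     # the mock saw the call
assert sent == []                  # the real function never ran
```

This is why patching 'myapp.views.send_mail' works regardless of import order, while patching 'django.core.mail.send_mail' only works if the patch happens before views.py is first imported.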
Besides using TestCase for all your tests, you need to make sure you start any patchers in your setup methods and stop them again in teardown:

def setUp(self):
    self.patcher = patch('my.app.module')
    self.mock_module = self.patcher.start()

def tearDown(self):
    self.patcher.stop()
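A runnable stdlib version of the same pattern (the patch target os.getcwd is chosen arbitrarily for illustration); addCleanup is an alternative to tearDown that guarantees the patcher is stopped even if the test errors out:

```python
import os
import unittest
from unittest import mock

class PatcherExample(unittest.TestCase):
    def setUp(self):
        # start() activates the patch and returns the mock;
        # addCleanup guarantees stop() runs after every test.
        patcher = mock.patch("os.getcwd", return_value="/fake")
        self.mock_getcwd = patcher.start()
        self.addCleanup(patcher.stop)

    def test_patched(self):
        self.assertEqual(os.getcwd(), "/fake")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PatcherExample))
assert result.wasSuccessful()
```

A patcher that is created but never started will raise on stop(), and one that is started but never stopped leaks the mock into every later test, which is one way tests pass alone but fail in a full suite run.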
I had the same thing happen today with a series of tests. I had 23 regular django.test.TestCase tests and then one django.contrib.staticfiles.testing.StaticLiveServerTestCase test. It was that final test that would always fail when run with the rest of them, but pass on its own.
Solution
For the 23 regular TestCase tests I had actually implemented a subclass of the regular TestCase, so that I could provide some common functionality, specific to my application, to the tests. In its tearDown method I had failed to call the super method. Once I called the super method in tearDown, it worked. So the lesson here is to make sure you are cleaning up properly in your teardown methods.

How do I unit test UI via console

I know this has been asked many times, but I want to be specific.
I used to use Selenium. After googling, it looks like I can run it via the console and it gives me a bunch of text output, but I'd rather not parse that; I want a pass/fail type of thing.
Every once in a while I like to run all of my unit tests on the UI, not the code. I don't want to just submit a form with certain values; I want to see, if I click this img, does the dropbox beside it pop out, and if I select a name, will it be in the form, which I'll then submit after running a few other things.
The reason I'd like this is that certain features MUST ALWAYS be working, so I'm OK with adjusting the unit tests every time I modify the UI for those features. For the rest, the unit tests in code which check business logic will be enough, as those UIs are always changing or not very important.
It would be nice if something could kick off Firefox and Chrome (or WebKit), but that's not required.
Like I said, I'd like pass/fail -- some kind of easy text to parse. A complex test is OK, as I know regex, but I don't want to have to figure out where one unit test ends and another starts.
If you're using Java/Maven, I wrote a Maven plugin for Selenium that should do what you want:
https://github.com/willwarren/selenium-maven-plugin. You record the tests in Firefox with Selenium IDE, then save the files to a directory in your Maven project.
If you're not using maven you can use the project that I built upon:
http://code.google.com/p/selenium4j
From the Readme:
We use Selenium IDE to record our tests. We then saved the test cases into our project in the following fashion (note: currently the code from selenium4j only supports one level, so don't nest your folders):
./src/test/selenium
|-signin
| |-LoginGoodPassword.html
| |-LoginBadPassword.html
| |-selenium4j.properties
We didn't save the test suites, as Maven takes care of finding your tests.
The selenium4j.properties file contains setup information:
# the web site being tested
webSite=http://yourwebapp:8080
# A comma separated values of the WebDrivers being used. Accepted drivers:
# HtmlUnitDriver, FirefoxDriver, ChromeDriver, InternetExplorerDriver
driver=FirefoxDriver
# How many times we want to iterate and test
loopCount=1
The Selenium Maven plugin, which is bound to the process-test-resources phase, then converts these HTML files into JUnit 4 tests in your src/test/java folder.
So you end up with:
./src/test/java
|-signin
| |-firefox
| | |-LoginGoodPasswordTest.java
| | |-LoginBadPasswordTest.java