In Django (on Google App Engine), should I call main.py when running Unit Tests?

I have a Django application on Google App Engine, and I would like to start writing unit tests. But I am not sure how to set up my tests.
When I run my tests, I get the following error:
EnvironmentError: Environment variable DJANGO_SETTINGS_MODULE is undefined.
ERROR: Module: tests could not be imported.
This seems pretty straightforward - my Django settings have not been initialized. Setup of the Django environment on Google App Engine happens in main.py (specified in app.yaml), but this obviously does not get called for unit tests. Should my unit tests start by calling main() in main.py? I am not sure.

You should probably just export the environment variable in the main entry point to your tests. Depending on your setup, you might be able to do that by importing your main.py file, but it's probably just as easy to add the os.environ['DJANGO_SETTINGS_MODULE'] line to the file you use to run your tests, for example:
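A minimal sketch of what that entry point might look like (the module name 'settings' is an assumption; use whatever your main.py points Django at):

    # run_tests.py -- hypothetical entry point for the test run
    import os

    # This must be set before Django's settings are accessed
    os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

    import unittest

    if __name__ == '__main__':
        # Load and run the 'tests' module mentioned in the error message
        unittest.main(module='tests')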
This might be a little hard depending on how you've got your tests set up. Are you using a test runner like nose or Django's test suite tools?

My ultimate solution to this issue was to add the os.environ['DJANGO_SETTINGS_MODULE'] line IMMEDIATELY before using the only real Django function I use, template.render_to_string().
I kept having issues with it getting unset when I set it at the top of a given .py file, so I realized that just setting it each time would ensure it is always right.
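For illustration, the pattern ends up looking roughly like this (assuming render_to_string comes from django.template.loader; the settings module name is also an assumption):

    import os

    def render(template_name, context):
        # Set the variable immediately before rendering so it can never be unset
        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        from django.template.loader import render_to_string
        return render_to_string(template_name, context)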
What a frustrating problem. I really wish there was a simple setting somewhere (perhaps in app.yaml) that would pick the Django version, and set this variable right.
Oh well.

Related

'meteor test' not picking up *.test[s].js

I'm on Meteor 1.7.0.3 and want to write unittests.
I have the standard tests/main.js with a few tests, which run when I execute meteor test --driver-package meteortesting:mocha --once from the command line.
However, code in a new file named my.tests.js is not picked up, no matter where I put it.
The Meteor testing guide explicitly states
Does eagerly load any file in our application (including in imports/ folders) that look like .test[s]., or .spec[s].
Is there some configuration that I have missed?
By default Meteor sets
"testModule": "tests/main.js"
in package.json. This defines the entry point for meteor test. This is why the tests in it get run, contrary to what the testing guide indicates.
By removing this configuration, Meteor starts to behave as documented in the testing guide.

What is the equivalent of autotest/guard for django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, what I want is a tool that will:
run tests based on the files I have changed, and not the whole suite
know whether to run the test command based on whether a test run is already taking place
work with existing tests written with unittest
work with something like factory_boy so I can use factories instead of fixtures
I've used nose and pytest before, and I'm comfortable using both - but I haven't used many of pytest's extensive set of libraries.
What are my options for this?

Using the WebStorm IDE, is it possible to run only one unit test from a unit test suite?

When using WebStorm as a test runner, every unit test is run. Is there a way to specify running only one test? Even running only one test file would be better than the current situation of running all of them at once. Is there a way to do this?
I'm using Mocha.
Not currently possible; please vote for WEB-10067.
You can double up the i on it or the d on describe (i.e. iit / ddescribe) and the runner will run only that test/suite. If you prefix them with x, they will be excluded.
There is a plugin called ddescribe that gives you a GUI for this.
You can use the --grep <pattern> command-line option in the Extra Mocha options box on the Mocha "Run/Debug Configurations" screen. For example, my Extra Mocha options line says:
--timeout 5000 --grep findRow
All of your test *.js files, and the files they require, still get loaded, but the only tests that get run are the ones that match that pattern. So if the parts you don't want to execute are tests, this helps a lot. If the slow parts of your process automatically get executed when your other modules get loaded with require, this won't solve that problem. You also need to go into the configuration options to change the pattern every time you want to run tests matching a different one, but this is quick enough that it definitely saves me time versus letting all my passing tests run every time I want to debug one failing test.
When you have a Mocha run configuration set up, you can run the tests within a scope by using .only on either the describe or the it clauses.
I had some problems getting it to work consistently: when it went crazy and kept running all my tests, ignoring the .only or .skip, I added the path to one of the files containing unit tests to the extra Mocha options (just like in the example for the Node setup), and suddenly the .only feature started to work again regardless of which file the tests were in.

Django unit tests failing when run with other test cases

I'm getting inconsistent behavior with Django unit tests. On my development machine using sqlite, if I run tests on my two apps separately the tests pass, but if I run manage.py test to test everything at once, I start getting unit test failures consistently on two tests.
On my staging server which uses Postgres, I have a particular test that works when testing it individually (e.g. manage.py test MyApp.tests.MyTestCase.testSomething), but fails when running the entire test case (e.g. manage.py test MyApp.tests.TestCase).
Other related Stack Overflow questions seem to have two solutions:
Use Django's TestCase instead of the Python equivalent
Use TransactionTestCase to make sure the database is cleaned up properly after every test.
I've tried both to no avail. Out of frustration, I also tried using django-nose instead, but I was seeing the same errors. I'm on Django 1.6.
I just spent all day debugging a similar problem. In my case, the issue was as follows.
In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...

    def test_stuff(self):
        ...
        with patch('django.core.mail.send_mail') as mocked_send_mail:
            ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when run with other tests in the suite. The reason this fails is that when it runs as part of the suite other views are called beforehand, causing the views.py file to be loaded, causing send_mail to be imported before I get the chance to patch it. So when send_mail gets called in my view, it is the actual send_mail that gets called, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up getting imported when views.py is loaded. This situation is described in the mock documentation, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
    ...
This took me a long time to debug, so I thought I would share my solution. I hope it works for you too. You may not be using mocks, in which case this probably won't help you, but I hope it will help someone.
Besides using TestCase for all your tests, you need to make sure you tear down any patching that was done in your setup methods:
def setUp(self):
    self.patcher = patch('my.app.module')
    self.mock_module = self.patcher.start()  # patching only takes effect once started

def tearDown(self):
    self.patcher.stop()
I had the same thing happening today with a series of tests. I had 23 regular django.test.TestCase tests and then one django.contrib.staticfiles.testing.StaticLiveServerTestCase test. It was that final test that would always fail when run with the rest of them but pass on its own.
Solution
In the 23 regular TestCase tests I had actually implemented a subclass of the regular TestCase so that I could provide some common, application-specific functionality to the tests. In the tearDown methods I had failed to call the super method. Once I called the super method in tearDown, it worked. So the lesson here is to make sure you are cleaning up properly in your teardown methods.
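For illustration, the fix amounts to something like this (the class name is made up):

    from django.test import TestCase

    class MyAppTestCase(TestCase):
        def tearDown(self):
            # ... app-specific cleanup ...
            # Let Django's TestCase do its own cleanup as well; forgetting this
            # call was what caused the later test to fail.
            super(MyAppTestCase, self).tearDown()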

Managing Django test isolation for installable apps

I maintain an installable Django app that includes a regular test suite.
Naturally enough when project authors run manage.py test for their site, the tests for both their own apps and also any third party installed apps such as mine will all run.
The problem that I'm seeing is that in several different cases, the user's particular settings.py will contain configurations that cause my app's tests to fail.
A couple of examples:
Some of the tests need to check for returned error messages. These error messages use the internationalization framework, so if the site language is not english then these tests fail.
Some of the tests need to check for particular template output. If the site is using customized templates (which the app supports) then the tests will end up using their customized templates in preference to the defaults, and again the tests will fail.
I want to try to figure out a sensible approach to isolating the environment that my tests get run with in order to avoid this.
My plan at the moment is to have all my TestCase classes extend a base TestCase, which overrides the settings, and any other environment setup I may need to take care of.
My questions are:
Is this the best approach to app-level test-environment isolation? Is there an alternative I've missed?
It looks like I can only override one setting at a time, when ideally I'd probably like a completely clean configuration. Is there a way to do this, and if not, which are the main settings I need to make sure are set in order to have a basic clean setup?
I believe I'm correct in saying that overriding some settings such as INSTALLED_APPS may not actually affect the environment in the expected way due to implementation details, and global state issues. Is this correct? Which settings do I need to be aware of, and what globally cached environment information may not be affected as expected?
What other environment state other than settings might I need to ensure is clean?
More generally, I'd also be interested in any context regarding how much of an issue this is for other third party installable apps, or if there are any plans to further address any of this in core. I've seen conversation on IRC regarding similar issues with eg. some of Django's contrib apps running under unexpected settings configurations. I seem to also remember running into similar cases with both third party apps and django contrib apps a few times, so it feels like I'm not alone in facing these kind of problems, but it's not clear if there's a consensus on if this is something that needs more work or if the status quo is good enough.
Note that:
These are integration-level tests, so I want to address these environment issues at the global level.
I need to support Django 1.3, but can put in some compatibility wrappers so long as I'm not re-implementing massive amounts of Django code.
Obviously enough, since this is an installable app, I can't just specify my own DJANGO_SETTINGS_MODULE to be used for the tests.
A nice approach to isolation I've seen used by Jezdez is to have a submodule called my_app.tests which contains all the test code (example). This means that those tests are NOT run by default when someone installs your app, so they don't get random phantom test failures, but if they want to check that they haven't inadvertently broken something then it's as simple as adding myapp.tests to INSTALLED_APPS to get it to run.
Within the tests, you can do your best to ensure that the correct environment exists using override_settings (if this isn't in 1.4 then there's not that much code to it). Personally my feeling is that with integration type tests perhaps it doesn't matter if they fail. If you like, you can include a clean settings file (compressor.test_settings), which for a major project may be more appropriate.
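For example, a base test case along these lines (the particular settings values are only illustrative) pins the language and template configuration that the earlier examples depend on, regardless of the host project's settings:

    from django.test import TestCase
    from django.test.utils import override_settings

    @override_settings(
        LANGUAGE_CODE='en',   # error-message assertions assume English
        TEMPLATE_DIRS=(),     # ignore the project's customized template directories
    )
    class MyAppTestCase(TestCase):
        # The app's test cases subclass this instead of TestCase directly
        pass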
An alternative is to separate your tests out a bit - there are two separate bodies of tests for contrib.admin, those at django.contrib.admin.tests, and those at tests.regression_tests.contrib.admin (or some path like that). The ones that check public APIs and core functionality (should) reside in the first, and anything likely to get broken by someone else's (reasonable) configuration resides in the second.
IMHO, the whole practice of running external apps' tests is totally broken. It certainly shouldn't happen by default (and there are discussions to that effect) and it shouldn't even be a thing - if someone's external app test suite is broken by my monkey patching (or whatever), I don't actually care, and I definitely don't want it to break the build of my site. That said, the above approaches allow those who disagree to run them fairly easily. Jezdez probably has as many major pluggable apps as anyone else, and even if there are some subtle issues with his approach, at least there is consistency of behaviour.
Since you're releasing a reusable third-party application, I don't see any reason the developer using the application should be changing the code. If the code isn't changing, the developers shouldn't need to run your tests.
The best solution, IMO, is to have the tests sit outside of the installable package. When you install Django and run manage.py test, you don't run the Django test suite, because you trust that the version of Django you've installed is stable. This should be the same for developers using your third-party application.
If there are specific settings you want to ensure work with your library, just write test cases that use those settings values.
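Concretely, keeping the tests outside the installable package usually means shipping a small standalone runner that configures a minimal settings environment itself. A rough sketch (the app name and external tests package are hypothetical):

    # runtests.py -- lives next to, not inside, the installable package
    import sys

    from django.conf import settings

    settings.configure(
        DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3',
                               'NAME': ':memory:'}},
        INSTALLED_APPS=[
            'django.contrib.contenttypes',
            'django.contrib.auth',
            'myapp',  # the app under test
        ],
    )

    # On newer Django versions you would also call django.setup() here.

    from django.test.utils import get_runner

    if __name__ == '__main__':
        TestRunner = get_runner(settings)
        failures = TestRunner().run_tests(['tests'])  # the external tests package
        sys.exit(bool(failures))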
Here's an example reusable django application that has the tests sit outside of the installed package:
https://github.com/Yipit/django-roughage/tree/master
It's a popular way to develop Python modules, as seen in:
https://github.com/kennethreitz/requests
https://github.com/getsentry/sentry