Django test organization questions

In the interest of decoupled code, I've created several apps in my project that can exist without the others being there. Any app can be removed from the project without breaking things later down the line.
To do this, I've created some tests that make use of the @override_settings decorator in Django 1.4. However, I would also like to test the functionality of the apps and their interaction together.
So, I would like to have tests that do not make the apps depend on each other, but I would also like to have tests that test the project as a whole. Where is the normal place to store these? Are there any tricks to doing this?

I'm not aware of any established convention, but what I usually do is create an app called, for example, tests, and place the higher-level integration tests there.
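For illustration, here is a rough sketch of what such a test can look like. The blog app, its Post model and the /blog/ URL are made up for this example; the point is just that cross-app imports and project-wide assumptions are fine here, because this tests app only ever appears in the project's own INSTALLED_APPS:

```python
# tests/tests.py -- test module of the project-level "tests" app.
# The "blog" app, its Post model and the /blog/ URL are assumptions for this sketch.
from django.contrib.auth.models import User
from django.test import TestCase

from blog.models import Post


class BlogIntegrationTests(TestCase):
    def test_authors_post_shows_up_on_the_index_page(self):
        user = User.objects.create_user("alice", "alice@example.com", "secret")
        Post.objects.create(author=user, title="Hello", body="...")
        self.client.login(username="alice", password="secret")
        response = self.client.get("/blog/")
        self.assertContains(response, "Hello")
```

The per-app tests stay inside each app and keep using override_settings, so removing an app from the project just means removing its integration tests from this one place.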

Related

Can autotests reuse code of the application?

Unit tests are strongly connected to the existing code of the application, written by developers. But what about UI and API automated tests (integration tests)? Does anybody think that it's acceptable to re-use code of the application in a separate automation solution?
The answer would be no. UI tests follow the UI: go to that page, input that value in that textbox, press that button, and I should see this text. You don't need anything code-related for this. All of this should be done against some acceptance criteria, so you should already know what to expect without looking at any code.
For the API integration tests, you would call the endpoints with some payloads and then check the results. You don't need references to any code for this. The API should be documented and should explain very well what endpoints are available, what the payloads look like, and what you can expect to get back.
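For example (sketched in Python purely for brevity - the language is beside the point), an API integration test only needs the documented contract. The base URL, the /api/orders/ endpoint and the payload fields below are made-up assumptions:

```python
# A sketch of an API integration test that depends only on the documented HTTP contract.
# The base URL, the /api/orders/ endpoint and the payload fields are assumptions.
import requests

BASE_URL = "https://staging.example.com"


def test_creating_an_order_returns_201_and_an_id():
    payload = {"product_id": 42, "quantity": 2}
    response = requests.post(BASE_URL + "/api/orders/", json=payload, timeout=10)
    assert response.status_code == 201
    body = response.json()
    assert body["product_id"] == 42
    assert "id" in body
```

Nothing in there references the application's code; if the contract changes, the documentation (and this test) change with it.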
I am not sure why you'd think about reusing application code inside an automation project.
Ok, so after clarifications, you're talking about reusing models only, not actual code. This is not a bad idea; it can actually help, as long as these NuGet packages do not bring in any other dependencies.
Code reusability is a great concept, but it's very hard to get right in practice. Models typically come with annotations which require other packages that are, of course, not needed in an automation project. So if you can get NuGet packages without extra dependencies - literally data models only and nothing else - then that does work. Anything more than that will create issues, so I'd push back on it.

What is the equivalent of autotest/guard for django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django, so I can enjoy the same workflow?
Specifically, what I want is something that can:
run tests based on the files I have changed, and not the whole test suite
know whether to run the test command based on whether a test run is currently taking place
work with existing tests written with unittest
work with something like factory boy to let me use factories instead of fixtures
I've used nose before, and pytest, and I'm comfortable using both - but I haven't used many of pytest's extensive set of libraries.
What are my options for this?
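To make it concrete, here is roughly the behaviour I'm after, sketched with the third-party watchdog package. The one-directory-per-app layout it assumes is my own, and in practice I'd much rather use an existing tool than maintain something like this:

```python
# watch_tests.py -- a rough sketch only: re-run the changed file's app tests,
# and skip triggering while a run is already in progress.
# Assumes one directory per app at the project root; requires the "watchdog" package.
import subprocess
import sys
import threading
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

test_running = threading.Lock()


class RerunTestsHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.is_directory or not event.src_path.endswith(".py"):
            return
        # Don't start another run if one is currently taking place.
        if not test_running.acquire(False):
            return
        try:
            app = event.src_path.replace("./", "").split("/")[0]
            subprocess.call([sys.executable, "manage.py", "test", app])
        finally:
            test_running.release()


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(RerunTestsHandler(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```

Something that does this properly - and also plays nicely with plain unittest tests and factories instead of fixtures - is what I'm hoping already exists.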

Managing Django test isolation for installable apps

I maintain an installable Django app that includes a regular test suite.
Naturally enough, when project authors run manage.py test for their site, the tests for both their own apps and any third-party installed apps such as mine will all run.
The problem that I'm seeing is that in several different cases, the user's particular settings.py will contain configurations that cause my app's tests to fail.
A couple of examples:
Some of the tests need to check for returned error messages. These error messages use the internationalization framework, so if the site language is not English then these tests fail.
Some of the tests need to check for particular template output. If the site is using customized templates (which the app supports) then the tests will end up using their customized templates in preference to the defaults, and again the tests will fail.
I want to try to figure out a sensible approach to isolating the environment that my tests get run with in order to avoid this.
My plan at the moment is to have all my TestCase classes extend a base TestCase, which overrides the settings, and any other environment setup I may need to take care of.
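Roughly what I have in mind - the settings pinned here are just the ones from my examples above (language and templates), not a complete clean configuration:

```python
# myapp/tests/base.py -- sketch of the base class; the settings pinned here are only
# the ones from the examples above, not an exhaustive clean configuration.
import os

from django.test import TestCase
from django.test.utils import override_settings  # django.test.override_settings in newer versions

APP_TEMPLATE_DIR = os.path.join(
    os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "templates"
)


@override_settings(
    LANGUAGE_CODE="en",                 # error-message assertions expect English
    USE_I18N=False,
    TEMPLATE_DIRS=(APP_TEMPLATE_DIR,),  # ignore the project's customised templates
    TEMPLATE_LOADERS=("django.template.loaders.filesystem.Loader",),
)
class IsolatedAppTestCase(TestCase):
    """Base class for this app's tests; every test case in the app extends this."""
```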
My questions are:
Is this the best approach to app-level test-environment isolation? Is there an alternative I've missed?
It looks like I can only override one setting at a time, when ideally I'd probably like a completely clean configuration. Is there a way to do this, and if not, which are the main settings I need to make sure are set in order to have a basic clean setup?
I believe I'm correct in saying that overriding some settings such as INSTALLED_APPS may not actually affect the environment in the expected way due to implementation details, and global state issues. Is this correct? Which settings do I need to be aware of, and what globally cached environment information may not be affected as expected?
What other environment state other than settings might I need to ensure is clean?
More generally, I'd also be interested in any context regarding how much of an issue this is for other third-party installable apps, or whether there are any plans to further address any of this in core. I've seen conversation on IRC regarding similar issues with, e.g., some of Django's contrib apps running under unexpected settings configurations. I also seem to remember running into similar cases with both third-party apps and Django contrib apps a few times, so it feels like I'm not alone in facing these kinds of problems, but it's not clear whether there's a consensus that this needs more work or whether the status quo is good enough.
Note that:
These are integration-level tests, so I want to address these environment issues at the global level.
I need to support Django 1.3, but can put in some compatibility wrappers so long as I'm not re-implementing massive amounts of Django code.
Obviously enough, since this is an installable app, I can't just specify my own DJANGO_SETTINGS_MODULE to be used for the tests.
A nice approach to isolation I've seen used by Jezdez is to have a submodule called my_app.tests which contains all the test code (example). This means that those tests are NOT run by default when someone installs your app, so they don't get random phantom test failures, but if they want to check that they haven't inadvertently broken something then it's as simple as adding myapp.tests to INSTALLED_APPS to get it to run.
Within the tests, you can do your best to ensure that the correct environment exists using override_settings (if that isn't available in Django 1.3, there's not that much code to backport). Personally, my feeling is that with integration-type tests perhaps it doesn't matter if they fail. If you like, you can include a clean settings file (compressor.test_settings), which for a major project may be more appropriate.
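A clean settings module for this purpose can stay very small - something along these lines (a sketch; the app list and the test URLconf are assumptions that depend on what the app actually needs):

```python
# myapp/test_settings.py -- a minimal, clean configuration for running only this app's tests.
# The app list and the test URLconf are assumptions; add whatever the app genuinely depends on.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

INSTALLED_APPS = (
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "myapp",
    "myapp.tests",
)

ROOT_URLCONF = "myapp.tests.urls"  # a URLconf shipped alongside the tests
SECRET_KEY = "only-for-tests"
LANGUAGE_CODE = "en"
USE_I18N = False
```

Anyone who wants to run the suite then does so with something like django-admin.py test myapp --settings=myapp.test_settings, rather than under their own site settings.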
An alternative is that you separate your tests out a bit - there are two separate bodies of tests for contrib.admin: those at django.contrib.admin.tests, and those at tests.regression_tests.contrib.admin (or some path like that). The ones that check public APIs and core functionality (should) reside in the first, and anything likely to get broken by someone else's (reasonable) configuration resides in the second.
IMHO, the whole business of running external apps' tests is totally broken. It certainly shouldn't happen by default (and there are discussions to that effect), and it shouldn't even be a thing - if someone's external app test suite is broken by my monkey patching (or whatever), I don't actually care - and I definitely don't want it to break the build of my site. That said, the above approaches allow those who disagree to run them fairly easily. Jezdez probably has as many major pluggable apps as anyone else, and even if there are some subtle issues with his approach, at least there is consistency of behaviour.
Since you're releasing a reusable third-party application, I don't see any reason the developer using the application should be changing the code. If the code isn't changing, the developers shouldn't need to run your tests.
The best solution, IMO, is to have the tests sit outside of the installable package. When you install Django and run manage.py test, you don't run the Django test suite, because you trust that the version of Django you've installed is stable. This should be the same for developers using your third-party application.
If there are specific settings you want to ensure work with your library, just write test cases that use those settings values.
Here's an example reusable django application that has the tests sit outside of the installed package:
https://github.com/Yipit/django-roughage/tree/master
It's a popular way to develop Python modules, as seen in:
https://github.com/kennethreitz/requests
https://github.com/getsentry/sentry
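The usual shape of that pattern is a small runner script at the repository root, next to (not inside) the installable package - roughly like this sketch. The module names are placeholders, and the runner import is the old-style one for the Django versions discussed here (newer Django uses django.setup() and DiscoverRunner):

```python
# runtests.py -- lives at the repository root, outside the installable "myapp" package.
# Module names are placeholders; the runner import is the old pre-DiscoverRunner one.
import sys

from django.conf import settings

settings.configure(
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
    INSTALLED_APPS=(
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "myapp",
    ),
)


def runtests():
    from django.test.simple import DjangoTestSuiteRunner
    failures = DjangoTestSuiteRunner(verbosity=1, interactive=True).run_tests(["myapp"])
    sys.exit(bool(failures))


if __name__ == "__main__":
    runtests()
```

Developers of the app run python runtests.py (or hook it into CI), while people who merely install the package never see these tests in their own manage.py test run.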

Unit Testing ASP.NET MVC3 Applications (with NHibernate)

I'm just starting my first MVC3 application, and I'm not sure how to unit-test it. I was planning to break out helper classes (static helpers, usually) into a separate assembly, as well as model classes, so that I could test them with NUnit.
So I'm OK on helper classes; but how do I test model classes (considering that they're annotated for NHibernate and tied to the database), and how can I test my views and controllers?
What are the specific tools and techniques I need to test NHibernate-bound models, as well as ASP.NET views and controllers? NUnit alone only solves part of the problem.
Edit: Here are some code samples - I'm not on my dev machine right now, so I don't have real code to showcase.
Models: Anything from the ActiveRecord documentation
Controllers: The standard HomeController from the MVC3 documentation
Views: Any strongly-typed view (let's say Create) generated from the right-click context menu (Add > View)
Specific questions:
How can I test saving models without actually saving to the main/production database?
Scope for testing views: should I simply test that the fields exist? What about validation error messages?
Controllers: scope for testing. Should I test that actions read and modify database data as expected (e.g. /get/id gets that object; /delete/id deletes that object)?
You can get there by various kinds of tests, but you need to apply them wisely depending on what you are going to test:
Use unit tests to test your controllers, or your business logic, without hitting the database.
Use integration tests running against an in-memory database (which NHibernate supports and which is easy to set up). You can make sure a scenario actually works end to end: given valid input, all the business logic runs, your controller passes the data to the persistence mechanism, and it goes correctly into the database.
You can use UI testing with frameworks such as Selenium, but only do that where really needed, because it is not as easy as the two previous types of tests and can become fragile and hard to maintain.
It is a best practice to keep your view (UI) thin and test the layers behind the UI, as testing the UI itself is probably not worth all the hassle.

Setting up proper testing for Django for TDD

I've been ignoring the need to test my project for far too long.
So I spent more than a day looking for ways to implement tests for my current apps and trying to get some TDD going for new apps.
I found a lot of "tutorials" with the steps: "1. Install this 2. Install that 3. Install thisnthat 4. Done!",
but no one seems to talk about how to structure your tests, both file-wise and code-wise.
And no one ever talks about how to set up a CI server, or how to integrate the testing with the deployment of your project.
A lot of people mention Fabric, virtualenv and nose - but no one describes how to make them work together as a whole.
What I keep finding is detailed information about how you set up a proper Rails environment with testing and CI etc...
Does anyone else feel that the Django community lacks in this area, or is it just me? :)
Oh, and does anyone else have any suggestions on how to do it?
As I see it, there are several parts to the problem.
One thing you need are good unit tests. The primary characteristic of unit tests is that they are very fast, so that they can test the combinatorial possibilities of function inputs and branch coverage. To get their speed, and to maintain isolation between tests, even if they are running in parallel, unit tests should not touch the database or network or file system. Such tests are hard to write in Django projects, because the Django ORM makes it so convenient to scatter database access calls throughout your product code. Hence any tests of Django code will inevitably hit the database. Ideally, you should approach this by limiting the database access in your product code to a thin data access layer built on top of the django ORM, which exposes methods pertinent to your application. Another approach is for your tests to mock out the ORM calls. In the worst case, you will give up on this: Your unit tests become integration tests: They actually hit the database, cross multiple layers of your architecture, and take minutes to run, which discourages developers from running them frequently enough.
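To make the "thin data access layer" point concrete, here's a rough sketch; every name in it is invented for illustration:

```python
# queries.py -- the only module that touches the ORM (the thin data access layer).
def overdue_invoices(customer):
    from invoices.models import Invoice  # imported lazily; the pure logic below never needs Django
    return list(Invoice.objects.filter(customer=customer, paid=False))


# logic.py -- pure business logic: no ORM, no database, combinatorially unit-testable.
def dunning_level(invoices):
    if not invoices:
        return "none"
    return "gentle" if len(invoices) < 3 else "firm"


# test_logic.py -- a fast unit test: runs in milliseconds, never opens a database connection.
import unittest


class DunningLevelTests(unittest.TestCase):
    def test_no_overdue_invoices_means_no_dunning(self):
        self.assertEqual(dunning_level([]), "none")

    def test_three_or_more_overdue_invoices_is_firm(self):
        self.assertEqual(dunning_level([object()] * 3), "firm")


if __name__ == "__main__":
    unittest.main()
```

The ORM-facing function then only needs a handful of slower integration tests, while the bulk of the branch and input coverage lives in fast tests of the pure functions (or in tests that mock the queries module out).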
The implication of this is that writing integration tests is easy - the canonical style of Django tests covers this perfectly.
The final, and hardest, part of the problem is running your acceptance tests. The defining characteristic of acceptance tests is that they invoke your application end-to-end, as a user does in production, to prove that your application actually works. Canonical Django tests using the Django test runner fall short of this. They do not issue actual HTTP requests (instead, they examine the URL config to figure out what middleware and view would get called to handle a particular request, and then they call it, in process). This means that such tests are not testing your web server config, nor any JavaScript, or rendering in the browser, etc. To test this, you need something like Selenium.
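For reference, the minimal shape of such a test - Django 1.4's LiveServerTestCase starts a real HTTP server for Selenium to drive. The page content asserted here is a placeholder, and the caveats below about running acceptance tests under the Django test runner still apply:

```python
# A sketch of a browser-driven acceptance test; the "Welcome" assertion is a placeholder.
from django.test import LiveServerTestCase
from selenium.webdriver.firefox.webdriver import WebDriver


class HomePageAcceptanceTest(LiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        cls.browser = WebDriver()
        super(HomePageAcceptanceTest, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(HomePageAcceptanceTest, cls).tearDownClass()
        cls.browser.quit()

    def test_home_page_renders_in_a_real_browser(self):
        # A real HTTP request through a real browser: server config, templates,
        # static files and JavaScript all get exercised, unlike the in-process test client.
        self.browser.get(self.live_server_url + "/")
        self.assertIn("Welcome", self.browser.page_source)
```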
Additionally, we have many server-side processes, such as cron jobs, which use code from our Django project. Acceptance tests which involve these processes should invoke the jobs just like cron does, as a new process.
Both these scenarios have some problems. Firstly, you can't simply run such tests under the Django test runner. If you try to do so, then you will find that the test data you have written during the test setup (either using the Django fixtures mechanism, or by simply calling "MyModel().save()" in a test) are in a transaction which your product code, running in a different process, is not party to. So your tests have to commit the changes they make before the product code can see them. This interferes with the clean-up between tests that Django's test runner helpfully does, so you have to switch it into a different mode, which does explicit deletes rather than rolling back. Sadly, this is much slower. At last night's London Django user group, a couple of Django core developers assured me that this scenario also has other complications (which I confess I don't know the details of), which it is best to avoid by not running acceptance tests within the Django test runner at all, but creating them as a completely stand-alone set of tests.
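Concretely, the "different mode" is Django's TransactionTestCase, which commits for real and cleans up by truncating tables instead of rolling back. A sketch - the model and the management command here are hypothetical:

```python
# Sketch only: the Invoice model and the "send_reminders" management command are hypothetical.
import subprocess
import sys

from django.test import TransactionTestCase

from invoices.models import Invoice


class SendRemindersCronTest(TransactionTestCase):
    def test_the_cron_job_marks_overdue_invoices_as_reminded(self):
        invoice = Invoice.objects.create(paid=False)  # really committed, not just pending rollback
        # Invoke the job exactly as cron would: as a fresh process.
        # NB: that process must also be pointed at the *test* database -- which is
        # precisely the next problem described below.
        subprocess.check_call([sys.executable, "manage.py", "send_reminders"])
        self.assertTrue(Invoice.objects.get(pk=invoice.pk).reminder_sent)
```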
If you do this, then your immediate problem is that you have lost the benefits the Django test runnner provides: Namely it creates a test database, and cleans it between each test. You will have to create some equivalent mechanism for yourself. You will need your product code to run against a test database if it is being invoked as part of a test. You need to be absolutely certain that if product code is run as part of a test, even on a production box, then it can NEVER accidentally touch the production database, so this mechanism has to be absolutely failsafe. Forgetting to set an environment variable in a test setup, for example, should not cause a blooper in this regard.
This is all before even considering the complications that arise from deployment, having parts of your project in different repos, dependent on each other, creating pip-installable packages, etc.
All in all, I'd love to hear from someone who feels they have found a good solution to this problem. It is far from the trivial issue that some commenters imply it to be.
Harry Percival is creating a Django / TDD / Selenium tutorial (and an accompanying workshop, if you live in London). His book reads like a hands-on tutorial and goes into great detail on the subject:
https://www.obeythetestinggoat.com/book/part1.harry.html
In my experience, fine-grained unit tests for web apps are not worth it: the setup/teardown is too expensive and the tests are too fragile. The only exception is isolated components, especially those with clear inputs and outputs and complicated algorithms. Do unit-test those down to the smallest details.
I had the best testing experience using a semi-functional testing tool called testbrowser, which simulates browser actions in Python. For integration with Django, install the homophony app (disclaimer: I am the author of the app).
Testbrowser may be a little too coarse for test-driven development, but it's the best testing tool of the ones I have used so far. Most importantly, it scales up fairly well, whereas unit tests and browser-based functional test tools tend to become very brittle as your app grows in size.
As for a CI tool, go with Buildbot or Jenkins.
I use a combination of Django's excellent extension of the Python unittest framework for testing APIs / models / helper functions, and Selenium for in-browser testing. Selenium has great instructions for how to set it up and write tests in Python.