Optional test cases in Django - django

We have a moderately large test suite for business logic that completes within a few seconds. We run it as a pre-commit hook that must pass, which has been working well to stop the most obvious mistakes from making it off my machine.
We've recently started adding end-to-end frontend tests with WebDriver. Some of these tests exercise third-party integrations. The tests are useful, but they're really slow and require a network connection.
We also have some extremely long-running logic tests that stay commented out (yeah, I know!) unless we suspect something is wrong.
Is there a sensible way to split these slow tests out so they only run when we specifically want them to, and not every time we run ./manage.py test?

If you use the default Django test runner, there is no simple way of doing what you want. One option is rearranging the test directory structure so you can call ./manage.py test path/to/directory_with/webtests or ./manage.py test path/to/directory_with_fast_tests separately, as sketched below.
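For example, a layout along these lines (directory and file names are illustrative, not from the original post) lets you target each group on its own:
myapp/tests/fast/test_models.py
myapp/tests/webtests/test_checkout_flow.py
Then ./manage.py test myapp.tests.fast runs only the quick suite, while ./manage.py test myapp.tests.webtests runs the slow browser tests (you may need __init__.py files in each directory so the runner discovers them as packages).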
Another solution is using pytest Custom Markers
As the documentation states:
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app
Register custom marker:
# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
Then you just run pytest -v -m webtest and only marked tests will be executed.
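Conversely, running pytest -v -m "not webtest" executes everything except the marked tests, which matches the fast pre-commit run described in the question.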

Related

Is it possible to get an interactive django shell using the test database?

When running tests, you can do:
./manage.py test --keepdb
To run your tests, and keep the test database.
Is it possible to have the django shell actually connect to it, so we can interactively access the test database the same way the Django shell can normally work with the production database?
Note that the answer and its comments here imply that you can access it by doing something like:
from django import test
test.utils.setup_test_environment()
from django.db import connection
db = connection.creation.create_test_db(keepdb=True)
But when I do that, my database appears to be empty when I do queries.
I ran into this too. At first I thought it was because the codebase I'm working on has a flush call in the teardown function, but my DB was still empty even after removing those. Maybe there were more flushes somewhere I didn't catch.
I ended up getting around this by sleeping at the end of the test, so that it doesn't exit and doesn't clean up.
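A less hacky alternative (a sketch, not from the original thread): run the suite once with --keepdb, then point an ordinary shell at the preserved test database, which Django names test_<NAME> by default.
# settings_testshell.py -- hypothetical settings module; assumes a single
# 'default' database and that 'myproject.settings' is your real settings module.
from myproject.settings import *  # noqa: F401,F403

# Django's test runner names the test database 'test_' + NAME by default.
DATABASES['default']['NAME'] = 'test_' + DATABASES['default']['NAME']
Then ./manage.py test --keepdb followed by ./manage.py shell --settings=settings_testshell gives you an interactive shell against the kept test data.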

How to *automate* browser based unit tests, locally or on Jenkins

I've straight up been looking for answers to this question for months and still have no idea how to actually do it: how does one automate tests that run in the browser? Selenium tests run on the backend and can of course be automated, and we are doing this now. Recently I have been tasked with automating browser-based unit tests, and I have a significant knowledge gap.
For example, how does an automated test runner collect the test results and exit codes of (unit) tests that run in the browser? Can anyone explain how this is actually done and the steps to accomplish it?
Is Karma the best tool to accomplish this?
You can use http://phantomjs.org/. PhantomJS is a headless web browser: think of it as a full-stack browser without a GUI, usable as a library. Together with Karma you can execute your unit tests without relying on any GUI implementation.
Here is a blog post that explains the different components in such a scenario: http://orizens.com/wp/topics/my-setup-for-testing-js-with-jasmine-karma-phantomjs-angularjs/
This means you can execute your Karma unit tests on a headless Linux server.
Clarification:
The need for PhantomJS doesn't come from unit testing itself. It comes from the fact that your JS unit tests depend on the browser API.
It's a good design principle to structure the code so that the coupling to the browser API isn't scattered across the whole codebase. Try to introduce a thin layer that encapsulates the browser API dependencies. That way you can test most of your JS without needing PhantomJS.
Executing unit tests with PhantomJS takes time. If you have a lot of unit tests, it's better to factor out the dependencies on the browser API, so that most tests run without PhantomJS and only a minority need it; the sketch below illustrates the idea.
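The original advice concerns JavaScript, but the principle is language-agnostic; here is a minimal sketch in Python for consistency with the rest of this page (BrowserShim, FakeShim, and columns_for_viewport are invented names for illustration): isolate the environment-dependent calls behind one small object, so the bulk of the logic is unit-testable with a stub.
class BrowserShim:
    """Thin layer: the ONLY place that touches the browser/environment API."""
    def viewport_width(self):
        raise NotImplementedError  # a real subclass would query the browser

def columns_for_viewport(shim):
    # Pure logic: no browser dependency, trivially unit-testable.
    return 1 if shim.viewport_width() < 600 else 3

class FakeShim(BrowserShim):
    """Stub used in tests instead of a real browser."""
    def __init__(self, width):
        self._width = width
    def viewport_width(self):
        return self._width

assert columns_for_viewport(FakeShim(400)) == 1
assert columns_for_viewport(FakeShim(1024)) == 3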
You can probably use Cucumber. Say you have 20 test cases that you need to execute.
You can create a feature file that will contain all the scenarios.
The runner classes and the step methods defining what needs to be done can live in a different package. Let's say you have a scenario to:
1. Open browser.
2. Navigate to the Google URL.
3. Log in using credentials.
Create a feature file with the above information.
Use a Cucumber runner class, and create step methods such as:
@When("^Open Browser$")
public void open_Browser() throws Throwable {
    // WebDriver needs a full URL, including the scheme
    WebDriver driver = new FirefoxDriver();
    driver.get("https://www.google.com");
}
Similarly, you can create the remaining step methods. To run the jar, you can use Cucumber's command-line interface.
Great article: LINK
Basics:
This is for Python automation; you'll need to have some previous knowledge/experience.
pip install selenium
pip install nose
The above should be executed in cmd or a shell...
For this test we will open the AWeber website at http://www.aweber.com
using Firefox, and make sure that the title of the page is "AWeber Email Marketing Services & Software Solutions for Small Business".
import unittest
from selenium import webdriver

class AweberTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()

    def test_title(self):
        self.driver.get('https://www.aweber.com')
        self.assertEqual(
            self.driver.title,
            'AWeber Email Marketing Services & Software Solutions for Small Business')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
Running the test using nose:
nosetests aweber.py
Next test, clicking on elements:
self.driver.get('https://www.aweber.com')
order_tab = self.driver.find_element_by_css_selector('#ordertab>a')
order_tab.click()
There are many locator methods we can use: find_element_by_(css_selector/xpath/name/id); see the Selenium documentation on locating elements.
In this case we used the click method, but we can also .send_keys("asdf"), scroll, or execute JavaScript using:
browser.execute_script("alert('I canNNNNN not do javascript')")
Full code example: LINK-Pastebin

Best practice for organizing selenium tests and unit tests

So I am experimenting with the introduction of selenium unit tests in django 1.4 in a couple of projects I am working on.
The standard way to run my unit tests are simply to do ./manage.py test and I use django-ignoretests to exclude specific django apps that I do not want tested (as needed).
However, is there a way to configure my project so that I can decide to run only Selenium tests when I want to, and have ./manage.py test run only standard unit tests?
What are some best practices for segregating and organizing selenium tests and standard unit tests?
You could always group all your Selenium tests under a single package, myapp/selenium_tests/ (as described in https://stackoverflow.com/a/5160779/1138710, for instance), group the rest of your tests under, say, myapp/other_tests, and then run manage.py test myapp.selenium_tests.
Otherwise, I suppose you could write a test runner that checks for each test class whether it derives from LiveServerTestCase (see the docs: https://docs.djangoproject.com/en/dev/topics/testing/#defining-a-test-runner)
For the test classes in question, I added the following decorator:
import unittest
from django.conf import settings

@unittest.skipIf(getattr(settings, 'SKIP_SELENIUM_TESTS', False), "Skipping Selenium tests")
Then, to skip those tests, add this to the settings file: SKIP_SELENIUM_TESTS = True
This could easily be wrapped into a subclass of LiveServerTestCase or a simple decorator; if I needed it in more than one place, it already would be. A sketch of that subclass follows.
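A minimal sketch of such a base class (the name SeleniumTestCase is an assumption for illustration, not from the original answer):
import unittest

from django.conf import settings
from django.test import LiveServerTestCase

@unittest.skipIf(getattr(settings, 'SKIP_SELENIUM_TESTS', False),
                 "Skipping Selenium tests")
class SeleniumTestCase(LiveServerTestCase):
    """Base class for Selenium tests; the whole class is skipped
    when SKIP_SELENIUM_TESTS = True in settings."""
Test classes then inherit from SeleniumTestCase instead of repeating the decorator.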

Django test to use existing database

I'm having a hard time customizing the test database setup behavior. I would like to achieve the following:
The test suites need to use an existing database
The test suite shouldn't erase or recreate the database; instead, it should load the data from a MySQL dump
Since the db is populated from a dump, no fixtures should be loaded
Upon finishing tests the database shouldn't be destroyed
I'm having a hard time getting the test suite runner to bypass database creation.
Fast-forward to 2016, and the ability to retain the database between test runs has been built into Django. It's available in the form of the --keepdb flag to manage.py test:
New in Django 1.8. Preserves the test database between test runs. This
has the advantage of skipping both the create and destroy actions
which can greatly decrease the time to run tests, especially those in
a large test suite. If the test database does not exist, it will be
created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database
before running the test suite.
This pretty much fulfills all the criteria you mentioned in your question. In fact, it even goes one step further: there is no need to import the dump before each and every run.
This TEST_RUNNER works in Django 1.3:
from django.test.simple import DjangoTestSuiteRunner as TestRunner

class DjangoTestSuiteRunner(TestRunner):
    def setup_databases(self, **kwargs):
        pass  # skip test-database creation; use the existing database

    def teardown_databases(self, old_config, **kwargs):
        pass  # leave the database in place after the run
You'll need to provide a custom test runner.
The bits you're interested in overriding in the default django.test.runner.DiscoverRunner are the DiscoverRunner.setup_databases and DiscoverRunner.teardown_databases methods. These two methods handle creating and destroying test databases and are executed only once. You'll want to provide test-specific project settings that use your existing test database by default, and override those two methods so that the dump data is loaded and the test database isn't destroyed.
Depending on the size and contents of the dump, a safe bet might be to spawn a subprocess that pipes the dump to your database's SQL command-line interface; otherwise you might be able to obtain a cursor and execute queries directly.
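A minimal sketch of such a runner, assuming MySQL and a dump file at dump.sql (the class name, the file name, and the omission of authentication flags are all assumptions for illustration):
import subprocess

from django.conf import settings
from django.test.runner import DiscoverRunner

class ExistingDBTestRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        # Pipe the dump into the existing database instead of creating
        # a fresh test database. Add -u/-p flags as your setup requires.
        db = settings.DATABASES['default']
        with open('dump.sql') as dump:
            subprocess.check_call(['mysql', db['NAME']], stdin=dump)
        return None  # nothing for teardown_databases to undo

    def teardown_databases(self, old_config, **kwargs):
        pass  # keep the database after the run
Point TEST_RUNNER at this class in your test settings to activate it.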
If you're looking to get rid of fixture loading completely, you can provide a custom base test case that extends Django's default django.test.testcases.TestCase, with the TestCase._fixture_setup and TestCase._fixture_teardown methods overridden to be no-ops, as sketched below.
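For instance (NoFixtureTestCase is an invented name; note that TestCase._fixture_setup also handles per-test transaction wrapping, so overriding it has side effects beyond skipping fixtures):
from django.test import TestCase

class NoFixtureTestCase(TestCase):
    def _fixture_setup(self):
        pass  # skip fixture loading (and the transaction wrapping it performs)

    def _fixture_teardown(self):
        pass  # skip the corresponding rollback/flush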
Caveat emptor: this runner makes it impossible to run tests against anything but your application's sources. It's possible to customize the runner to create a specific alias for a connection to your existing database and load the dump, then provide a custom test case that overrides TestCase._database_names to point to its alias.

How do I tell Django to save my test database?

Running Django unit tests is far too slow. Especially when I just want to run one test but the test runner wants to create the entire database and destroy the whole thing just for that one test.
In the case where I have not changed any of my models, I could save oodles of time if Django did not bother creating and destroying the entire database, and instead saved it for next time. Better yet, it would be great if the test runner could see which models have changed and replace only those before running tests.
I'd prefer not to have to subclass the test runner myself, but that's what I'm going to have to do if I don't find a solution soon. Is there anything like this already in existence?
Django 1.8 added a new --keepdb option to the manage.py test command:
./manage.py test --keepdb
Have you tried using an in-memory SQLite database for tests? It's much faster than using a disk-based database.
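A common way to wire that up (the sys.argv check is a widespread convention rather than an official Django API; adjust to your settings layout):
# settings.py (assumes DATABASES is already defined above)
import sys

if 'test' in sys.argv:
    DATABASES['default'] = {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }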
I'm using django-nose. If you set the env var REUSE_DB=1, it will not destroy the DB after running tests and will reuse that same DB for the next run. Whenever your schema changes, just set REUSE_DB=0 and do one 'full' run. After that, reset it to 1 and you're good to go.
https://github.com/django-nose/django-nose