So I am experimenting with introducing Selenium unit tests in Django 1.4 in a couple of projects I am working on.
The standard way to run my unit tests is simply to do ./manage.py test, and I use django-ignoretests to exclude specific Django apps that I do not want tested (as needed).
However, is there a way to configure my project so that I can decide to run only Selenium tests when I want to, and have ./manage.py test run only standard unit tests?
What are some best practices for segregating and organizing selenium tests and standard unit tests?
You could always group all your Selenium tests under a single package myapp/selenium_tests/ (as described here https://stackoverflow.com/a/5160779/1138710 for instance), group the rest of the tests under, say, myapp/other_tests, and then run manage.py test myapp.selenium_tests when you want the Selenium suite.
Otherwise, I suppose you could write a test runner that checks, for each test class, whether it derives from LiveServerTestCase (see the docs: https://docs.djangoproject.com/en/dev/topics/testing/#defining-a-test-runner).
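A rough, untested sketch of what such a runner could look like on Django 1.4 (the class and module names below are my own, not an official API):

import unittest

from django.test import LiveServerTestCase
from django.test.simple import DjangoTestSuiteRunner

class NoSeleniumTestSuiteRunner(DjangoTestSuiteRunner):
    """Builds the normal suite, then drops anything deriving from LiveServerTestCase."""
    def build_suite(self, *args, **kwargs):
        suite = super(NoSeleniumTestSuiteRunner, self).build_suite(*args, **kwargs)
        kept = [test for test in suite if not isinstance(test, LiveServerTestCase)]
        return unittest.TestSuite(kept)

You would then point TEST_RUNNER in settings at that class (e.g. 'myproject.runners.NoSeleniumTestSuiteRunner', a path I made up) so that plain ./manage.py test skips the Selenium tests.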
For the test classes in question, I added the following decorator:
import unittest
from django.conf import settings

@unittest.skipIf(getattr(settings, 'SKIP_SELENIUM_TESTS', False), "Skipping Selenium tests")
Then, to skip those tests, add SKIP_SELENIUM_TESTS = True to the settings file.
This could easily be wrapped into a subclass of LiveServerTestCase or a simple decorator. If I had that in more than one place, it would be already.
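For example, a minimal sketch of such a base class (the class name is my own; subclasses inherit the class-level skip):

import unittest

from django.conf import settings
from django.test import LiveServerTestCase

@unittest.skipIf(getattr(settings, 'SKIP_SELENIUM_TESTS', False),
                 "Skipping Selenium tests")
class SeleniumTestCase(LiveServerTestCase):
    # Selenium test classes extend this instead of LiveServerTestCase.
    pass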
Related
When testing my Apache Spark application, I want to do some integration tests. For that reason I create a local Spark application (with Hive support enabled), in which the tests are executed.
How can I ensure that the Derby metastore is cleared after each test, so that the next test has a clean environment again?
What I don't want to do is restart the Spark application after each test.
Are there any best practices to achieve what I want?
I think that introducing application-level logic just for integration testing somewhat breaks the concept of integration testing.
From my point of view, the correct approach is to restart the application for each test.
Anyway, I believe another option is to start/stop the SparkContext for each test. That should clean up any relevant state.
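In pyspark terms (just to illustrate the pattern; your application may well be on the JVM side), that per-test start/stop could look roughly like this:

import unittest

from pyspark import SparkConf, SparkContext

class SparkIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Fresh local context for every test.
        conf = SparkConf().setMaster("local[2]").setAppName("integration-test")
        self.sc = SparkContext(conf=conf)

    def tearDown(self):
        # Tear the context down so the next test starts from a clean state.
        self.sc.stop()

    def test_count(self):
        self.assertEqual(self.sc.parallelize([1, 2, 3]).count(), 3)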
UPDATE - answer to comments
Maybe it's possible to do a cleanup by deleting tables/files?
I would ask a more general question: what do you want to test with your test?
In software development there are unit tests and integration tests, and nothing in between. If you want to write something that is neither an integration test nor a unit test, then you're doing something wrong. Specifically, with your test you are trying to test something that is already tested.
For the difference between unit and integration tests, and the general idea behind them, you can read here.
I suggest you rethink your testing and, depending on what you want to test, write either an integration test or a unit test. For example:
To test application logic - unit test
To test that your application works in its environment - integration test. But here you shouldn't test WHAT is stored in Hive, only that the storage actually happened, because WHAT is stored should already be covered by a unit test.
So. The conclusion:
I believe you need integration tests to achieve your goals, and the best way to do that is to restart your application for each integration test, because:
In real life your application will be started and stopped
In addition to your Spark stuff, you need to make sure that all the objects in your code are correctly deleted/reused. Singletons, persistent objects, configurations - they can all interfere with your tests
Finally, consider the code that performs the integration tests - where is the guarantee that it will not break production logic at some point?
We have a moderately large test suite for business logic and this completes within a few seconds. We're running this as a condition to commit (a hook that must pass) and that has been working well to block the most stupid mistakes from making it off my machine.
We've recently started adding end-to-end frontend tests with WebDriver. Some of these tests go through third-party integrations. The tests are useful, but they're really slow and require a network connection.
We also have some logic tests that are extremely slow and are commented out (yeah!) unless we suspect something is wrong.
Is there a sensible way to split these slow tests out so they only run when we specifically want them to and not every time you run ./manage.py test?
If you use the default Django test runner, there is no simple way of doing what you want. You could rearrange the test directory structure so that you can call ./manage.py test path/to/directory_with/webtests or ./manage.py test path/to/directory_with_fast_tests.
Another solution is to use pytest Custom Markers.
As the documentation states:
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app
Register custom marker:
# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
Then you just run pytest -v -m webtest and only marked tests will be executed.
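Marker expressions can also be negated, so the fast suite can stay the default thing you type, for example:

pytest -v -m "not webtest"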
Straight up been looking for answers to this question for months, still no idea how to actually do it - how does one automate tests that run in the browser? Selenium tests run on the backend and of course can be automated, and we are doing this now. Recently I have been tasked with automating browser based unit tests and I have a significant knowledge gap -
I don't know how to actually do it -
For example - how does an automated test runner collect the test results and exit codes of (unit) tests that run in the browser? Can anyone explain how this is actually done and the steps to accomplish it?
Is Karma the best tool to accomplish this?
You can use http://phantomjs.org/. PhantomJS is a headless web browser, which you can think of as a full-stack web browser without a GUI, usable as a library. Together with Karma you can execute your unit tests without relying on any GUI implementation.
Here is a blog post which explains the different components in such a scenario: http://orizens.com/wp/topics/my-setup-for-testing-js-with-jasmine-karma-phantomjs-angularjs/
This means you can execute your Karma unit tests on a headless Linux server.
Clarification:
The need for PhantomJS doesn't come from unit testing itself; it comes from the fact that your JS unit tests depend on the browser API.
It's a good design principle to structure the code so that the coupling to the browser API isn't scattered throughout the codebase. Try to introduce a thin layer which encapsulates the browser API dependencies. That way you can test most of your JS without needing PhantomJS.
Executing your unit tests with PhantomJS can take time. If you have a lot of unit tests, it's better to factor out the dependencies on the browser API, so that most tests can run without PhantomJS and only a minority of them need it.
You can probably use Cucumber. Say you have 20 test cases that you need to execute.
You can create a feature file which will contain all the scenarios.
The runner classes and the step definition methods for what needs to be done can be defined in a different package. Let's say you have a scenario to:
1. Open the browser.
2. Enter the Google link.
3. Log in using credentials.
Create a feature file with the above information.
Use a Cucumber runner class, and create step definition methods such as:
@When("^Open Browser$")
public void open_Browser() throws Throwable {
    WebDriver driver = new FirefoxDriver();
    driver.get("http://www.google.com");
}
Similarly, you can create the other step methods. To run the resulting jar, you can use the command line.
A great piece on this: LINK
Basics:
This is for Python automation; you'll need to have some previous knowledge/experience.
pip install selenium
pip install nose
The above should be executed in cmd or a shell.
For this test we will open the AWeber website at http://www.aweber.com
using Firefox, and make sure that the title of the page is "AWeber Email Marketing Services & Software Solutions for Small Business".
import unittest
from selenium import webdriver

class AweberTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()

    def test_title(self):
        self.driver.get('https://www.aweber.com')
        self.assertEqual(
            self.driver.title,
            'AWeber Email Marketing Services & Software Solutions for Small Business')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
Running the test using nose:
nosetests aweber.py
Next test, clicking on elements:
self.driver.get('https://www.aweber.com')
order_tab = self.driver.find_element_by_css_selector('#ordertab>a')
order_tab.click()
There are many selector methods that we can use: find_element_by_(css_selector/xpath/name/id) - see Locating elements.
In this case we used the click method, but we can also use .send_keys("asdf"), scroll, or execute JavaScript using:
browser.execute_script("alert('I canNNNNN not do javascript')")
Full code example: LINK-Pastebin
I want to speed up my tests by running only the migrations necessary to build the database. Currently, Django runs all of them.
I know how to specify dependencies among migrations and actively use that feature. I watch my dependencies, and most of my tests don't even depend on Django. I want some dividends for that.
Is there any way to specify particular apps or migrations on which a test depends?
I found the available_apps property of TransactionTestCase, which is marked as private API, but it doesn't work for me, even if I run a single test class or test method.
I want to define a model only for use in my test suite. It would be nice not to create its table in production. Is there any variable I can test against to check whether I'm in test mode?
If you're running your tests using the Django testing framework (python manage.py test), then it will automatically create all of the tables for your models in a completely different database, and then populate those tables from your application fixtures, prior to running your tests. Once the tests have completed, the database will be dropped. (If your production database is named foo, the test database will be named test_foo, unless you specify differently.)
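If you do want a different test database name, the settings key for that in pre-1.7 Django was TEST_NAME inside the database definition (the values here are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'foo',
        'TEST_NAME': 'foo_for_testing',
    }
}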
If you have models that you need only for tests, then all you have to do is to place your test models in the same directory structure as your test code, instead of intermingled with your production models. That will ensure that they are not inadvertently mixed into your production database.
If you use a recent version of Django (I can confirm versions 1.4 through 1.6) and use django.test, you can put all your test model definitions in tests/__init__.py. This way, you will have the test models available in unit tests without them polluting the production database.
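A minimal sketch of what that can look like (the app name and model are made up for illustration):

# myapp/tests/__init__.py
from django.db import models

class TestOnlyThing(models.Model):
    name = models.CharField(max_length=50)

    class Meta:
        # Needed because the model does not live in models.py.
        app_label = 'myapp'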