One summary for multiple test files using python unittest - python-2.7

I want to set up automated testing for my Python project, but I'm not sure about the correct way to use the unittest module.
All of my test files are currently in one folder and have this format:
import unittest

class SampleTest(unittest.TestCase):
    def testMethod(self):
        # Assertion here
        pass

if __name__ == "__main__":
    unittest.main()
Then I run
find ./tests -name "*_test.py" -exec python {} \;
When there are three test files, it outputs
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
It prints one summary for each test file. So the question is: what can I do to get a single combined summary, e.g. Ran 5 tests in 0.001s?
Thanks in advance.
And I don't want to install any other module.

You are invoking Python multiple times, and each process has no knowledge of the others. You need to run Python once and use unittest's test discovery mechanism.
Run in shell:
python -m unittest discover
Depending on your project structure and naming conventions, you may want to tweak the discovery parameters, e.g. the --pattern option, as described in the help (a concrete example for your layout follows the help output):
Usage: python -m unittest discover [options]

Options:
  -h, --help            show this help message and exit
  -v, --verbose         Verbose output
  -f, --failfast        Stop on first fail or error
  -c, --catch           Catch Ctrl-C and display results so far
  -b, --buffer          Buffer stdout and stderr during tests
  -s START, --start-directory=START
                        Directory to start discovery ('.' default)
  -p PATTERN, --pattern=PATTERN
                        Pattern to match tests ('test*.py' default)
  -t TOP, --top-level-directory=TOP
                        Top level directory of project (defaults to start
                        directory)
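For your layout (tests in ./tests, named *_test.py, which the default test*.py pattern will not match), something along these lines should collect everything into a single run with one summary; depending on whether ./tests is a package, you may also need an __init__.py there or the -t option:
python -m unittest discover -s tests -p "*_test.py"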
While you said I don't want to install any other module, I'd still recommend using another test runner. There are quite a few out there, pytest or nose to name a couple.

Related

Installed Go with Homebrew, can't find $GOROOT, causing package failures

I installed Go with Homebrew and it usually works. I'm following the tutorial here on creating a serverless API in Go. When I try to run the unit tests, I get the following error:
# _/Users/pro/Documents/Code/Go/ServerLess
main_test.go:6:2: cannot find package "github.com/strechr/testify/assert" in any of:
/usr/local/Cellar/go/1.9.2/libexec/src/github.com/strechr/testify/assert (from $GOROOT)
/Users/pro/go/src/github.com/strechr/testify/assert (from $GOPATH)
FAIL _/Users/pro/Documents/Code/Go/ServerLess [setup failed]
Pros-MBP:ServerLess Santi$ echo $GOROOT
I have installed the test library with: go get github.com/stretchr/testify
I would appreciate it if anyone could point me in the right direction.
Also confusing: when I run echo $GOPATH it doesn't return anything, and the same goes for echo $GOROOT.
Some things to try/verify:
As JimB notes, starting with Go 1.8 the GOPATH env var is optional and has a default value: https://rakyll.org/default-gopath/
While you don't need to set it, the directory does need to have the Go workspace structure (see the layout sketch after these steps): https://golang.org/doc/code.html#Workspaces
Once that is created, create your source file in something like: $GOPATH/src/github.com/DataKid/sample/main.go
cd into that directory, and re-run the go get commands:
go get -u -v github.com/stretchr/testify
go get -u -v github.com/aws/aws-lambda-go/lambda
Then try running the test command again: go test -v
The -v option is for verbose output, and the -u option ensures you download the latest package versions (https://golang.org/cmd/go/#hdr-Download_and_install_packages_and_dependencies).
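For reference, a sketch of the workspace layout those steps assume (the DataKid/sample path is just the example name used above; since Go 1.8 the default GOPATH is $HOME/go):
$HOME/go/
    src/
        github.com/
            DataKid/
                sample/
                    main.go         # your handler code
                    main_test.go    # the test that imports testify
            stretchr/
                testify/            # created by go get
            aws/
                aws-lambda-go/      # created by go get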

Which Django TEST_RUNNER supports xunit XML and logging capture?

I'm attempting to set up a new Django project, and I've configured TEST_RUNNER in settings.py to be django_nose.NoseTestSuiteRunner.
I chose this test runner because it seems to be the only one I can find that has the following features:
writes xunit xml test report
captures logging/stdout and only displays for failed tests.
However, I've heard that nose is unmaintained, and I'm having a hard time finding a suitable replacement. The standard test runner neither captures logging nor writes xunit XML, as far as I can tell (would love to be proven wrong!).
I run tests like so:
python -m coverage run manage.py test --noinput
python -m coverage report --include="app/*" --show-missing --fail-under=100
python -m coverage xml --include="app/*" -o ./reports/coverage.xml
With this in settings.py:
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
And this setup.cfg:
[nosetests]
verbosity=0
with-xunit=1
xunit-file=./reports/xunit.xml
logging-clear-handlers=1
The last two lines are the real juicy bits I can't seem to find in other test runners: nose captures the logging and clears other logging handlers (e.g. the handler that dumps to stdout), so the test run's output is much cleaner (you only see logging for tests that failed).
In other, non-Django projects I typically use nose2, but the django-nose2 project appears to be six years old and lacking Python 3 support.
Please let me know which test runner is the "recommended" one (e.g. the most popular) with Django support, thanks.
I have had success with unittest-xml-reporting:
TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner'
https://github.com/xmlrunner/unittest-xml-reporting#django-support
The output directory can be configured with the TEST_OUTPUT_DIR setting.
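For completeness, a minimal sketch of the settings this implies (TEST_OUTPUT_DIR is the setting the XMLTestRunner reads; the ./reports path simply mirrors the scripts above and is an assumption about your layout):
# settings.py
TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner'
TEST_OUTPUT_DIR = './reports'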
You may still use the nose runner:
INSTALLED_APPS += ['django_nose']
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = [
    '--with-xunit',
    '--xunit-file=nosetests.xml',
    '--with-coverage',
    '--cover-erase',
    '--cover-xml',
    '--cover-xml-file=nosecover.xml',
]
So pytest produces some very nice test output. I've unset TEST_RUNNER in settings.py and changed my test script to:
python -m coverage run -m pytest --junitxml=./reports/junit.xml
python -m coverage report --include="app/*" --show-missing --fail-under=100
python -m coverage xml --include="app/*" -o ./reports/coverage.xml
This works, and captures ALL logging output (nose was a little buggy and let one or two logging statements slip through, very strange behavior).
The only thing is that I'm a django novice so I don't know if there are any bad side-effects of not using manage.py test for testing django. Any guidance is appreciated, thanks!
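One caveat if you do bypass manage.py test: pytest has to be told where your Django settings live, normally via the pytest-django plugin plus a small config block. A sketch assuming setup.cfg and a settings module named app.settings (both assumptions on my part):
[tool:pytest]
DJANGO_SETTINGS_MODULE = app.settings
python_files = tests.py test_*.py *_tests.py
With that in place, pytest-django handles test database setup and teardown much like manage.py test does.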

I want to port a module from Python 2 to 3, how do I run tests?

I've been writing in Python 3 for a while, and I came across this library that I really need:
https://github.com/Yelp/python-gearman
but I want to try to port it to Python 3. However, I don't know how one runs the tests in a Python module. I tried python -m unittest discover but it didn't discover any tests. And once I actually do change something for Python 3, how would I test it? Is the testing mechanism the same in Python 3 as in 2?
First figure out how to run the tests in 2.7, assuming that the test directory is installed along with the gearman directory. (It should be if you have git and clone the GitHub repository, but I have not used git, just hg.) In the test/ directory, add and run a shell/console script like the following. (You did not specify your OS.)
python -m admin_client_tests
python -m client_tests
python -m protocol_tests
python -m worker_tests
where python invokes 2.7 on your system. Each of these modules imports _core_testing.py, which should not be run directly. There should be a way to run all tests at once, and it should be documented, but the authors may not expect users to run them. Anyway, run 2to3 to produce a new package directory, look at the messages, change python to run 3.x, and test. The difficulty may be anything from 'no-brainer' to 'give it up'. (I did one conversion that was about a 1 or 2 on a 0-to-10 difficulty scale.) If successful, ask the authors whether they want to make the code 2-and-3 compatible or keep a separate 3.x version.
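A minimal sketch of that 2to3 step (the gearman3 output directory name is mine; -W writes out every file, changed or not, so the new tree is complete, and -n skips backups, which --output-dir requires):
2to3 -n -W --output-dir=gearman3 gearman
Then point your Python 3 interpreter at the new directory and re-run the test modules above.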
You should specify the pattern explicitly (the default, test*.py, doesn't match in this case):
$ python -m unittest discover -p \*_tests.py
.............................................................
----------------------------------------------------------------------
Ran 61 tests in 0.040s
OK
To run individual test files:
$ python -m tests.client_tests

Getting extended information on Django unittest framework

I am using the Django unit test framework for testing my application.
Whenever I execute all the test cases, I get only very brief information about the tests that ran successfully.
----------------------------------------------------------------------
Ran 252 tests in 8.221s
OK
This is very little information. I would like some more information about each test case, e.g.:
time taken by each test case to execute,
successful completion of each test module,
etc.
Is there a debug parameter (or any other parameter) that can enable this extended information about the test cases that were executed?
NOTE: using the verbosity parameter does not satisfy my needs.
Each Django command has a --help option. If you type python manage.py test --help you will see all the available options for the test command:
Options:
  -v VERBOSITY, --verbosity=VERBOSITY
                        Verbosity level; 0=minimal output, 1=normal output,
                        2=verbose output, 3=very verbose output
  --settings=SETTINGS   The Python path to a settings module, e.g.
                        "myproject.settings.main". If this isn't provided, the
                        DJANGO_SETTINGS_MODULE environment variable will be
                        used.
  --pythonpath=PYTHONPATH
                        A directory to add to the Python path, e.g.
                        "/home/djangoprojects/myproject".
  --traceback           Raise on exception
  --noinput             Tells Django to NOT prompt the user for input of any
                        kind.
  --failfast            Tells Django to stop running the test suite after
                        first failed test.
  --testrunner=TESTRUNNER
                        Tells Django to use specified test runner class
                        instead of the one specified by the TEST_RUNNER
                        setting.
  --liveserver=LIVESERVER
                        Overrides the default address where the live server
                        (used with LiveServerTestCase) is expected to run
                        from. The default value is localhost:8081.
  -t TOP_LEVEL, --top-level-directory=TOP_LEVEL
                        Top level of project for unittest discovery.
  -p PATTERN, --pattern=PATTERN
                        The test matching pattern. Defaults to test*.py.
  --version             show program's version number and exit
  -h, --help            show this help message and exit
As you can see, you can set a verbosity level by adding -v [level].
Try, for example: python manage.py test -v 3
If you want the time for each test, plus some extra info, check out django-juno-testrunner as one option. We wanted more info out of our test runs, so we built it in.
Note that it's Django 1.6+ only at the moment.
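If neither verbosity nor django-juno-testrunner covers it, per-test timing can also be wired in with a small custom runner. A rough sketch, assuming Django 1.6+ and its default DiscoverRunner; the module and class names are mine, not a Django API:
# myproject/timed_runner.py
import time
import unittest

from django.test.runner import DiscoverRunner


class TimedTextTestResult(unittest.TextTestResult):
    def startTest(self, test):
        self._started_at = time.time()
        super(TimedTextTestResult, self).startTest(test)

    def addSuccess(self, test):
        super(TimedTextTestResult, self).addSuccess(test)
        # report how long the test took alongside the usual dot/"ok" output
        self.stream.writeln("%s: %.3fs" % (test, time.time() - self._started_at))


class TimedTextTestRunner(unittest.TextTestRunner):
    resultclass = TimedTextTestResult


class TimedDiscoverRunner(DiscoverRunner):
    test_runner = TimedTextTestRunner
Point TEST_RUNNER at 'myproject.timed_runner.TimedDiscoverRunner' in settings.py and run python manage.py test as usual.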

How to use pdb.set_trace() in a Django unittest?

I want to debug a Django TestCase just like I would any other Python code: Simply call pdb.set_trace() and then drop into an interactive session. When I do that, I don't see anything, as the tests are run in a different process. I'm using django-discover-runner, but my guess is that this applies to the default Django test runner.
The question:
Is it possible to drop into a pdb session while using django-discover-runner a) on every error / fail, AND/OR b) only when I call pdb.set_trace() in my test code?
Some research:
This answer explains that Django creates another process, and suggests calling the rpdb2 debugger, part of winpdb, but I don't want to use winpdb; I'd rather use ipdb.
This answer solves the problem for django-nose by running the test command like this: ./manage.py test -- -s, but that option's not available for django-discover-runner.
This answer shows that I can do this with ipython:
In [9]: %pdb
Automatic pdb calling has been turned ON
That seems like a potential option, but it seems a bit cumbersome to fire up ipython every time I run tests.
Finally, this answer shows that nose comes with a --pdb flag that drops into pdb on errors, which is what I want. Is my only option to switch to the django-nose test runner?
I don't see any options for this in the built-in help for django-discover-runner:
$ python manage.py help test --settings=settings.test
Usage: manage.py test [options] [appname ...]
Runs the test suite for the specified applications, or the entire site if no apps are specified.
Options:
  -v VERBOSITY, --verbosity=VERBOSITY
                        Verbosity level; 0=minimal output, 1=normal output,
                        2=verbose output, 3=very verbose output
  --settings=SETTINGS   The Python path to a settings module, e.g.
                        "myproject.settings.main". If this isn't provided, the
                        DJANGO_SETTINGS_MODULE environment variable will be
                        used.
  --pythonpath=PYTHONPATH
                        A directory to add to the Python path, e.g.
                        "/home/djangoprojects/myproject".
  --traceback           Print traceback on exception
  --noinput             Tells Django to NOT prompt the user for input of any
                        kind.
  --failfast            Tells Django to stop running the test suite after
                        first failed test.
  --testrunner=TESTRUNNER
                        Tells Django to use specified test runner class
                        instead of the one specified by the TEST_RUNNER
                        setting.
  --liveserver=LIVESERVER
                        Overrides the default address where the live server
                        (used with LiveServerTestCase) is expected to run
                        from. The default value is localhost:8081.
  -t TOP_LEVEL, --top-level-directory=TOP_LEVEL
                        Top level of project for unittest discovery.
  -p PATTERN, --pattern=PATTERN
                        The test matching pattern. Defaults to test*.py.
  --version             show program's version number and exit
  -h, --help            show this help message and exit
Django does not run tests in a separate process; the linked answer claiming it does is simply wrong. (The closest is the LiveServerTestCase for Selenium tests, which starts up a separate thread to run the development server, but this is still not a separate process, and it doesn't prevent use of pdb). You should be able to insert import pdb; pdb.set_trace() anywhere in a test (or in the tested code) and get a usable pdb prompt. I've never had trouble with this, and I just verified it again in a fresh project with Django 1.5.1 and django-discover-runner 1.0. If this isn't working for you, it's due to something else in your project, not due to Django or django-discover-runner.
Nose captures all output by default, which breaks import pdb; pdb.set_trace(). The -s option turns off output capturing. This is not necessary with the stock Django test runner or django-discover-runner, since neither of them do output-capturing to begin with.
I don't know of any equivalent to nose's --pdb option if you're using django-discover-runner. There is a django-pdb project that provides this, but a quick perusal of its code suggests to me that it wouldn't play well with django-discover-runner; its code might give you some clues towards implementing this yourself, though.
FWIW, personally I use py.test with pytest-django rather than django-discover-runner or django-nose. And even though py.test provides a --pdb option like nose, I don't use it; I often want to break earlier than the actual point of error in order to step through execution prior to the error, so I usually just insert import pytest; pytest.set_trace() (importing set_trace from pytest does the equivalent of nose's -s option; it turns off py.test's output capturing before running pdb) where I want it in the code and then remove it when I'm done. I don't find this onerous; YMMV.
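For the b) case, a minimal sketch of what that looks like inside a test (the class name and assertion are made up):
from django.test import TestCase


class WidgetTest(TestCase):
    def test_something(self):
        import pdb; pdb.set_trace()  # the run pauses here with a (Pdb) prompt
        self.assertTrue(True)
Run python manage.py test as usual; since neither the stock runner nor django-discover-runner captures stdout/stdin, the prompt appears directly in your terminal.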
Try using ipdb instead of pdb:
import ipdb; ipdb.set_trace()
or (this works with the nose test runner):
from nose.tools import set_trace; set_trace()