I have a bash script that runs my tests:
#!/bin/bash
coverage run --source='directory_for_coverage' manage.py test
coverage report --fail-under=87
but when I run the script, it only returns an error code if the coverage check fails, not if one of the tests fails. I would have thought that, since I am not using the --ignore-errors switch, coverage run would return the error code from the failing test. What am I missing?
I solved this by adding a set -e command to my script:
#!/bin/bash
set -e
coverage run --source='directory_for_coverage' manage.py test
coverage report --fail-under=87
Thank you, it worked for me!
The help set command gives some details:
-e Exit immediately if a command exits with a non-zero status.
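If exiting on the first failing command is too blunt (for example, you still want the coverage report even when a test fails), here is a minimal sketch of an alternative that propagates the exit codes explicitly with $?; the status variable names are my own:
#!/bin/bash
coverage run --source='directory_for_coverage' manage.py test
test_status=$?
coverage report --fail-under=87
report_status=$?
# report a failing test first, otherwise a missed coverage threshold
if [ $test_status -ne 0 ]; then
    exit $test_status
fi
exit $report_status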
I'm trying to configure a GitHub Action:
My action contains a job that runs the unit tests and collects code coverage. As I see in the log:
Test Run Successful.
Total tests: 336
Passed: 336
Total time: 14.0930 Seconds
Calculating coverage result...
Generating report 'TestResults/coverage.netcoreapp2.1.info'
Nonetheless, after these lines the log contains an error message:
/home/runner/.nuget/packages/coverlet.msbuild/2.9.0/build/coverlet.msbuild.targets(31,5): error : Module test path not found [/home/runner/work/ObservableComputations/ObservableComputations/src/ObservableComputations.Test/ObservableComputations.Test.csproj]
The job fails.
I tried to run
dotnet test --no-build --filter Name~Casting --verbosity normal /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov
on my local machine (MS Windows) and didn't get this error.
Any help is greatly appreciated.
The reason was the --no-build parameter for dotnet test. It seems coverage collection requires dotnet test to build the project itself.
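So the fix is simply to drop the flag and let dotnet test build before collecting coverage (the same command as above, minus --no-build):
dotnet test --filter Name~Casting --verbosity normal /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov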
I want to set up automated testing for my Python project, but I'm not sure about the correct way to use the unittest module.
All of my test files are currently in one folder and have this format:
import unittest

class SampleTest(unittest.TestCase):
    def testMethod(self):
        self.assertTrue(True)  # assertion here

if __name__ == "__main__":
    unittest.main()
Then I run
find ./tests -name "*_test.py" -exec python {} \;
When there are three test files, it outputs
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
It prints one summary for each test file. So the question is: what can I do to make it print only one test summary, e.g. Ran 5 tests in 0.001s?
Thanks in advance
And I don't want to install any other modules.
You are invoking Python multiple times, and each process has no knowledge of the others. You need to run Python once and use unittest's discovery mechanism.
Run in shell:
python -m unittest discover
Depending on your project structure and naming conventions, you may want to tweak the discovery parameters, e.g. the --pattern option, as described in the help (a concrete example follows the listing):
Usage: python -m unittest discover [options]
Options:
-h, --help show this help message and exit
-v, --verbose Verbose output
-f, --failfast Stop on first fail or error
-c, --catch Catch Ctrl-C and display results so far
-b, --buffer Buffer stdout and stderr during tests
-s START, --start-directory=START
Directory to start discovery ('.' default)
-p PATTERN, --pattern=PATTERN
Pattern to match tests ('test*.py' default)
-t TOP, --top-level-directory=TOP
Top level directory of project (defaults to start
directory)
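For the layout in the question (test files named *_test.py inside ./tests), a single discovery run would look something like this (a sketch; you may also need -t . if the tests import application code from the project root):
python -m unittest discover -s tests -p "*_test.py"
This collects every matching file into one suite and prints a single combined summary, e.g. Ran 5 tests in 0.001s.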
While you said I don't want to install any other module, I'd still recommend using another test runner. There are quite a few out there; pytest or nose, to name two.
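For instance, should you later relax the no-new-modules constraint: pytest collects files matching *_test.py out of the box and likewise prints one combined summary (assuming the ./tests layout from the question):
pip install pytest
pytest tests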
I have a project which has two folders on the same level:
/home/ishan/my_repo/jenkins_test/business_logic
/home/ishan/my_repo/jenkins_test/test_cases
test_cases has two files, test_fib and test_fact.
When I run nosetests --exe at /home/ishan/my_repo/jenkins_test/, it runs correctly, showing:
....
----------------------------------------------------------------------
Ran 4 tests in 0.036s
OK
To run these test cases, I created the following script at /home/ishan/my_repo:
#!/bin/bash
source /home/ishan/venv/bin/activate
nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
deactivate
When I run it using
source /home/ishan/my_repo/test_runner.sh, it shows the correct output.
So I tried to put it in a Jenkins build step, adding the same line
source /home/ishan/my_repo/test_runner.sh in the command section of Execute Shell in Jenkins.
Now, when I trigger the build using Build Now, it fails, saying:
Started by user anonymous
Building in workspace /var/lib/jenkins/workspace/jenkins_test
[jenkins_test] $ /bin/sh -xe /tmp/hudson5020664150857393715.sh
+ source /home/ishan/sf_shared/test_runner.sh
/tmp/hudson5020664150857393715.sh: 2: /tmp/hudson5020664150857393715.sh: source: not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I think it doesn't even execute any test cases; it fails long before that.
Can you suggest what I am doing wrong?
Maybe this will work:
/home/ishan/venv/bin/nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
Resolved it; the issue was with the following line:
source /home/ishan/venv/bin/activate
I replaced source with the standard . and then it worked. So my line became:
. /home/ishan/venv/bin/activate
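For completeness, here is the corrected script, assembled from the lines above. The root cause is visible in the log: Jenkins executes the build step with /bin/sh (note the /bin/sh -xe line), where source is not a builtin; . is the POSIX-standard equivalent:
#!/bin/bash
. /home/ishan/venv/bin/activate
nosetests --exe /home/ishan/sf_shared/my_repo/jenkin_test/
deactivate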
Following the accepted answer to Running phpunit tests using HHVM (HipHop), I attempted to run some tests:
unit-tests/ [develop] > hhvm $(which phpunit) --colors -c phpunit.xml --testsuite all .
/usr/bin/env php -d allow_url_fopen=On -d detect_unicode=Off /usr/local/Cellar/phpunit/4.3.4/libexec/phpunit-4.3.4.phar $*
It appears that this is a command to run the tests (which it does), but I'm confused about
why it's printing this command instead of just running the tests
whether executing that command even uses HHVM, since it starts with /usr/bin/env php...
Does anyone have any insight into this? Thanks so much!
What happened here is that you installed PHPUnit using Homebrew, so it created a wrapper script for the actual PHAR file. That wrapper is a Bash script that runs the PHPUnit PHAR, and that script is what you're trying to get HHVM to run. Since it's not a PHP or Hack script, the Bash script is output directly.
Instead, you probably want to try to execute $(brew --prefix phpunit)/libexec/phpunit*.phar
e.g.: hhvm $(brew --prefix phpunit)/libexec/phpunit*.phar --colors -c phpunit.xml --testsuite all .
The wildcard is so that you don't need to specify the version of PHPUnit being used.
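To check what the wildcard actually matches on your machine (assuming the Homebrew keg layout described above):
ls $(brew --prefix phpunit)/libexec/*.phar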
I am using the Django unit test framework to test my application.
Whenever I execute all the test cases, I get very brief information about the test cases that ran successfully:
----------------------------------------------------------------------
Ran 252 tests in 8.221s
OK
This is very little information. I would like more information about each test case, e.g.:
the time taken by each test case to execute,
the successful completion of each test module,
and so on.
Is there a debug parameter (or any other parameter) that can enable this extended information about the test cases that were executed?
NOTE: using the verbosity parameter does not satisfy my needs.
Each Django command has a --help option. If you type
python manage.py test --help
you will see all the available options for the test command:
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on exception
--noinput Tells Django to NOT prompt the user for input of any
kind.
--failfast Tells Django to stop running the test suite after
first failed test.
--testrunner=TESTRUNNER
Tells Django to use specified test runner class
instead of the one specified by the TEST_RUNNER
setting.
--liveserver=LIVESERVER
Overrides the default address where the live server
(used with LiveServerTestCase) is expected to run
from. The default value is localhost:8081.
-t TOP_LEVEL, --top-level-directory=TOP_LEVEL
Top level of project for unittest discovery.
-p PATTERN, --pattern=PATTERN
The test matching pattern. Defaults to test*.py.
--version show program's version number and exit
-h, --help show this help message and exit
As you can see, you can set the verbosity level by adding -v [level].
Try it, for example, with: python manage.py test -v 3
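At verbosity 2 or higher, the underlying unittest runner prints one line per test in this general shape (illustrative output only; the test and module names are made up):
test_method (myapp.tests.SampleTest) ... ok
Verbosity alone still won't give you per-test timings, though, which is where the runner mentioned below comes in.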
If you want the time for each test, plus some extra info, check out django-juno-testrunner as one option. We wanted more info out of our test runs, so we built it.
Note that it's Django 1.6+ only at the moment.