I am using the Django framework for my project, and in order to move to continuous integration I am planning to use Jenkins. Naturally, django-jenkins is the choice.
I am using the Django unit test framework for unit testing, with pattern matching for test case discovery:
./manage.py test --pattern="*_test.py"
I have installed and configured django-jenkins and all the other necessary modules. Now when I run the unit test cases through Jenkins, Jenkins is not able to discover them:
./manage.py jenkins
Is there a naming convention to follow for the unit test files or the test cases themselves?
I also could not find any pattern-matching parameter to use with the jenkins command.
All options from the standard Django test runner should work (https://github.com/kmmbvnr/django-jenkins/pull/207), but I have not tested them all.
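For reference, here is a minimal settings.py sketch for django-jenkins; the app name is a placeholder, and each report task needs its backing tool installed:

# settings.py -- minimal django-jenkins configuration sketch
INSTALLED_APPS = [
    # ... your regular apps ...
    'django_jenkins',
]

# Only the apps listed here are tested/reported by ./manage.py jenkins
PROJECT_APPS = [
    'myapp',  # placeholder app name
]

# Optional static-analysis report tasks
JENKINS_TASKS = (
    'django_jenkins.tasks.run_pylint',
    'django_jenkins.tasks.run_flake8',
)

If your installed version includes the pull request above, the standard discovery option should pass straight through, e.g. ./manage.py jenkins --pattern="*_test.py" (verify against your version).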
Related
What is the right way to run the unit test cases for Django as part of the build process?
We use a Jenkins pipeline to build the Docker image, and the container is started using a startup script.
Do I need to call manage.py test before the nginx container is started?
Or do I need to add a post-build task in the Jenkins pipeline to run the tests after the container has started?
So basically I am looking for the best practice for running tests: should the unit tests run before the server has started or after?
I know it makes more sense to run the unit tests before starting nginx, but won't that add more and more time to the build as test cases are added in the future?
It depends on your test cases. If you are running unit tests only, you don't need the server running. If you are doing something more in your tests, for example calling your APIs (functional testing, etc.), a good approach in my opinion is to create different stages in your Jenkinsfile where you first build the Docker image, then run the tests, and then decide what to do depending on the test results.
I see this as a good thing because you will be running the tests over your app inside the same container (same conditions) it will use in a production environment. Another good practice would be to add some plugins to Jenkins and get some reports (e.g. coverage).
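To make the staging concrete, here is the same logic sketched as a plain Python driver script rather than a Jenkinsfile; the image tag and test command are assumptions you would adapt to your project:

import subprocess
import sys

IMAGE = "myapp:ci"  # hypothetical image tag

def run(cmd):
    # Echo the command, run it, and report its exit code
    print("+ " + " ".join(cmd))
    return subprocess.run(cmd).returncode

# Stage 1: build the same image that would go to production
if run(["docker", "build", "-t", IMAGE, "."]) != 0:
    sys.exit("build failed")

# Stage 2: run the test suite inside the image just built,
# so the tests see the same conditions as production
if run(["docker", "run", "--rm", IMAGE, "python", "manage.py", "test"]) != 0:
    sys.exit("tests failed")

# Stage 3: only reached when the tests pass -- tag, push, or deploy here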
I set up a new Flask Python server and created a Dockerfile with all my code. I've written some unit tests, which I'm executing locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to check that I'm querying the database correctly, that the endpoint is exposed correctly, and so on). When should I execute them in a CI/CD pipeline?
I was thinking of executing them during the docker build, putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
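As an illustration of that kind of self-contained unit test, here is a sketch using only the standard library; the repository class is hypothetical:

import sqlite3
import unittest

class UserRepo:
    # Hypothetical data-access class under test; accepts any DB-API connection
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class UserRepoTest(unittest.TestCase):
    def test_add_and_count(self):
        # Embedded in-memory SQLite stands in for the real database
        repo = UserRepo(sqlite3.connect(":memory:"))
        repo.add("alice")
        self.assertEqual(repo.count(), 1)

if __name__ == "__main__":
    unittest.main()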
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
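For instance, once docker run or Compose has the stack listening, an integration test can be as simple as hitting the exposed endpoint from outside; the URL and the /health endpoint here are assumptions about your app:

import json
import unittest
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed port mapping from docker run/Compose

class HealthEndpointTest(unittest.TestCase):
    def test_health_endpoint(self):
        # Talks to the already-running container over real HTTP
        with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as resp:
            self.assertEqual(resp.status, 200)
            body = json.loads(resp.read())
        self.assertEqual(body.get("status"), "ok")

if __name__ == "__main__":
    unittest.main()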
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
How can I run Django unit tests in Microsoft TFS automatically?
By default, Python/Django testing is not integrated into TFS.
However, vNext builds (TFS 2015 and later) have more flexible steps, so you can use the Command Line task or the Utility: Batch Script task to run pytest or the Django test command. That means that if you can run the Django unit tests from the command line, then you can run them in TFS.
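For TFS to pick the results up, it helps if the command-line run emits JUnit-style XML that a Publish Test Results step can consume. One way is the third-party unittest-xml-reporting package; the setting names below come from its Django integration, so verify them against your version:

# settings.py -- write XML reports that "Publish Test Results" can read
# (pip install unittest-xml-reporting)
TEST_RUNNER = 'xmlrunner.extra.djangotestrunner.XMLTestRunner'
TEST_OUTPUT_DIR = 'test-reports'
TEST_OUTPUT_FILE_NAME = 'results.xml'

The Command Line task then just runs python manage.py test, and the publish step picks up test-reports/results.xml.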
Please reference this article to do that: Running Python Unit Test in a VSTS/TFS build
Other articles for your reference on running Django unit tests:
Writing and running tests
DJANGO UNIT TEST CASES WITH FORMS AND VIEWS
Setting a Full Testing Framework for Django (and more!)
Besides, you can also try this extension: Python Test
I'm working on a Django package created using cookiecutter from this repository. It uses nose tests, with a script (runtests.py) that generates settings on the fly. This works brilliantly. I'd like to fully integrate these tests into PyCharm's test runner. I can point the Nose test plugin at this script, and it executes and gets the correct test feedback, but in a way that PyCharm can't usefully identify which tests are failing when a test fails. My project is on github here.
Any hints on how to integrate the two nicely?
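For context, the settings-on-the-fly pattern in such a runtests.py usually looks something like the sketch below (shown with Django's stock DiscoverRunner rather than nose, and with a placeholder app name). The fact that settings only exist once this script has run is exactly why pointing an IDE's test runner at the test modules directly tends to fail:

# runtests.py -- sketch of the settings-on-the-fly pattern; 'myapp' is a placeholder
import sys

import django
from django.conf import settings

settings.configure(
    DEBUG=True,
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3"}},
    INSTALLED_APPS=[
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "myapp",
    ],
)
django.setup()

from django.test.runner import DiscoverRunner

failures = DiscoverRunner().run_tests(["myapp"])
sys.exit(bool(failures))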
I have a project that I usually debug/run using the grails run-app command. I would like to run a test without the server being started again just for that specific test.
I usually run the server in debug mode all the time in the background.
I've tried playing around with the run configurations in IntelliJ, my latest attempt being grails test-app functional: className.
In Grails 2.4.4 if you override the baseUrl you can run your tests against a server other than localhost. For example, we have a pre-production server hosted on AWS and we run a subset of our functional tests against it from Jenkins, post-deploy, as a smoke test.
grails -plain-output test-app -baseUrl=https://foo.bar.org/ -echoOut -echoErr functional:
That works, but test-app still starts the embedded Tomcat server. However, with a bit of digging, I found that overriding the server host to point to a running instance will cause the tests to run without starting the embedded Tomcat. There are a couple of ways to accomplish this:
Pass the value on the command line:
grails -plain-output test-app -Dgrails.server.host=foo.bar.org -baseUrl=https://foo.bar.org/ -echoOut -echoErr functional:
Or, overriding the value in Config.groovy for the specific environment should also work:
...
preProd {
    ...
    grails.server.host = 'foo.bar.org'
    ...
}
...
It isn't documented under test-app, but it is mentioned under run-app and it turns out it works for test-app too.
This works because Grails determines if the embedded server should be started by trying to open a connection to the server host/port, and if successful skips the startup.
Through trial and error I have found that I get the best results when I specify both grails.server.host and -baseUrl, even though the base URL becomes redundant information. Possibly this is because my case involves an SSL connection; I tried running with -https instead of -baseUrl=... and the tests just hung.
Maybe this plugin helps you:
Grails functional-test-development Plugin
This plugin aims to make developing functional tests for Grails more convenient by facilitating running your tests against an already running application. It utilises the improved functional testing support added in Grails 1.3.5 and does not work with earlier versions.