I am writing an app that records the time of events. For unit testing I would usually monkey-patch datetime.time with a fake so I can test it properly. I am trying to do end-to-end tests with Selenium, with the test cases in a separate program, not run through python manage.py test, so I can't apply a patch. I did try using manage.py, but it did not seem to help.
I'm sure this is a solved problem. How should I be doing it? Is Selenium just not the right tool for this sort of testing? Am I missing how to get the test case to talk to the application?
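For illustration, the kind of monkey-patch I'd normally use in a unit test looks roughly like this (the module and helper names are invented, and it assumes the code under test does from datetime import datetime):

import datetime
import myapp.events  # hypothetical module under test

class FakeDateTime(datetime.datetime):
    @classmethod
    def now(cls, tz=None):
        return cls(2020, 1, 1, 12, 0, 0)

def test_current_time_is_frozen(monkeypatch):
    # Swap the datetime class as seen by the module under test.
    monkeypatch.setattr(myapp.events, "datetime", FakeDateTime)
    assert myapp.events.current_time() == FakeDateTime(2020, 1, 1, 12, 0, 0)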
Selenium talks to a full webserver and has no access to the Python interpreter running inside that webserver. Even if you are scripting Selenium RC with Python, the script's instance of the interpreter is separate from the webserver's instance.
If you are running the test webserver via manage.py runserver, you could write your own management command that replaces 'runserver' with a version that patches datetime.time. This won't be easy, so you may consider either revising your Selenium-driven tests to cope with events happening in real time, or converting your time-sensitive tests to Django client tests so you can use the mock library.
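For the second option, a minimal sketch of a Django client test with mock — the module path, URL, and view behaviour here are assumptions, not taken from the original project:

from datetime import datetime
from unittest import mock
from django.test import TestCase

class EventTimeTest(TestCase):
    # "events.views" and the URL are placeholders; assumes the view does
    # "from datetime import datetime" and calls datetime.now().
    @mock.patch("events.views.datetime")
    def test_event_recorded_with_fixed_time(self, fake_dt):
        fake_dt.now.return_value = datetime(2020, 1, 1, 12, 0, 0)
        self.client.post("/events/", {"name": "demo"})
        # ... then assert that the stored event carries the frozen timestamp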
Related
What is the right way of running the unit test cases for Django as part of the build process?
We use a Jenkins pipeline to build the Docker image, and the container is started by a startup script.
Do I need to run manage.py test before the nginx container is started?
or
Do I need to add a post-build task in the Jenkins pipeline to run the tests after the container has started?
So basically I am looking for the best practice for running tests. Should we run unit tests before the server has started or after?
I know it makes more sense to run unit tests before we start nginx, but won't that increase build time as more and more test cases are added in the future?
Depends on your test cases. If you are running unit tests only, you don't need the server running. If you are doing something more in your tests, for example calling your APIs (functional testing, etc.), a good approach (in my opinion) is to create different stages in your Jenkinsfile where you first build the Docker image, then run the unit tests, then decide what to do depending on the test results. I see this as a good thing because you will be running the tests over your app inside the same container (same conditions) it will be running in, in a production environment. Another good practice would be to add some plugins to Jenkins and generate reports (e.g. coverage).
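As a rough sketch, the first two stages boil down to commands like these (the image tag is just an example, and it assumes the image's working directory contains manage.py):

docker build -t myapp:ci .
docker run --rm myapp:ci python manage.py test

The exit status of the test step then gates whatever comes next: tagging and pushing the image, deploying, and so on.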
I set up a new Flask Python server and created a Dockerfile with all my code. I've written some unit tests, and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to test whether I'm querying the database correctly, whether the endpoint is exposed correctly, and so on). When should I execute those in a CI/CD pipeline?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
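A unit test in that spirit needs nothing outside the process. A minimal sketch, with an invented table and function:

import sqlite3

def count_events(conn):
    # Stand-in for application code that would normally query the real DB.
    return conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]

def test_count_events():
    conn = sqlite3.connect(":memory:")  # embedded database, no server required
    conn.execute("CREATE TABLE events (name TEXT)")
    conn.execute("INSERT INTO events VALUES ('demo')")
    assert count_events(conn) == 1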
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
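For instance, with a Compose file in place, the tail end of the CI pipeline might be no more than this (the port, health endpoint, and test directory are assumptions):

docker-compose up -d
curl -f http://localhost:8000/health   # assumed health-check endpoint
pytest integration_tests/              # tests talk to the running app over HTTP
docker-compose down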
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
So I have got my app up and running. However, it still runs from the command line at the moment. The next step is for me to build a simple web app interface.
After much research, rather than setting up an entire Flask site from scratch, I decided to use the cookiecutter-flask boilerplate from https://github.com/konstantint/cookiecutter-flask to get up and running quickly.
Everything looks good, in the sense that I understand:
Templating
App function
Static
I still cannot figure out how to get the user registration function working. I keep getting a WSGI error. I know it is somewhat related to my database not being set up.
Beyond that specific issue, what I am really looking for is a walkthrough tutorial on how to get a bare-minimum version working and then enhance it from there.
I have been looking around for tutorials and walkthroughs, but to no avail.
Appreciate any help out there.
After cloning you have to do these steps; they will create the database tables for you:
python manage.py db init       # create the migrations directory
python manage.py db migrate    # generate a migration from the current models
python manage.py db upgrade    # apply it, creating the database tables
python manage.py server        # start the development server
I'm working on a Django package created using cookiecutter from this repository. It uses nose tests, with a script (runtests.py) that generates settings on the fly. This works brilliantly. I'd like to fully integrate these tests into PyCharm's test runner. I can point the Nose test plugin at this script, and it executes and reports the correct test feedback, but in a way that PyCharm can't usefully identify which tests are failing when a test fails. My project is on GitHub here.
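For reference, the script is roughly this shape (simplified; the app name and settings are illustrative, not the exact contents of the repository):

import sys
import django
import nose
from django.conf import settings

# Generate minimal settings on the fly instead of shipping a settings.py.
settings.configure(
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                           "NAME": ":memory:"}},
    INSTALLED_APPS=["django.contrib.contenttypes",
                    "django.contrib.auth",
                    "myapp"],
)
django.setup()

if __name__ == "__main__":
    # Hand argv to nose so it discovers and runs the package's tests.
    sys.exit(0 if nose.run(argv=["nosetests", "myapp"]) else 1)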
Any hints on how to integrate the two nicely?
I am using the Django framework for my project, and in order to move to continuous integration I am planning to use Jenkins. Naturally, django-jenkins is the choice.
I am using the Django unit test framework for unit testing, with pattern matching for test case discovery:
./manage.py test --pattern="*_test.py"
I have installed and configured django-jenkins and all the other necessary modules. Now, when I run Jenkins to execute the unit tests, it is not able to discover the test cases:
./manage.py jenkins
Is there some naming convention to be followed for the unit test files or for the test cases themselves?
I also could not find any pattern-matching parameter to use with the jenkins command.
All options from the standard Django test runner should work (https://github.com/kmmbvnr/django-jenkins/pull/207), but I've never tested them all.
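If passing the pattern through doesn't work for you, renaming the files to match Django's default discovery pattern (test*.py) sidesteps the problem entirely, e.g. moving the cases from foo_test.py into a file like:

# tests/test_foo.py -- matches the default test*.py discovery pattern
from django.test import TestCase

class FooTests(TestCase):
    def test_smoke(self):
        self.assertTrue(True)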