Nose tests for django app in pycharm

I'm working on a django package created using cookiecutter from this repository. It uses nose tests, with a script (runtests.py) that generates settings on the fly. This works brilliantly. I'd like to fully integrate these tests into PyCharm's test runner. I can point the Nose test plugin at this script, and it runs and reports the correct overall results, but not in a way that lets PyCharm identify which individual tests are failing. My project is on github here.
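For reference, the script is broadly of this shape (a simplified sketch with placeholder settings, not the actual contents of the repo linked above): it configures Django settings in code and hands test discovery to django-nose.

    import sys

    import django
    from django.conf import settings
    from django.test.utils import get_runner

    # Placeholder settings -- the real script builds these on the fly.
    settings.configure(
        DEBUG=True,
        DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
        INSTALLED_APPS=[
            "django.contrib.contenttypes",
            "django.contrib.auth",
            "django_nose",
            # plus the package under test, e.g. "myapp"
        ],
        TEST_RUNNER="django_nose.NoseTestSuiteRunner",
    )


    def run_tests(*test_args):
        if not test_args:
            test_args = ["tests"]
        django.setup()
        TestRunner = get_runner(settings)
        failures = TestRunner(verbosity=1).run_tests(test_args)
        sys.exit(bool(failures))


    if __name__ == "__main__":
        run_tests(*sys.argv[1:])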
Any hints on how to integrate the two nicely?

Related

Can/should I run Jest alongside webpack?

I am pretty new to unit testing, and I am trying to understand the best practice for whether, and if so how, to run Jest alongside webpack.
For context, I am used to using ESLint in the following way. I use eslint-webpack-plugin and configure it so that webpack outputs an error and/or fails the build if there is a linting error. I use this setup for both the development build (using webpack-dev-server) and the production build so that I can be made aware of and address linting issues as they arise. I also use lint-staged and husky to set up a pre-commit hook that runs ESLint before commits for a similar reason.
So, my inclination when learning Jest was to use a similar setup, where tests are run as part of the webpack compilation process and errors are obvious and intrusive, so that I can address them as they arise. I tried following the Babel and webpack tutorials on the Jest site, but I cannot get webpack to throw any errors, and to be honest I'm not sure it's running Jest at all. I looked at how create-react-app and create-next-app set up Jest. They both include an npm script for testing, but it seems users are expected to run that script manually, separately from the dev/build processes, or as part of a CI workflow.
Any advice appreciated!

How can I run Django unit test in Microsoft TFS automatically?

How can I run django unit test in Microsoft TFS automatically?
By default, Python/Django testing is not integrated into TFS.
However, vNext builds (TFS 2015 and later) have more flexible steps, so you can use the Command Line task or the Utility: Batch Script task to run pytest or the Django test runner. In other words, if you can run your Django unit tests from the command line, you can run them in TFS.
Please reference this article to do that: Running Python Unit Test in a VSTS/TFS build
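For example (a sketch of one approach, not taken from that article): with the unittest-xml-reporting package installed on the build agent, settings.py can point Django at its XML test runner, so that a plain "python manage.py test" in the Command Line task writes JUnit-style XML that a Publish Test Results step can pick up.

    # Sketch only: assumes "pip install unittest-xml-reporting" on the build agent.
    # The Command Line task runs "python manage.py test"; the runner below writes
    # JUnit-style XML into test-reports/ for a Publish Test Results step.
    TEST_RUNNER = "xmlrunner.extra.djangotestrunner.XMLTestRunner"
    TEST_OUTPUT_DIR = "test-reports"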
Other articles for reference on running Django unit tests:
Writing and running tests
DJANGO UNIT TEST CASES WITH FORMS AND VIEWS
Setting a Full Testing Framework for Django (and more!)
Besides, you can also try this extension: Python Test

PyCharm coverage only measures test directory for a GAE app with Selenium tests

I'm developing a GAE app in Python using the PyCharm IDE. I have developed an initial set of Selenium WebDriver unit tests, which live in a directory "test" at the top level of the project. Note that my tests do not directly call the GAE modules; they only use Selenium, which in turn exercises the app through a browser.
When I use menu option "Run 'Unittests in test' with coverage", I get a report showing coverage within directory "test", and also the colors show up in the left margin for the Python modules in "test". The information looks accurate. Very cool!
However, the rest of the files in my project show "not covered", even though the tests exercise a lot of that code via Selenium remote control of a browser.
In settings, I've tried checking "Use bundled coverage.py" and, after running sudo pip install coverage, I've also tried unchecking "Use bundled coverage.py". My results are the same either way.
Is it possible that you simply cannot obtain coverage of a GAE app being exercised via Selenium?
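My understanding is that coverage has to be collected inside the process that actually executes the code, which here is the dev server rather than the test runner. Something like coverage.py's documented subprocess hook is what I suspect would be needed (a sketch, untried here; it assumes the server process imports sitecustomize and that COVERAGE_PROCESS_START points at a .coveragerc before the server starts):

    # sitecustomize.py, importable by the server process (e.g. on its PYTHONPATH).
    # coverage.py starts measuring as soon as the process comes up, provided the
    # COVERAGE_PROCESS_START environment variable names a .coveragerc file.
    import coverage

    coverage.process_startup()

Even then, the data files would have to be combined afterwards (coverage combine), and PyCharm's own "run with coverage" would still only see the test-runner process.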

Test cases discovery using django-jenkins

I am using the Django framework for my project, and in order to move to continuous integration I am planning to use Jenkins. Naturally, django-jenkins is the choice.
I am using the Django unit test framework for unit testing, and pattern matching for test case discovery.
./manage.py test --patterns="*_test.py"
I have installed and configured django-jenkins and all the other necessary modules. Now, when I run the jenkins command to execute the unit tests, Jenkins is not able to discover the test cases.
./manage.py jenkins
Is there some naming convention to be followed for the unit test files or the test cases themselves?
I also could not find any pattern-matching parameter to use with the jenkins command.
All options from the standard Django test runner should work (https://github.com/kmmbvnr/django-jenkins/pull/207), but I've never tested them all.
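A sketch of the settings this implies (app and task names below are illustrative; whether the pattern option is forwarded to the runner depends on your django-jenkins version, per the pull request above):

    # settings.py -- illustrative django-jenkins configuration
    INSTALLED_APPS += ["django_jenkins"]

    # Only these apps are tested and reported on by ./manage.py jenkins
    PROJECT_APPS = ["myapp"]  # hypothetical app name

    JENKINS_TASKS = (
        "django_jenkins.tasks.run_pep8",
        "django_jenkins.tasks.run_pyflakes",
    )

./manage.py jenkins can then be invoked with the same pattern option you already pass to ./manage.py test.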

How to test time-related functions in Django?

I am writing an app that records the time of events. For unit testing I would usually use a monkey patch to replace datetime.time with a fake so I can test it properly. However, I am trying to do end-to-end tests with Selenium, with the test cases in a separate program rather than under python manage.py test, so I can't apply a patch. I did try using manage.py, but it did not seem to help.
I'm sure this is a solved problem. How should I be doing it? Is Selenium just not the right tool for this sort of testing? Am I missing how to get the test case to talk to the application?
Selenium talks to a full webserver and has no access to the Python interpreter running inside that webserver. Even if you are scripting Selenium RC with Python, the script's instance of the interpreter is separate from the webserver's instance.
If you are running the test webserver via manage.py runserver, you could write your own management command that replaces 'runserver' with a version that patches datetime.time. This won't be easy, so you may consider either revising your Selenium-driven tests to cope with events happening in real time, or converting your time-sensitive tests to Django client tests so you can use the mock library.
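For the last option, a minimal sketch of such a client test, assuming the view reads the clock via django.utils.timezone (the "events" module, URL, and form data below are placeholders):

    # Testing time-dependent behaviour through the Django test client instead of
    # Selenium, so the clock can be patched in the same process as the view code.
    from datetime import datetime, timezone
    from unittest import mock

    from django.test import TestCase


    class EventTimestampTest(TestCase):
        @mock.patch("events.views.timezone")  # hypothetical module under test
        def test_event_is_stamped_with_frozen_time(self, fake_timezone):
            frozen = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
            fake_timezone.now.return_value = frozen

            self.client.post("/events/", {"name": "lunch"})

            # The view saw `frozen` as "now", so assertions about the stored
            # timestamp can compare against a known value instead of wall-clock time.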