Generate Python unit test document

My bosses want a list of all the unit tests with their descriptions. Since this will change frequently I'd like to find a way to generate it instead of trying to manually keep it up to date. I am using python for this project. Is there some way to make doxygen or some other tool do this?
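One approach, if the tests are written with unittest and carry docstrings, is to let unittest's own discovery build the list. Below is a minimal sketch along those lines; the tests directory name and the output format are assumptions, and shortDescription() simply returns the first line of each test's docstring.

```python
# A minimal sketch: discover unittest tests and print each test id together
# with the first line of its docstring as the description.
# Assumption: tests live under ./tests and follow the default test*.py pattern.
import unittest


def iter_tests(suite):
    """Recursively yield individual test cases from a (possibly nested) suite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item


def main():
    suite = unittest.defaultTestLoader.discover("tests")
    for test in iter_tests(suite):
        description = test.shortDescription() or "(no description)"
        print("{}: {}".format(test.id(), description))


if __name__ == "__main__":
    main()
```

Piping that output into whatever document format is wanted (CSV, HTML, a wiki page) is then a small formatting step that can run on every build.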

Related

What is the equivalent of autotest/guard for django

When I code in Ruby on Rails, I rely on Guard to listen for changes to the code base so when I'm writing tests, I don't need to manually run the tests in the file I'm working on each time.
https://github.com/guard/guard-rspec
What is the closest thing to this for Django so I can enjoy the same workflow?
Specifically, what I want to do is be able to have tests run automatically, where the tool will:
run tests based on the files I have changed, not the whole suite
know whether to run the test command based on whether a test run is already taking place
work with existing tests written with unittest
work with something like factory_boy to let me use factories instead of fixtures (a minimal sketch of this piece appears below)
I've used nose and pytest before and I'm comfortable using both, but I haven't used many of pytest's extensive set of libraries.
What are my options for this?
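For the factory_boy requirement specifically, here is a minimal sketch of a unittest-style Django test that uses a factory instead of a fixture; the Author model, its fields, and its __str__ behaviour are hypothetical:

```python
# A minimal sketch, assuming a hypothetical myapp.models.Author model.
# The test is plain unittest-style, so it works with Django's own test
# runner or with pytest-django.
import factory
from django.test import TestCase

from myapp.models import Author  # hypothetical model


class AuthorFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Author

    name = factory.Sequence(lambda n: "Author {}".format(n))


class AuthorTests(TestCase):
    def test_author_str(self):
        # Assumes Author.__str__ returns the name field.
        author = AuthorFactory()
        self.assertEqual(str(author), author.name)
```

Any file-watching tool that shells out to the normal test command should be able to run tests written this way.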

How to list available test classes

Is there a way to get Gradle (1.12) to list all of the available unit test classes in a project?
I'm considering putting a front-end on a series of tests we use in my company, and since new tests are always being added, I need a way to get a list of available tests.
I realize that I could scan the actual project for classes that reside in the test sources tree, but I was hoping for something easily parsed from Gradle. I just don't know if that's really an option and I'm having trouble getting decent search results since "test" is such a generic word.
Any help would be appreciated.
There is no official API in Gradle that exposes this information. You could check whether ClassScanner.java does what you need; either look at the Gradle sources, or see how it is used in EclipseTestExecuter.java. Keep in mind that it is an implementation detail.
A simpler approach is to run the tests with logging enabled so that the names of the executed tests are printed. There is an example of how to do this in the Gradle documentation, I think.

parsing django test results

I recently wrote some tests for one of my Django projects. What I now want to do is call the test command from a script.
I am looking to parse the test results and save them. Is that at all possible with the Django testing framework?
The easiest way would be to use a standard test output format, such as JUnit XML, for which there are already libraries. Right now, I'm using django-jenkins, which provides a nice output that I can view in our CI tool.
If you'd like to roll your own solution, I'd recommend writing your own test runner and customizing the suite_result method.
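For reference, here is a minimal sketch of such a runner, assuming a Django version that ships DiscoverRunner; the module path and the output filename are placeholders:

```python
# myproject/test_runner.py (hypothetical location)
# Writes a JSON summary of the run by customizing suite_result.
import json

from django.test.runner import DiscoverRunner


class RecordingTestRunner(DiscoverRunner):
    def suite_result(self, suite, result, **kwargs):
        # `result` is a standard unittest TestResult, so the usual
        # attributes (testsRun, failures, errors) are available.
        summary = {
            "tests_run": result.testsRun,
            "failures": [str(test) for test, _ in result.failures],
            "errors": [str(test) for test, _ in result.errors],
        }
        with open("test-results.json", "w") as fh:
            json.dump(summary, fh, indent=2)
        return super().suite_result(suite, result, **kwargs)
```

Point TEST_RUNNER at it in settings.py (e.g. TEST_RUNNER = "myproject.test_runner.RecordingTestRunner") and the summary file is written at the end of each manage.py test run.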

Generate new code coverage for a single file without clearing all other coverage reports in PHPUnit

First the question:
In PHPUnit 3.5, is there a way to generate a coverage report for a single test without the report for the entire test suite being overwritten, i.e. to only update the coverage report for the affected files? I still want the output to go to the same folder.
For those that want a bit of background:
Working with PHPUnit 3.5, I have a project which retroactively needs to be covered with unit tests. Now in order to know which classes still need tests I run the entire test suite and generate a html coverage report on it. Because running the complete suite takes some time, I would like to avoid having to run it every time I want to check which tests still need to be implemented. But at the same time I also want the coverage report for the unit test that I'm currently working on, so that I can make sure I'm executing each line of code in a class (this of course is very fast back and forth, so it makes no sense to run the entire suite just to generate this report). I can generate the report for a single test, and I can generate it for the entire suite. But what I'm looking for is a hybrid, which would allow me to first generate a report for the entire suite, and then just update the report with coverage information for the test I'm currently working on.
I've set up a Ruby script which simply runs the test for the file I'm currently working on and generates a coverage report for that file. But working like that, it always resets the coverage report for all the other files too, even if the test did not execute anything in those classes.
Any ideas?
This isn't possible natively, but if you can figure out how to regenerate the HTML from the XML coverage data files, you could modify your script to do the following (a rough sketch of the first two steps appears below):
1. Copy the coverage XML for a full run to a staging area.
2. When running a single test, copy the new XML files over to that area. This won't merge coverage from two tests that cover the same class, but I'm guessing from your description that you're covering a single class from each test and vice versa.
3. Rebuild the HTML from the XML. You might be able to figure out how to do this by looking at the source, but I doubt it's possible natively either.
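Here is a rough sketch of the first two steps, written in Python for illustration; the directory names are assumptions, and the HTML rebuild from step 3 is left as a placeholder since there is no obvious native way to do it in PHPUnit 3.5:

```python
# Overlay single-test coverage XML onto a staging copy of the full-run XML.
# Directory names are assumptions; adapt to wherever PHPUnit writes the files.
import shutil
from pathlib import Path

FULL_RUN_XML = Path("coverage-full")      # XML from the full suite run
SINGLE_RUN_XML = Path("coverage-single")  # XML from the single-test run
STAGING = Path("coverage-staging")        # merged staging area


def overlay_coverage():
    # Seed the staging area from the full run once, then overwrite only the
    # files produced by the single-test run.
    if not STAGING.exists():
        shutil.copytree(FULL_RUN_XML, STAGING)
    for xml_file in SINGLE_RUN_XML.glob("*.xml"):
        shutil.copy2(xml_file, STAGING / xml_file.name)
    # Step 3 (rebuilding the HTML report from STAGING) would go here, if a way
    # to drive PHPUnit's HTML generation from existing XML can be found.


if __name__ == "__main__":
    overlay_coverage()
```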

Best Approach to verify public methods of a class have unit tests?

I have tried searching around the web and on SO, but haven't seen much discussion about it (or maybe I'm just not using the right keywords).
What I would like to do is write a script (or use a utility that already exists) to verify that a class or set of classes has unit tests written for them in the test project.
I've got a release coming up, and I want to make sure that all public methods of my business layer have unit tests. I'm trying to get everyone on board with TDD, but it hasn't happened yet.
I've got a pretty basic idea of how I would write a script to check this (open file, parse method signatures into some list, open corresponding test class file and check that each method in the list exists somewhere in the test file), but I wanted to see what other options are available.
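For concreteness, here is a crude sketch of that idea in Python, assuming C# sources; the regex, the file paths, and the whole name-matching approach are only a heuristic and will miss plenty of edge cases:

```python
# Heuristic check: list public methods in a class file whose names never
# appear in the corresponding test file. The regex and paths are illustrative.
import re
import sys
from pathlib import Path

PUBLIC_METHOD = re.compile(
    r"\bpublic\s+(?:static\s+|virtual\s+|async\s+)*[\w<>\[\],\s]+?\s+(\w+)\s*\("
)


def public_methods(source_path):
    """Return the set of public method names found in a C# source file."""
    return set(PUBLIC_METHOD.findall(Path(source_path).read_text()))


def untested_methods(class_file, test_file):
    """Return public methods whose names never appear in the test file."""
    test_text = Path(test_file).read_text()
    return sorted(name for name in public_methods(class_file) if name not in test_text)


if __name__ == "__main__":
    class_path, test_path = sys.argv[1], sys.argv[2]
    for name in untested_methods(class_path, test_path):
        print("No apparent test for: " + name)
```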
.NET code coverage tools, such as NCover and dotCover, already exist. I would use one of those and read their reports.