Django & coverage - .coveragerc omit not working - django

I have a Django project with multiple applications. In one application, I have a library that is already tested using unittest + coverage.
When I run the tests of the Django project, I would like to omit this folder.
My project architecture is:
project/
├── application1/
│   ├── tests.py
│   ├── views.py
│   ├── ...
│   └── my_lib/   << The lib I want to omit
│       ├── tests.py
│       ├── script.py
│       ├── __init__.py
│       └── ...
├── application2/
│   ├── tests.py
│   ├── views.py
│   └── ...
├── application3/
│   ├── tests.py
│   ├── views.py
│   └── ...
├── .coveragerc
├── runtest.bat
├── manage.py
└── ...
runtest.bat is:
CALL activate ./venv
coverage erase
coverage run manage.py test
coverage html
PAUSE
Based on several tutorials and SO questions, I've tried several .coveragerc files, but none of them properly skipped the library: the run still fails because the library's tests are imported with an incorrect relative path.
.coveragerc is:
[run]
omit =
    */application1/my_lib/*

[report]
exclude_lines =
    if __name__ == .__main__.:
show_missing = True
I also tried:

[run]
source = .
omit =
    /application1/my_lib/*

[run]
source = project/*
omit =
    */application1/my_lib/*

[run]
source = .
omit =
    */application1/my_lib/*
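For what it's worth, omit can also be passed directly on the command line, which rules out any doubt about whether the .coveragerc file is being picked up at all (a sanity check, not a fix for the import error):

coverage run --omit="*/application1/my_lib/*" manage.py test

Note that omit only controls what coverage measures and reports; it does not stop Django's test runner from discovering and importing application1/my_lib/tests.py, which is why the run can still fail even when the pattern is correct.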
Do you have any clue what I am doing wrong?
For information:
Django version is 2.2.5
Coverage version is 4.5.4
Python version is 3.7
A few sources:
Django Coverage
Coverage Documentation
SO question
Thanks in advance
EDIT 1:
Just as background: my_lib is mainly a file containing a class. This code is simply embedded in the application to be used "like" a standard library.
In application1/views.py, I simply have from . import my_lib.
In application1/my_lib/__init__.py, I have from .script import MyClass.
The objective is simply to be able to call my_lib.MyClass.do_something() in my view.
Now, the reason I would like to exclude this "library" from coverage is that it was developed outside the application. It has its own unit tests in application1/my_lib/tests.py, starting with from script import MyClass.
When I run the coverage from the root of the project, Python cannot find script.py in the root of the project, so it raises the error:
File "path_to/project/application1/my_lib/tests.py", line 3
from script import MyClass
ModuleNotFoundError: No module named script.
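Since application1/my_lib/__init__.py already imports with a package-relative path, the same style in the library's tests would make them importable from the project root. A minimal sketch (assuming tests.py stays inside the my_lib package):

# application1/my_lib/tests.py
from .script import MyClass

With that change, Django's test runner can import the module as application1.my_lib.tests regardless of the working directory.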
In the worst-case scenario, I could move this library into the site-packages of my virtual environment, but I would like to avoid that (there will probably be multiple similar my_lib folders, at most one per application).
EDIT 2:
As a temporary solution, I simply renamed application1/my_lib/tests.py to application1/my_lib/script_tests.py. Since no file name starts with test, test discovery ignores this folder entirely.
The other alternative, running the library's tests at the same time as the project's, would have required quite a lot of updates because of the relative paths used in several files.

Related

How do I run unit tests from smaller applications inside of a parent directory?

I'm working in a repository with multiple smaller applications for various lambdas. I'd like to be able to run cargo test from the top level directory, but I can't seem to find a way to get this to work since the files aren't nested within a top level src directory.
├── cloudformation
├── apps
│   ├── app1
│   │   └── src
│   └── app2
│       └── src
└── otherStuff
Ideally I could run cargo test from the top level and it would dig into apps and run tests from the src directory nested within each individual app. Is there a way to accomplish this?
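One approach worth sketching (hedged; it assumes app1 and app2 are each complete Cargo packages with their own Cargo.toml) is a workspace manifest at the top level, after which cargo test run from the root executes every member's tests:

# Cargo.toml at the repository root (hypothetical)
[workspace]
members = [
    "apps/app1",
    "apps/app2",
]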

How to generate coverage for multiple packages using go test in custom folders?

We have following project structure:
├── Makefile
├── ...
├── src
│   ├── app
│   │   ├── main.go
│   │   ├── models
│   │   │   ├── ...
│   │   │   └── dao.go
│   │   ├── ...
│   │   └── controllers
│   │       ├── ...
│   │       └── pingController.go
│   └── test
│       ├── all_test.go
│       ├── ...
│       └── controllers_test.go
└── vendor
    └── src
        ├── github.com
        ├── golang.org
        └── gopkg.in
I want to measure the coverage of the packages in src/app with the tests in src/test. Currently we generate the coverage profile by running a custom script that measures coverage for each package in app and then merges all the coverage profiles into one file. Recently I heard that since go1.10 we are able to generate coverage for multiple packages.
So I tried to replace that script with a one-liner, and ran
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all ./src/test/...
It gives me "ok test 0.475s coverage: 0.0% of statements in all"
When I do
cd src/test/
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all
The logs show that the specs are run and the tests are successful, but I still get "coverage: 0.0% of statements in all" and an empty cover.out.
What am I missing to properly compute coverage of packages in app by tests in test?
You can't with the current state of go test, but you can always use third-party scripts.
https://github.com/grosser/go-testcov
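That said, one variation worth trying before reaching for external scripts (a guess, assuming the app packages resolve as app/... under the custom GOPATH) is pointing -coverpkg at the packages you actually want measured instead of all:

GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg app/... ./src/test/...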
Short answer:
go test -race -coverprofile=coverage.txt -covermode=atomic ./... # Run all the tests with the race detector enabled
go test -bench=. -benchmem ./... # Run all the benchmarks and print memory allocation statistics
To create a test for Go code, you have to create a file in the same folder as the code under test that has the same name as that code, with "_test" appended to the name. The package has to be the same too.
So, if I have a Go file called strings.go, the related test suite has to be named strings_test.go.
After that, you have to create a function that takes t *testing.T
as input, and whose name starts with the word Test or Benchmark.
So, if strings.go contains a method called IsUpper, the related test case is a function called TestIsUpper(t *testing.T).
If you need a benchmark, then you substitute the Test word with Benchmark, so the name of the function will be BenchmarkIsUpper, and the parameter that the function takes as input is b *testing.B.
You can have a look at the following link to see the tree structure necessary for executing tests in Go: https://github.com/alessiosavi/GoGPUtils.
There you can find benchmarks and test cases.
Here is an example of the tree structure:
├── string
│   ├── stringutils.go
│   └── stringutils_test.go
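A minimal sketch of such a pair (hedged; this IsUpper is a stand-in implementation, not the one from the linked repository):

// stringutils.go
package stringutils

import "unicode"

// IsUpper reports whether every letter in s is upper case.
func IsUpper(s string) bool {
    for _, r := range s {
        if unicode.IsLetter(r) && !unicode.IsUpper(r) {
            return false
        }
    }
    return true
}

// stringutils_test.go
package stringutils

import "testing"

func TestIsUpper(t *testing.T) {
    if !IsUpper("HELLO") {
        t.Error(`IsUpper("HELLO") = false, want true`)
    }
}

func BenchmarkIsUpper(b *testing.B) {
    for i := 0; i < b.N; i++ {
        IsUpper("HELLO")
    }
}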

How to organize test resource files when using waf

I am using Google Tests in a project with waf as a build system. I want to know an effective way of dealing with resource files.
For a directory structure like the following:
MyProject
├── build
├── src
│   ├── do_something.cpp
│   └── do_something.h
├── test
│   ├── unit_test.cpp
│   └── resources
│       ├── input1.txt
│       └── input2.txt
└── wscript
After building, to run the tests from the build directory, I would need the resource files to be copied across. The build directory would look like:
MyProject
├── build
│   ├── test
│   │   ├── resources
│   │   │   ├── input1.txt
│   │   │   └── input2.txt
│   │   └── unit_test
To achieve this, my current wscript is:
def options(opt):
    opt.load('compiler_cxx')

def configure(conf):
    conf.load('compiler_cxx')

def build(bld):
    bld.stlib(source='src/do_something.cpp',
              target='mylib',
              includes='src')
    bld.exec_command("cp -r test/resources build/test")
    bld.program(source='test/unit_test.cpp',
                includes='src',
                target='test/unit_test',
                use='mylib')
Using bld.exec_command like this seems hacky. What is a better way? How do other people organize their test resources with waf?
I am using waf 1.9.5.
The easiest way is of course to change your program so that it can read resources from an arbitrary location. Why litter the file system with multiple copies of the same files?
That said, you can easily copy a directory tree recursively using:
for f in ctx.path.ant_glob('test/resources/**/*'):
    ctx(features='subst', source=f.srcpath(), target=f.srcpath())
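Folded into the wscript from the question, the copy step becomes part of the build graph instead of a shell command. A sketch (is_copy=True is waf's flag for copying files verbatim rather than running substitutions on them):

def build(bld):
    bld.stlib(source='src/do_something.cpp',
              target='mylib',
              includes='src')
    # declare each resource file as a build product copied from the source tree
    for f in bld.path.ant_glob('test/resources/**/*'):
        bld(features='subst', source=f.srcpath(),
            target=f.srcpath(), is_copy=True)
    bld.program(source='test/unit_test.cpp',
                includes='src',
                target='test/unit_test',
                use='mylib')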

How to use models between multiple apps?

I'm building a Django app and I was asked to split apart models.py and put the resulting models in a 'models' folder. I did that and it works fine.
I was then asked to move the models folder to the Project level so that the models can be used by other apps.
The file structure would look like this (taken from the Django tutorial):
mysite
├── mysite
├── polls
├── db.sqlite3
├── manage.py
└── models   <- Have the poll app's models in there?
However, everything I have read suggests this is heavily frowned upon and that models should live at the app level, not the project level.
So, what is the best way to handle this in Django -- to let models be used by multiple applications? Some of the things I have read suggest importing models between apps or changing the db_table option of the models' Meta class.
A majority of the posts I have found are from 5+ years ago, though, so I'm not sure what the best current approach is.
If I import models into a new app and then do something with them, do they point to the same database as the app they were imported from, or is a new database created for the new app?
Thanks for the help. Very appreciated.
I was wondering the same thing and saw you got a decent answer from reddit, but thought I'd throw my 2 cents in.
You could refactor the models into individual apps.
tl;dr
Some assumptions about your situation
At first, the models were split out from models.py into individual Python files to improve maintainability
The models were moved up to the project level to minimize impact to the models by changes in the applications (or vice versa)
So for an example:
mysite
├── polls
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models
│   │   ├── __init__.py
│   │   ├── foo.py
│   │   └── bar.py
│   ├── tests.py
│   └── views.py
├── db.sqlite3
├── manage.py
└── mysite
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
We have 2 models (foo and bar) that we separated for some logical reason.
IMHO, I think that overall maintainability could benefit from putting the foo and bar models in their own apps IF they satisfy 2 criteria:
They are categorically different
They are used by multiple applications
At this point, the models can be imported by other apps in your site just as before, but the layout looks more like Django best practices.
New Layout:
mysite
├── polls
├── foo
│   ├── models.py
│   └── etc...
├── bar
│   ├── models.py
│   └── etc...
└── etc...
It's a good question. When you start programming in Django, they (the Django project team) tell you that an application is like a module, but they do not tell you how to do it.
You can do it as Rohan says in the comment to your question: "You can refer models from one app into another using 'appname.model_name' in the relationship fields." But you must keep in mind that the concept of an application in Django is meant to make these modules reusable. Accordingly, the best option would be to orient your applications toward services or another architecture that allows you to decouple the applications of your project; look at Django REST framework as an option.
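For reference, a relationship field referring to another app's model can use the 'app_label.ModelName' string form, which avoids importing the model directly. A minimal sketch with a hypothetical comments app:

# comments/models.py (hypothetical app)
from django.db import models

class Comment(models.Model):
    # refer to the Poll model from the polls app by its dotted label,
    # so no circular import is needed
    poll = models.ForeignKey('polls.Poll', on_delete=models.CASCADE)
    text = models.TextField()

Either way, imported or referenced by label, the model still maps to the single table in the project's configured database; importing it from another app does not create a second database.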

Use cases for new Django 1.4 project structure?

I guess this is sort of a followup question to Where should i create django apps in django 1.4? The final answer there seemed to be "nobody knows why Django changed the project structure" -- which seems a bit unsatisfactory.
We are starting up a new Django project, and currently we are following the basic structure outlined at http://www.deploydjango.com/django_project_structure/index.html:
├── project
│   ├── apps
│   │   ├── app1
│   │   └── app2
│   ├── libs
│   │   ├── lib1
│   │   └── lib2
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
But I think we also are anticipating a multi-developer environment comprising largely independent applications with common project-level components, so it seems cleaner to me to separate out the project and app paths.
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── apps
│   ├── app1
│   └── app2
├── libs
│   ├── lib1
│   └── lib2
└── manage.py
It's hard to come up with any specific, non-stylistic rationale for this, though. (I've mostly worked only with single-app projects before now, so I may be missing something here.)
Mainly, I'm motivated by the fact that Django 1.4 seems to be moving in the latter direction. I assume there is some rationale or expected use case that motivated this change, but I've only seen speculation on what it might be.
Questions:
What was the motivation for the 1.4 project structure change?
Are there use cases where having apps inside/outside of the project makes a non-trivial impact?
It's much easier to extract an app from a project because there are no more imports like this:
from projectname.appname.models import MyModel
Instead, you import them the same way you would import apps installed via a Python package: from appname.models import MyModel.
If you use i18n, this can also make an impact, because makemessages searches for translation strings in the current directory. A good way to translate the apps and the project using a single .po file is to create the locale folder outside the project dir:
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── app1
├── app2
├── locale
│   ├── en
│   └── de
└── manage.py
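With that layout, running makemessages from the directory that contains locale/ collects strings from the project and all apps into a single .po file per language (stock Django commands; the -l value is just an example):

django-admin makemessages -l de
django-admin compilemessages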
I marked the earlier response as an answer, but I ran into this blog post off the IRC archives that seems to have some additional info.
http://blog.tinbrain.net/blog/2012/mar/15/django-vs-pythonpath/
As I understand it, the gist is:
When you're developing, manage.py implicitly sets up PYTHONPATH to see the project-level code, with the result that import myapp works for an app defined inside the project.
When you deploy, you generally don't run manage.py, so you would have to say import myproject.myapp, thus things break on deployment if you don't know about this.
The "standard" fix is to add the project to PYTHONPATH, but this results in double-imports (myapp and myproject.myapp), which can generate weird behavior on things like signals.
So the 1.4 project structure seems mainly intended to eliminate the possibility of devs relying on an odd effect of manage.py.