How to generate coverage for multiple packages using go test in custom folders?

We have the following project structure:
├── Makefile
├── ...
├── src
│   ├── app
│   │   ├── main.go
│   │   ├── models
│   │   │   ├── ...
│   │   │   └── dao.go
│   │   ├── ...
│   │   └── controllers
│   │       ├── ...
│   │       └── pingController.go
│   └── test
│       ├── all_test.go
│       ├── ...
│       └── controllers_test.go
└── vendor
    └── src
        ├── github.com
        ├── golang.org
        └── gopkg.in
I want to measure coverage of the packages in src/app with the tests in src/test. Currently I generate the coverage profile by running a custom script that measures coverage for each package in app and then merges all the coverage profiles into one file. Recently I heard that in go1.10 we are able to generate coverage for multiple packages.
So I tried to replace that script with a one-liner, and ran:
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all ./src/test/...
It gives me "ok test 0.475s coverage: 0.0% of statements in all"
When I do
cd src/test/
GOPATH=${PROJECT_DIR}:${PROJECT_DIR}/vendor go test -covermode count -coverprofile cover.out -coverpkg all
The logs show that the specs run and the tests pass, but I still get "coverage: 0.0% of statements in all" and an empty cover.out.
What am I missing to properly compute coverage of packages in app by tests in test?

You can't with the current state of go test, but you can always use third-party scripts.
https://github.com/grosser/go-testcov
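For reference, here is a rough sketch of the kind of per-package merge script the question describes (the package import paths are assumptions based on the tree above, not taken from the question):

echo "mode: count" > cover.out
for pkg in app/models app/controllers; do
    go test -covermode count -coverprofile pkg.out -coverpkg "$pkg" ./src/test/...
    tail -n +2 pkg.out >> cover.out   # strip the duplicate "mode:" header before merging
done
rm -f pkg.out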

Short answer:
go test -race -coverprofile=coverage.txt -covermode=atomic ./... # Run all the tests with the race detector enabled
go test -bench=. -benchmem ./... # Run all the benchmarks and print memory allocation statistics
To create a test for Go code, you have to create a file (in the same folder as the code under test) that has the same name as the code, with "_test" appended to the name. The package has to be the same too.
So, if I have a Go file called strings.go, the corresponding test suite has to be named strings_test.go.
After that, you have to create a function that takes t *testing.T as input, and the name of the function has to start with the word Test or Benchmark.
So, if strings.go contains a function called IsUpper, the related test case is a function called TestIsUpper(t *testing.T).
If you need a benchmark, you substitute the word Test with Benchmark, so the name of the function will be BenchmarkIsUpper, and the parameter it takes as input is b *testing.B.
You can have a look at the following repository to see the tree structure necessary for executing tests in Go: https://github.com/alessiosavi/GoGPUtils.
There you can find benchmarks and test cases.
Here is an example of the tree structure:
├── string
│   ├── stringutils.go
│   └── stringutils_test.go
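As a minimal sketch of these rules (the IsUpper implementation here is hypothetical, included only so the example is self-contained):

// stringutils.go
package stringutils

import "unicode"

// IsUpper reports whether every letter in s is upper case.
func IsUpper(s string) bool {
    for _, r := range s {
        if unicode.IsLetter(r) && !unicode.IsUpper(r) {
            return false
        }
    }
    return true
}

// stringutils_test.go
package stringutils

import "testing"

func TestIsUpper(t *testing.T) {
    if !IsUpper("HELLO") {
        t.Error(`IsUpper("HELLO") = false, want true`)
    }
    if IsUpper("Hello") {
        t.Error(`IsUpper("Hello") = true, want false`)
    }
}

func BenchmarkIsUpper(b *testing.B) {
    for i := 0; i < b.N; i++ {
        IsUpper("HELLO WORLD")
    }
}

Running go test ./string/... executes the test; go test -bench=. ./string/... runs the benchmark as well.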

Related

How do I run unit tests from smaller applications inside of a parent directory?

I'm working in a repository with multiple smaller applications for various lambdas. I'd like to be able to run cargo test from the top level directory, but I can't seem to find a way to get this to work since the files aren't nested within a top level src directory.
├── cloudformation
├── apps
│   ├── app1
│   │   └── src
│   └── app2
│       └── src
└── otherStuff
Ideally I could run cargo test from the top level and it would dig into apps and run tests from the src directory nested within each individual app. Is there a way to accomplish this?
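For what it's worth, the usual way to get this behaviour with Cargo (an assumption on my part, not taken from this thread) is a workspace: a top-level Cargo.toml that lists each app as a member:

[workspace]
members = ["apps/app1", "apps/app2"]

With that in place, cargo test run from the top-level directory builds and tests every member crate.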

How to run tests and get coverage in a golang application with a particular structure? [duplicate]

This question already has answers here:
How to detect code-coverage of separated folders in GO?
(3 answers)
Closed 1 year ago.
Let's say my golang project has a directory structure like this:
├── go.mod
├── main.go
├── pack
│   └── runner.go
└── test
    └── pack
        └── runner_test.go
How do I run the tests and get the coverage?
Simply running go test -cover does not work with this directory structure. I get [no test files]. It only works when runner_test.go and runner.go are in the same directory.
It's recommended for the code and its tests to be in the same directory, because they're typically in the same package.
Moreover, you can run all tests in a module from its root directory with go test ./... -- this will run tests of all packages in the module.
I'd modify your code structure slightly to be:
├── go.mod
├── main.go
└── pack
    ├── runner.go
    └── runner_test.go
This assumes the code in runner.go is in package pack (note that in Go, the package name conventionally matches the name of the containing directory).
See Testing in the official "How to Write Go Code", and Add a test in the Go tutorial.
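With that layout, a standard invocation from the module root produces coverage for all packages, e.g.:

go test -cover ./...                  # print a coverage percentage per package
go test -coverprofile=cover.out ./... # write one profile covering all tested packages
go tool cover -html=cover.out         # inspect the profile in a browser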

Django & coverage - .coveragerc omit not working

I have a Django project with multiple applications. In one application, I have a library already tested using unittest + coverage.
When I run the tests of the Django project, I would like to omit this folder.
My project architecture is:
project/
├── application1/
│   ├── tests.py
│   ├── views.py
│   ├── ...
│   └── my_lib/ << The lib I want to omit
│       ├── tests.py
│       ├── script.py
│       ├── __init__.py
│       └── ...
├── application2/
│   ├── tests.py
│   ├── views.py
│   └── ...
├── application3/
│   ├── tests.py
│   ├── views.py
│   └── ...
├── .coveragerc
├── runtest.bat
├── manage.py
└── ...
runtest.bat is:
CALL activate ./venv
coverage erase
coverage run manage.py test
coverage html
PAUSE
Based on several tutorials and SO questions, I've tried several .coveragerc files, but none of them properly skips the library. Running the tests also fails, because the library's own tests try to import it with an incorrect relative path.
.coveragerc is:
[run]
omit =
    */application1/my_lib/*

[report]
exclude_lines =
    if __name__ == .__main__.:
show_missing = True
I also tried:
[run]
source = .
omit =
    /application1/my_lib/*

[run]
source = project/*
omit =
    */application1/my_lib/*

[run]
source = .
omit =
    */application1/my_lib/*
Do you have any clue what I am doing wrong?
For information:
Django version is 2.2.5
Coverage version is 4.5.4
Python version is 3.7
A few sources:
Django Coverage
Coverage Documentation
SO question
Thanks in advance
EDIT 1:
Just as background: my_lib is mainly a file containing a class. This code is just embedded in the application to be used "like" a standard library.
In application1/views.py, I simply have a from . import my_lib.
In application1/my_lib/__init__.py, I have from .script import MyClass
The objective is simply to be able to call my_lib.MyClass.do_something() in my views.
Now, the reason why I would like to exclude this "library" from the coverage is that it was developed outside the application. It has its own unit tests in application1/my_lib/tests.py, starting with from script import MyClass.
When I run the coverage from the root of the project, Python cannot find script.py relative to the root, so it raises:
File "path_to/project/application1/my_lib/tests.py", line 3
    from script import MyClass
ModuleNotFoundError: No module named 'script'
In the worst-case scenario, I could move this library into the site-packages of my virtual environment, but I would like to avoid that (there will probably be multiple similar my_lib libraries, at most one per application).
EDIT 2:
As a temporary solution, I simply renamed application1/my_lib/tests.py to application1/my_lib/script_tests.py. Since no file name starts with test, test discovery ignores this folder entirely.
The other alternative, running the library's tests at the same time as the project's, would require quite a lot of updates because of the relative paths used in several files.
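For what it's worth, one likely fix for the import error itself (my assumption, not something stated in the original post) is to use the same relative import in the library's tests that __init__.py already uses, so the module resolves when test discovery starts from the project root:

# application1/my_lib/tests.py (hypothetical fix)
from .script import MyClass  # relative import; resolves once my_lib is loaded as a package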

How to organize test resource files when using waf

I am using Google Test in a project with waf as the build system. I want to know an effective way of dealing with resource files.
For a directory structure like the following:
MyProject
├── build
├── src
│   ├── do_something.cpp
│   └── do_something.h
├── test
│   ├── unit_test.cpp
│   └── resources
│       ├── input1.txt
│       └── input2.txt
└── wscript
After building, to run the tests from the build directory, I would need the resource files to be copied across. The build directory would look like:
MyProject
├── build
│   ├── test
│   │   ├── resources
│   │   │   ├── input1.txt
│   │   │   └── input2.txt
│   │   └── unit_test
To achieve this, my current wscript is:
def options(opt):
    opt.load('compiler_cxx')

def configure(conf):
    conf.load('compiler_cxx')

def build(bld):
    bld.stlib(source='src/do_something.cpp',
              target='mylib',
              includes='src')
    bld.exec_command("cp -r test/resources build/test")
    bld.program(source='test/unit_test.cpp',
                includes='src',
                target='test/unit_test',
                use='mylib')
Using the bld.exec_command like this seems hacky. What is a better way? How are other people organizing their test resources with waf?
I am using waf 1.9.5.
The easiest way is of course to change your program so that it can read resources from an arbitrary location. Why litter the file system with multiple copies of the same files?
That said, you can easily copy a directory tree recursively using:
for f in ctx.path.ant_glob('test/resources/**/*'):
    ctx(features='subst', source=f.srcpath(), target=f.srcpath())
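For instance, wired into the build function from the question (a sketch; bld is the build context, and the path matches the question's test/resources directory):

def build(bld):
    bld.program(source='test/unit_test.cpp',
                includes='src',
                target='test/unit_test',
                use='mylib')
    # mirror each resource file into the build tree next to the test binary
    for f in bld.path.ant_glob('test/resources/**/*'):
        bld(features='subst', source=f.srcpath(), target=f.srcpath())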

Use cases for new Django 1.4 project structure?

I guess this is sort of a follow-up question to Where should i create django apps in django 1.4? The final answer there seemed to be "nobody knows why Django changed the project structure" -- which seems a bit unsatisfactory.
We are starting up a new Django project, and currently we are following the basic structure outlined at http://www.deploydjango.com/django_project_structure/index.html:
├── project
│   ├── apps
│   │   ├── app1
│   │   └── app2
│   ├── libs
│   │   ├── lib1
│   │   └── lib2
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
But I think we also are anticipating a multi-developer environment comprising largely independent applications with common project-level components, so it seems cleaner to me to separate out the project and app paths.
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── apps
│   ├── app1
│   └── app2
├── libs
│   ├── lib1
│   └── lib2
└── manage.py
It's hard to come up with any specific, non-stylistic rationale for this, though. (I've mostly worked only with single-app projects before now, so I may be missing something here.)
Mainly, I'm motivated by the fact that Django 1.4 seems to be moving in the latter direction. I assume there is some rationale or expected use case that motivated this change, but I've only seen speculation on what it might be.
Questions:
What was the motivation for the 1.4 project structure change?
Are there use cases where having apps inside/outside of the project makes a non-trivial impact?
It's much easier to extract an app from a project, because there are no more imports like this:
from projectname.appname.models import MyModel
Instead, you import it the same way you would import an app installed via a Python package.
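For example (illustrative, with an app named app1):

from app1.models import MyModel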
If you use i18n, this could also make an impact, because makemessages searches for translation strings in the current directory. A good way to translate the apps and the project using a single .po file is to create the locale folder outside the project dir:
├── project
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── app1
├── app2
├── locale
│   ├── en
│   └── de
└── manage.py
I marked the earlier response as the answer, but I ran into this blog post via the IRC archives, which seems to have some additional info.
http://blog.tinbrain.net/blog/2012/mar/15/django-vs-pythonpath/
As I understand it, the gist is:
When you're developing, manage.py implicitly sets up PYTHONPATH to see the project-level code, with the result that import myapp works for an app defined inside the project.
When you deploy, you generally don't run manage.py, so you would have to say import myproject.myapp, thus things break on deployment if you don't know about this.
The "standard" fix is to add the project to PYTHONPATH, but this results in double-imports (myapp and myproject.myapp), which can generate weird behavior on things like signals.
So the 1.4 project structure seems mainly intended to eliminate the possibility of devs relying on an odd effect of manage.py.
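A quick way to see why the double import is harmful (an illustrative sketch, not from the blog post): the same file imported under two names yields two distinct module objects, so module-level side effects such as signal registration run twice:

# run from a deployment where both the project dir and its parent are on PYTHONPATH
import myapp.models as a
import myproject.myapp.models as b
print(a is b)  # False: two separate module objects, so signal handlers register twice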