I have multiple packages under a subdirectory under src/,
and running the tests for each package with go test works fine.
When I try to run all tests with go test ./..., the tests run but they fail.
The tests run against local database servers; each test file has global variables with db pointers.
I tried running the tests with -parallel 1 to prevent contention in the db, but the tests still fail.
What can be the issue here?
EDIT: some tests are failing on missing DB entries. I completely clear the DB before and after each test; the only reason I can think of for this happening is some contention between tests.
EDIT 2:
Each of my test files has two global variables (using mgo):
var session *mgo.Session
var db *mgo.Database
also it has the following setup and teardown functions:
func setUp() {
	s, err := cfg.GetDBSession()
	if err != nil {
		panic(err)
	}
	session = s
	db = cfg.GetDB(session)
	db.DropDatabase()
}
func tearDown() {
	db.DropDatabase()
	session.Close()
}
Each test starts with setUp() and defers tearDown().
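For reference, each test then follows a pattern like this (TestSomething is a placeholder name):
func TestSomething(t *testing.T) {
	setUp()
	defer tearDown()
	// ... exercise the code under test using the global db ...
}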
Also, the cfg package is:
package cfg
import (
	"labix.org/v2/mgo"
)
func GetDBSession() (*mgo.Session, error) {
	session, err := mgo.Dial("localhost")
	return session, err
}
func GetDB(session *mgo.Session) *mgo.Database {
	return session.DB("test_db")
}
EDIT 3:
I changed cfg to use a random database, and the tests passed.
It seems that tests from multiple packages run somewhat in parallel.
Is it possible to force go test to run everything sequentially across packages?
Update: As pointed out by @Gal Ben-Haim, adding the (undocumented) go test -p 1 flag builds and tests all packages in serial. As put by the testflag usage message in the Go source code:
-p=n: build and test up to n packages in parallel
Old answer:
When running go test ./..., the tests of the different packages are in fact run in parallel, even if you set parallel=1 (only tests within a specific package are guaranteed to be run one at a time). If it is important that the packages be tested in sequence, like when there is database setup/teardown involved, it seems like the only way right now is to use the shell to emulate the behavior of go test ./..., and forcing the packages to be tested one by one.
Something like this, for example, works in Bash:
find . -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test
The command first lists all the subdirectories containing *.go files. Then it uses sort -u to list each subdirectory only once (removing duplicates). Finally all the subdirectories containing go files get fed to go test via xargs. The -P1 indicates that at most one command is to be run at a time.
Unfortunately, this is a lot uglier than just running go test ./..., but it might be acceptable if it is put into a shell script or aliased into a function that's more memorable:
function gotest(){ find $1 -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test; }
Now all tests can be run in the current directory by calling:
gotest .
Apparently running go test -p 1 runs everything sequentially (including the build); I hadn't seen this argument in go help test or go help testflag.
I am assuming that, because the packages pass individually, you are also dropping the DB before each test.
Therefore it sounds like each package's tests expect the DB to start empty.
So between each package's test run the DB must be emptied. There are two ways around this; not knowing your entire situation, I will briefly explain both options:
Option 1. Test Setup
Add an init() function to each package's _test file, in which you remove the DB contents. This will be run before the init() method of the actual package:
func init() {
	fmt.Println("INIT TEST")
	// My test state initialization
	// Remove database contents
}
Assuming that the package also had a similar print line, you would see the following output (note that stdout is only displayed when a test fails or you supply the -v option):
INIT TEST
INIT PACKAGE
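For example, a minimal sketch of such a test-file init(), reusing the cfg helpers from the question (this assumes dropping the database is the reset each package expects):
func init() {
	fmt.Println("INIT TEST")

	// Start this package's tests from an empty database.
	s, err := cfg.GetDBSession()
	if err != nil {
		panic(err)
	}
	defer s.Close()
	cfg.GetDB(s).DropDatabase()
}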
Option 2. Mock the database
Create a mock for the database (unless the database is specifically what you are testing). The mock DB can always act as if the DB is blank at the start of each test.
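A minimal sketch of this idea, assuming the code under test can depend on a small interface instead of *mgo.Database directly (UserStore and its method are illustrative names):
package services

import (
	"errors"
	"testing"
)

// UserStore is the seam: production code implements it with mgo,
// tests implement it in memory.
type UserStore interface {
	FindUser(id string) (string, error)
}

// fakeUserStore starts empty for every test, like a freshly dropped DB.
type fakeUserStore struct {
	users map[string]string
}

func (f *fakeUserStore) FindUser(id string) (string, error) {
	name, ok := f.users[id]
	if !ok {
		return "", errors.New("user not found")
	}
	return name, nil
}

func TestFindUser(t *testing.T) {
	store := &fakeUserStore{users: map[string]string{"1": "alice"}}

	got, err := store.FindUser("1")
	if err != nil || got != "alice" {
		t.Fatalf("FindUser(1) = %q, %v; want \"alice\", nil", got, err)
	}
}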
Try out the following GitHub repository:
https://github.com/appleboy/golang-testing
Copy coverage.sh to /usr/local/bin/coverage and make it executable:
$ curl -fsSL https://raw.githubusercontent.com/appleboy/golang-testing/master/coverage.sh -o /usr/local/bin/coverage
$ chmod +x /usr/local/bin/coverage
I'm new to Go and I'm trying to run some tests from a Go app.
The tests run in a Docker container with Go 1.12.
My problem is that some tests appear to run correctly, but others do not.
Example:
I have a test function that I want to fail on purpose.
func TestLol(t *testing.T) {
	assert.EqualValues(t, 1, 2)
	t.Fail()
}
When I execute the container with docker run ... go test -v ./..., it should run all tests, and this function in particular should fail. But it doesn't fail: Go just logs an "ok" beside the test.
Then I tried to run only the folder with the test file that should fail.
Log:
ok github.com/eventials/csw-notifications/services 0.016s
2021/09/25 21:08:44 Command finished successfully.
Tests exited with status code: 0
Stopping csw-notifications_db_1 ... done
Stopping csw-notifications_broker_1 ... done
Going to remove csw-notifications_app_run_ed70597b5c20, csw-notifications_db_1, csw-notifications_broker_1
Removing csw-notifications_app_run_ed70597b5c20 ... done
Removing csw-notifications_db_1 ... done
Removing csw-notifications_broker_1 ... done
My question is: why doesn't Go output any log with a FAIL message for this file in particular?
I think it's somewhat related to this question, but as it didn't receive any answer I'm reposting it:
Why the tests are not running ? ( Golang ) - goapp test - bug?
EDIT: I'm editing this question to be more clear.
You can try adding -timeout.
If your test files contain test functions with the same names, rename them.
You can try changing the output with go test -v -json ./...
-timeout d
If a test binary runs longer than duration d, panic.
If d is 0, the timeout is disabled.
The default is 10 minutes (10m).
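For example, to make a hung test binary panic after 30 seconds (the duration here is arbitrary):
go test -v -timeout 30s ./...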
main.go
package main

import "fmt"

func Start(s ...string) {
	fmt.Println(s)
}
main_test.go
package main

import (
	"fmt"
	"testing"
)

func TestStart(t *testing.T) {
	Start("rocket")
	if 0 == 1 {
		t.Log("Houston, we have a problem")
		t.Fail()
	}
}

func ExampleStart() {
	fmt.Println("Ground Control to Major Tom")
	// Output:
	// Ground Control to Major Tom
}
Change the if condition to 0 == 0 to see the failure with some logs.
I found out that an import was causing this issue.
Test files importing porthos-go or porthos-go/mock weren't being run. Removing those imports fixed my problem.
lib: https://github.com/porthos-rpc/porthos-go
I still don't know why, but when I do I'll update this answer.
I'm defining my infrastructure in Terraform files. I like Terraform a lot, but I'm having trouble figuring out how to test. I have awspec, which is really nice and runs RSpec-like tests against the result of your build via the AWS API. But is there a way to do unit tests, like on the results of terraform plan? What kind of workflow are others using with Terraform?
I'm going to expand on Begin's answer with more information about Kitchen-Terraform.
Kitchen-Terraform is a set of open source plugins that run within Test-Kitchen. These are supposed to go into your Terraform module repository to test that module's functionality before it is used in a repository that creates the resources. Please feel free to check the documentation of those two projects for more details, but I will go through my recommendations for integration testing your Terraform code.
Install Ruby and Terraform.
For this example, the Terraform module repo will be called: my_terraform_module
mkdir -p my_terraform_module
cd my_terraform_module
mkdir -p test/integration/kt_suite/controls \
test/fixtures/tf_module/
Create a Gemfile:
source "https://rubygems.org/" do
gem "kitchen-terraform"
end
Install the necessary components (uses the Gemfile for the dependencies of kitchen-terraform)
gem install bundler
bundle install
Create the Test-Kitchen file .kitchen.yml - this brings together the testing framework, Test-Kitchen, and Kitchen-Terraform:
---
driver:
name: terraform
root_module_directory: test/fixtures/tf_module
parallelism: 4
provisioner:
name: terraform
transport:
name: ssh
verifier:
name: terraform
groups:
- name: basic
controls:
- file_check
- state_file
platforms:
- name: terraform
suites:
- name: kt_suite
Your Terraform code should be at the root of the Terraform module repository such as:
my_terraform_module/
|-- main.tf
Example code that can go in main.tf
resource "null_resource" "create_file" {
provisioner "local-exec" {
command = "echo 'this is my first test' > foobar"
}
}
Then we reference the Terraform module just like we would in Terraform live repos, but in a test fixture instead, in this file: test/fixtures/tf_module/main.tf
module "kt_test" {
source = "../../.."
}
From there you can run a Terraform apply, but it's done a little differently with Kitchen-Terraform and Test-Kitchen: you run a converge, which helps keep track of state and a couple of other items.
bundle exec kitchen converge
Now that you've seen your Terraform code do an apply, we need to test it. We can test the actual resources that were created, which would be like an integration test, but we can also test the state file, which is a semi unit test; I am not aware of anything that can currently do unit tests against the HCL code of Terraform itself.
Create an inspec default profile file: test/integration/kt_suite/inspec.yml
---
name: default
Create an Inspec control for your integration testing: test/integration/kt_suite/controls/basic.rb - I'm using a test for the example Terraform code I used earlier for the main.tf
# frozen_string_literal: true
control "file_check" do
describe file('.kitchen/kitchen-terraform/kt-suite-terraform/foobar') do
it { should exist }
end
end
And this is an example test that pulls information from the state file and checks whether something exists in it. This is a basic one, but you can definitely expand on this example.
# frozen_string_literal: true
terraform_state = attribute "terraform_state", {}
control "state_file" do
describe "the Terraform state file" do
subject do json(terraform_state).terraform_version end
it "is accessible" do is_expected.to match /\d+\.\d+\.\d+/ end
end
end
Then run Inspec controls with Test-Kitchen and Kitchen-Terraform:
bundle exec kitchen verify
I took a lot of this from the getting started guide and some of the tutorials over here: https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
We recently open sourced Terratest, our swiss army knife for testing infrastructure code.
Today, you're probably testing all your infrastructure code manually by deploying, validating, and undeploying. Terratest helps you automate this process:
Write tests in Go.
Use helpers in Terratest to execute your real IaC tools (e.g., Terraform, Packer, etc.) to deploy real infrastructure (e.g., servers) in a real environment (e.g., AWS).
Use helpers in Terratest to validate that the infrastructure works correctly in that environment by making HTTP requests, API calls, SSH connections, etc.
Use helpers in Terratest to undeploy everything at the end of the test.
Here's an example test for some Terraform code:
terraformOptions := &terraform.Options{
	// The path to where your Terraform code is located
	TerraformDir: "../examples/terraform-basic-example",
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created
// (deferred before the apply so cleanup runs even if the apply fails)
defer terraform.Destroy(t, terraformOptions)

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)

// Run `terraform output` to get the value of an output variable
instanceUrl := terraform.Output(t, terraformOptions, "instance_url")

// Verify that we get back a 200 OK with the expected text
// It can take a minute or so for the Instance to boot up, so retry a few times
expected := "Hello, World"
maxRetries := 15
timeBetweenRetries := 5 * time.Second
http_helper.HttpGetWithRetry(t, instanceUrl, 200, expected, maxRetries, timeBetweenRetries)
These are integration tests and, depending on what you're testing, can take 5-50 minutes. It's not fast (though using Docker and test stages, you can speed some things up), and you'll have to work to make the tests reliable, but it is well worth the time.
Check out the Terratest repo for docs and lots of examples of various types of infrastructure code and the corresponding tests for them.
From my research, this is a tough issue. Since Terraform is not meant to be a full-featured programming language, and since with Terraform you declare what resources you want rather than how to build them, trying to unit test doesn't really give you assurance that you are building resources the way you'd like without actually running an apply. This makes attempts at unit testing feel more like linting to me.
However, you could parse your HCL files with something like pyhcl, or parse your plan files; from my experience, though, this was a lot of work for little benefit (but I could be missing an easier method!).
Here are some alternatives if you want to test the results of your terraform applies:
kitchen-terraform is a tool for writing Test Kitchen specs for your infrastructure.
kitchen-verifier-awspec helps bring together awspec and kitchen-terraform, although I have not used it personally.
If you are using AWS, I have found AWS Config to be able to provide a lot of the same benefits as other infrastructure testing tools, without as much setup/maintenance. Although it is fairly new, and I have not used it extensively.
Also if you are paying for Terraform Premium you get access to Sentinel, which seems to provide a lot of similar benefits to AWS Config, however I have not used it personally.
In addition to the answers, I will add my two cents. I was not very happy using Go with Terratest, although it works perfectly well; Go is just not my favorite programming language. I looked for some frameworks in Java and found terraform-maven. At first glance I only found examples in Groovy, but since Groovy runs on the JVM, it is feasible to implement the same examples in Java.
I translated part of the S3PreProvisionSpec.groovy to Java. It tests this main.tf file.
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class S3PreProvisionTest {
private final String TF_CHANGE = "change";
private final String TF_AFTER = "after";
private final String TF_TAGS = "tags";
private final Map<String, String> mandatoryTags = Map.of(
"application_id", "cna",
"stack_name", "stacked",
"created_by", "f.gutierrez#yieldlab.de"
);
private Terraform terraform;
private TfPlan tfplan;
@BeforeAll
void setup() {
terraform = new Terraform().withRootDir("s3_pre_post_demo")
// .withProperties(Map.of("noColor", "true"))
;
tfplan = terraform.initAndPlan();
}
@AfterAll
void cleanup() {
terraform.destroy();
}
@Test
void mandatoryTagsForS3Resources() {
List<Map> s3Bucket = tfplan.getResourcesByType("aws_s3_bucket");
System.out.println("=========================");
s3Bucket.forEach(map -> {
Map tfChangeMap = (Map) map.get(TF_CHANGE);
Map tfAfterMap = (Map) tfChangeMap.get(TF_AFTER);
Map tfTagsMap = (Map) tfAfterMap.get(TF_TAGS);
assertEquals(3, tfTagsMap.size());
mandatoryTags.forEach((k, v) -> {
assertEquals(v, tfTagsMap.get(k));
});
try {
JSONObject jsonObject = new JSONObject(map);
JSONObject jsonChange = jsonObject.getJSONObject(TF_CHANGE);
JSONObject jsonAfter = jsonChange.getJSONObject(TF_AFTER);
JSONObject jsonTags = jsonAfter.getJSONObject(TF_TAGS);
System.out.println(">>>>>>>>>>>>>>>>>>>> " + jsonTags.toString());
mandatoryTags.forEach((k, v) -> {
try {
assertEquals(v, jsonTags.getString(k));
} catch (JSONException e) {
e.printStackTrace();
}
});
} catch (JSONException e) {
e.printStackTrace();
}
});
}
}
One approach is to output the plan to a file using -out=tempfile, then run a script to validate whatever you're trying to check; if everything passes, you can pass the file to the apply command (see the sketch after the link below).
look at -out here:
https://www.terraform.io/docs/commands/plan.html
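As a sketch of that workflow (validate.sh stands in for whatever checks you write yourself):
terraform plan -out=terraform.plan
./validate.sh terraform.plan
terraform apply terraform.plan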
You can use github.com/palantir/tfjson to parse a .plan file to JSON.
There is an issue at the moment that gives an "unknown plan file version: 2" error. This is because the vendored version of Terraform is too old.
The fix is:
go get github.com/palantir/tfjson
cd $GOPATH/src/github.com/palantir/tfjson
rm -rf vendor
go get -v ./...
There is then an error in ../../hashicorp/terraform/config/testing.go. To fix it, just change the line
t.Helper()
to
//t.Helper()
Run go get again and then go install:
go get -v ./...
go install ./...
You should then be able to do the following, which will produce JSON output:
terraform plan --out=terraform.plan
tfjson terraform.plan
I'm trying to run the unit tests I've created following Odoo's documentation.
I've built my module like this:
module_test
- __init__.py
- __openerp__.py
- ...
- tests
  - __init__.py
  - test_1.py
Inside module_test/tests/__init__.py, I have import test_1.
Inside module_test/tests/test_1.py, I have import tests plus a test scenario I've written.
Then I launch the command line to run the server, adding -u module_test --log-level=test --test-enable to update the module and activate the test run.
The shell returns: "All post-tested in 0.00s, 0 queries".
So in fact, no tests are run.
I then added a syntax error so the file couldn't be compiled by the server, but the shell returned the same sentence. It looks like the file is ignored and the server is not even trying to compile it... I do not understand why.
I've checked some Odoo source modules, the 'sale' one for example.
I tried to run the sale tests; the shell returned the same value as before.
I added a syntax error inside the sale tests; the shell returned the same value again and again.
Does anyone have an idea about this unexpected behavior?
You should try using the post_install decorator for the test class.
Example:
from openerp.tests import common

@common.post_install(True)
class TestPost(common.TransactionCase):
    def test_post_method(self):
        response = self.env['my_module.my_model'].create_post('hello')
        self.assertEqual(response['success'], True)
To make the tests perform faster without updating your module, you should be able to run tests without
-u module_test
if you use
--load=module_test
I have to admit that Odoo's testing documentation is really bad. It took me a week to figure out how to make unit testing work in Odoo.
I am using Robot Framework to automate onboard unit testing of a Linux based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module with a run.sh to be executed to run the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names in an array, and this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories in Directory    /data/tests
Then another function (Module Test) basically runs a FOR loop over the element list, executes the run.sh in each subdirectory, collects log data, and logs it to the log.html file.
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, and under the FOR loop there is a 'var' entry for each element (test module). Under each 'var' entry are the results of that module's execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules/elements fails, I do not get accurate results: I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ..., Module N, with logs and pass/fail for each one. Given that the number of modules can vary from execution to execution, I cannot create static test cases; I need to be able to dynamically create the test cases once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.
You can write a simple script that dynamically creates the robot test file by reading /data/tests/module*, creating one test case for each module. In each test case, simply run the operating system command (the run.sh) and check the return code.
This way, you get one single test suite with many test cases, each representing a module.
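A hedged sketch of that idea in Go (module names are passed on the command line here, since in the setup above the directory listing comes from the device; the generated file also assumes an SSH connection is opened beforehand, e.g. in a suite setup):

package main

import (
	"fmt"
	"os"
)

// Writes modules.robot with one test case per module name given on the
// command line. Illustrative only; adapt paths and keywords as needed.
func main() {
	f, err := os.Create("modules.robot")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fmt.Fprintln(f, "*** Settings ***")
	fmt.Fprintln(f, "Library    SSHLibrary")
	fmt.Fprintln(f, "")
	fmt.Fprintln(f, "*** Test Cases ***")
	for _, mod := range os.Args[1:] {
		fmt.Fprintf(f, "%s\n", mod)
		fmt.Fprintf(f, "    ${rc}=    Execute Command    /data/tests/%s/run.sh    return_stdout=False    return_rc=True\n", mod)
		fmt.Fprintln(f, "    Should Be Equal As Integers    ${rc}    0")
	}
}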
Consider writing a bash script that runs a robot test for each module and then merges the reports into one report with the rebot script. Use the --name parameter of pybot to differentiate the tests in the report.
Is there an established best practice for separating unit tests and integration tests in Go (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run slower). So, I want to be able to control whether or not to include the integration tests when I say go test.
The most straightforward technique would seem to be to define an -integration flag in main:
var runIntegrationTests = flag.Bool("integration", false,
	"Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
	this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
	f, err := foo.Connect(*fooAddr)
	// ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
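Putting those rules together, a minimal integration_test.go might look like this (the package name, flag name, and default address are placeholders):

// +build integration

package myservice

import (
	"flag"
	"testing"
)

var fooAddr = flag.String("foo-addr", "localhost:8080", "address of the foo service")

func TestFooIntegration(t *testing.T) {
	// Would dial *fooAddr and exercise the real service here.
	t.Log("integration test would connect to", *fooAddr)
}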
To elaborate on my comment to @Ainar-G's excellent answer: over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
	t.Parallel()
	...
}

func TestInvalidServiceFunc3(t *testing.T) {
	t.Parallel()
	...
}

func TestPostgresVersionIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test")
	}
	...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if it is running under the -short flag directive.
Basically, the spec goes: "write all tests normally; if it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
This provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests, unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you most likely are using a Makefile, where you can have simple directives to use go test -short (a hypothetical sketch follows below). Or just put the command in your README.md file and call it a day.
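For example, a couple of hypothetical Makefile targets:

test:
	go test -v -short ./...

test-integration:
	go test -v ./...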
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution for the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
	"flag"
	"regexp"
	"testing"
)

func TestIntegration(t *testing.T) {
	if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}

	t.Parallel()

	t.Run("HelloWorld", testHelloWorld)
	t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. It requires a simple convention for tests, but it's less error prone. A further improvement could be extracting the code into a helper function, as sketched below.
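For instance, a hedged sketch of that helper (the package and function names are illustrative):

package testhelper

import (
	"flag"
	"regexp"
	"testing"
)

// SkipUnlessRequested skips the calling test unless its name was requested
// explicitly via go test -run.
func SkipUnlessRequested(t *testing.T) {
	t.Helper()

	m := flag.Lookup("test.run").Value.String()
	if m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}
}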
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode, or flags; see here.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
	t.Helper()

	if os.Getenv("INTEGRATION") == "" {
		t.Skip("skipping integration tests, set environment variable INTEGRATION")
	}
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
	IntegrationTest(t)

	// ...
}
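With the helper in place, integration tests run only when the variable is set, for example:

INTEGRATION=true go test ./...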
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if passing relies on external dependencies.
The problem with the flag package is that it will work until you have integration tests across different packages, where some will run flag.Parse() and some will not, which will lead to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible, robust and require the least amount of code with no visible downsides.