I'm defining my infrastructure in Terraform files. I like Terraform a lot, but I'm having trouble figuring out how to test. I have awspec, which is really nice and runs RSpec-like tests against the result of your build via the AWS API. But is there a way to do unit tests, like on the results of terraform plan? What kind of workflow are others using with Terraform?
I'm going to expand on Begin's answer with more information about Kitchen-Terraform.
Kitchen-Terraform is a set of open source plugins that run within Test-Kitchen. They are meant to live in your Terraform module repository and test that module's functionality before it is used in a repository that actually creates the resources. Feel free to check the documentation of those two projects for more details, but I will walk through my recommendations for integration testing your Terraform code.
Install Ruby and Terraform.
For this example, the Terraform module repo will be called: my_terraform_module
mkdir -p my_terraform_module
cd my_terraform_module
mkdir -p test/integration/kt_suite/controls \
test/fixtures/tf_module/
Create a Gemfile:
source "https://rubygems.org/" do
gem "kitchen-terraform"
end
Install the necessary components (this uses the Gemfile to resolve the dependencies of kitchen-terraform):
gem install bundler
bundle install
Create the Test-Kitchen configuration file .kitchen.yml - this ties together the testing framework, Test-Kitchen, and Kitchen-Terraform:
---
driver:
  name: terraform
  root_module_directory: test/fixtures/tf_module
  parallelism: 4

provisioner:
  name: terraform

transport:
  name: ssh

verifier:
  name: terraform
  groups:
    - name: basic
      controls:
        - file_check
        - state_file

platforms:
  - name: terraform

suites:
  - name: kt_suite
Your Terraform code should be at the root of the Terraform module repository, for example:
my_terraform_module/
|-- main.tf
Example code that can go in main.tf
resource "null_resource" "create_file" {
provisioner "local-exec" {
command = "echo 'this is my first test' > foobar"
}
}
Then we reference the Terraform module just as we would in a live Terraform repo - but from a test fixture instead, in the file test/fixtures/tf_module/main.tf:
module "kt_test" {
source = "../../.."
}
From there you can run a Terraform apply, but it's done a little differently with Kitchen-Terraform and Test-Kitchen: you run a converge, which keeps track of state and a couple of other items.
bundle exec kitchen converge
Now that you've seen your Terraform code do an apply, we need to test it. We can test the actual resources that were created, which is more like an integration test, but we can also test the state file, which is a semi-unit test; I am not aware of anything that can currently run unit tests against Terraform's HCL code itself.
Create a default InSpec profile file, test/integration/kt_suite/inspec.yml:
---
name: default
Create an InSpec control for your integration testing in test/integration/kt_suite/controls/basic.rb - I'm using a test that matches the example Terraform code in the main.tf above:
# frozen_string_literal: true

control "file_check" do
  describe file('.kitchen/kitchen-terraform/kt-suite-terraform/foobar') do
    it { should exist }
  end
end
And this is an example test that pulls information from the state file and tests whether something exists in it. This is a basic one, but you can definitely expand on this example.
# frozen_string_literal: true

terraform_state = attribute "terraform_state", {}

control "state_file" do
  describe "the Terraform state file" do
    subject do json(terraform_state).terraform_version end
    it "is accessible" do is_expected.to match /\d+\.\d+\.\d+/ end
  end
end
Then run the InSpec controls with Test-Kitchen and Kitchen-Terraform:
bundle exec kitchen verify
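If you want the whole cycle (create, converge, verify, destroy) in one shot, Test-Kitchen also provides a single command for that:
bundle exec kitchen test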
I took a lot of this from the getting started guide and some of the tutorials over here: https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
We recently open sourced Terratest, our swiss army knife for testing infrastructure code.
Today, you're probably testing all your infrastructure code manually by deploying, validating, and undeploying. Terratest helps you automate this process:
Write tests in Go.
Use helpers in Terratest to execute your real IaC tools (e.g., Terraform, Packer, etc.) to deploy real infrastructure (e.g., servers) in a real environment (e.g., AWS).
Use helpers in Terratest to validate that the infrastructure works correctly in that environment by making HTTP requests, API calls, SSH connections, etc.
Use helpers in Terratest to undeploy everything at the end of the test.
Here's an example test for some Terraform code:
terraformOptions := &terraform.Options {
// The path to where your Terraform code is located
TerraformDir: "../examples/terraform-basic-example",
}
// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)
// At the end of the test, run `terraform destroy` to clean up any resources that were created
defer terraform.Destroy(t, terraformOptions)
// Run `terraform output` to get the value of an output variable
instanceUrl := terraform.Output(t, terraformOptions, "instance_url")
// Verify that we get back a 200 OK with the expected text
// It can take a minute or so for the Instance to boot up, so retry a few times
expected := "Hello, World"
maxRetries := 15
timeBetweenRetries := 5 * time.Second
http_helper.HttpGetWithRetry(t, instanceUrl, 200, expected, maxRetries, timeBetweenRetries)
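Assuming the snippet above sits inside a normal Go test function (e.g. func TestTerraformBasicExample(t *testing.T), a name used here just for illustration) in a *_test.go file, you run it with the standard Go tooling, raising the default timeout because infrastructure tests are slow; the 30m value is only an example:
go test -v -timeout 30m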
These are integration tests, and depending on what you're testing, can take 5 - 50 minutes. It's not fast (though using Docker and test stages, you can speed some things up), and you'll have to work to make the tests reliable, but it is well worth the time.
Check out the Terratest repo for docs and lots of examples of various types of infrastructure code and the corresponding tests for them.
From my research this is a tough issue. Since Terraform is not meant to be a full-featured programming language, and you are declaring what resources you want with Terraform rather than how to build them, trying to unit test doesn't really give you assurance that you are building resources the way you'd like without actually running an apply. This makes attempts at unit testing feel more like linting to me.
However, you could parse your HCL files with something like pyhcl, or parse your plan files; in my experience this was a lot of work for little benefit (but I could be missing an easier method!).
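For what it's worth, a minimal pyhcl sketch of that linting-style approach could look like the following; the file name and the tag check are just illustrations, not something from a real project:
import hcl  # pip install pyhcl

with open("main.tf") as fp:
    config = hcl.load(fp)  # parses HCL into plain Python dicts

# Example check: every aws_s3_bucket resource must declare tags.
for name, attrs in config.get("resource", {}).get("aws_s3_bucket", {}).items():
    assert "tags" in attrs, f"bucket {name} is missing tags"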
Here are some alternatives if you want to test the results of your terraform apply runs:
kitchen-terraform is a tool for writing Test Kitchen specs for your infrastructure.
kitchen-verifier-awspec helps bring together awspec and kitchen-terraform, although I have not used it personally.
If you are using AWS, I have found that AWS Config can provide a lot of the same benefits as other infrastructure testing tools without as much setup/maintenance, although it is fairly new and I have not used it extensively.
Also, if you are paying for Terraform Premium you get access to Sentinel, which seems to provide benefits similar to AWS Config, though I have not used it personally.
In addition to the other answers, I will add my two cents. I was not very happy using Go with Terratest, although it works perfectly well; Go is just not my favorite programming language. I looked for frameworks in Java and found terraform-maven. At first glance I only found examples in Groovy, but since Groovy runs on the JVM, it is feasible to implement the same examples in Java.
I translated part of S3PreProvisionSpec.groovy to Java. It tests this main.tf file:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class S3PreProvisionTest {

    private final String TF_CHANGE = "change";
    private final String TF_AFTER = "after";
    private final String TF_TAGS = "tags";

    private final Map<String, String> mandatoryTags = Map.of(
            "application_id", "cna",
            "stack_name", "stacked",
            "created_by", "f.gutierrez@yieldlab.de"
    );

    private Terraform terraform;
    private TfPlan tfplan;

    @BeforeAll
    void setup() {
        terraform = new Terraform().withRootDir("s3_pre_post_demo")
                // .withProperties(Map.of("noColor", "true"))
                ;
        tfplan = terraform.initAndPlan();
    }

    @AfterAll
    void cleanup() {
        terraform.destroy();
    }

    @Test
    void mandatoryTagsForS3Resources() {
        List<Map> s3Bucket = tfplan.getResourcesByType("aws_s3_bucket");

        System.out.println("=========================");

        s3Bucket.forEach(map -> {
            Map tfChangeMap = (Map) map.get(TF_CHANGE);
            Map tfAfterMap = (Map) tfChangeMap.get(TF_AFTER);
            Map tfTagsMap = (Map) tfAfterMap.get(TF_TAGS);

            assertEquals(3, tfTagsMap.size());

            mandatoryTags.forEach((k, v) -> {
                assertEquals(v, tfTagsMap.get(k));
            });

            try {
                JSONObject jsonObject = new JSONObject(map);
                JSONObject jsonChange = jsonObject.getJSONObject(TF_CHANGE);
                JSONObject jsonAfter = jsonChange.getJSONObject(TF_AFTER);
                JSONObject jsonTags = jsonAfter.getJSONObject(TF_TAGS);

                System.out.println(">>>>>>>>>>>>>>>>>>>> " + jsonTags.toString());

                mandatoryTags.forEach((k, v) -> {
                    try {
                        assertEquals(v, jsonTags.getString(k));
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                });
            } catch (JSONException e) {
                e.printStackTrace();
            }
        });
    }
}
One approach is to write the plan to a file using terraform plan -out=tempfile, then run a script to validate whatever you're trying to check, and if everything passes, feed the file into the apply command.
Look at -out here:
https://www.terraform.io/docs/commands/plan.html
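A rough sketch of that workflow (validate_plan.sh is a hypothetical script you would write yourself to inspect the plan):
terraform plan -out=tempfile
# validate_plan.sh is a placeholder for your own checks against the saved plan
./validate_plan.sh tempfile && terraform apply tempfile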
You can use github.com/palantir/tfjson to parse a .plan file to json.
There is an issue at the moment that gives an "unknown plan file version: 2" error. This is because the vendored version of Terraform is too old.
The fix is:
go get github.com/palantir/tfjson
cd $GOPATH/src/github.com/palantir/tfjson
rm -rf vendor
go get -v ./...
There is then an error in ../../hashicorp/terraform/config/testing.go. To fix it, just change the line
t.Helper()
to
//t.Helper()
Run go get again and then go install
go get -v ./...
go install ./...
You should then be able to do the following, which will produce JSON output:
terraform plan --out=terraform.plan
tfjson terraform.plan
I want to test the logic of my build.gradle script.
An excerpt of the script would be:
(...other tasks and methods...)
def readCustomerFile(File file) {
    def schema = <prepare schema>
    def report = schema.validate(JsonLoader.fromFile(file))

    if (!report.success) {
        throw new GradleException("File is not valid! " + report.toString())
    }

    return new groovy.json.JsonSlurper().parse(file)
}
task readFiles {
    mustRunAfter 'prepareCustomerProject'
    doLast {
        if (System.env.CUSTOMER_FILE_OVERRIDE) {
            project.ext.customerFileData = readCustomerFile(System.env.CUSTOMER_FILE_OVERRIDE)
        }
        else if (customerFile.exists()) {
            project.ext.customerFileData = readCustomerFile(customerFile)
        }
        else {
            throw new GradleException("Customer File is not provided! It is expected to be in CUSTOMER_FILE_OVERRIDE variable or in ${customerFile}")
        }
    }
}
(...other tasks and methods...)
I would like to test both the method and the task itself.
The 'prepareCustomerProject' task is quite lengthy in execution, but in a 'real' setup it does the magic necessary to set properties needed not only by the task above.
For testing I only want to, e.g., run the readFiles task and validate the results, making sure that either the property on the project was correctly set or an exception was thrown.
I have looked into the Gradle TestKit, but it is not what I need, as I was unable to find anything that would allow me to, e.g., inspect the project.
I have seen the Guide for Testing Gradle Scripts, but that post is quite old and does not address my need/problem. I have also had a look at the Gradle docs on Testing Build Logic with TestKit, but GradleRunner does not seem to offer any real inspection or project-preparation abilities.
Plus, it would make us use JUnit, effectively adding a whole class structure only for testing purposes. Not clean, and hard to maintain.
Googling gradle + test + task and other variations finds tons of ways of running xUnit tests, but that's not what I need here.
Summarizing, what I need is:
test Gradle tasks and methods from build.gradle in isolation (TestKit will run a task with all its dependencies, which I don't want)
prepare the project before the test run (TestKit does not seem to allow this)
verify task/method output
Has anyone successfully done this?
Or am I approaching this the wrong way?
I'm fairly new to Gradle and searching for good options to test my build scripts.
I have created a simple Puppet 4 class and a unit test to go along with it as follows (after executing touch metadata.json; rspec-puppet-init while in modules/test/):
# modules/test/manifests/hello_world1.pp
class test::hello_world1 {
  file { "/tmp/hello_world1":
    content => "Hello, world!\n"
  }
}
# modules/test/spec/classes/test__hello_world1_spec.rb
require 'spec_helper'

describe 'test::hello_world1' do
  it { is_expected.to compile }
  it { is_expected.to contain_file('/tmp/hello_world1')\
    .with_content(/^Hello, world!$/) }
end
I can successfully run the unit test by executing rspec spec/classes/test__hello_world1_spec.rb while in modules/test/.
I would now like to proceed to a slightly more advanced class that uses code from another module, namely concat (the module has already been installed in modules/concat):
# modules/test/manifests/hello_world2.pp
class test::hello_world2
{
  concat { "/tmp/hello_world2":
    ensure => present,
  }
  concat::fragment { "/tmp/hello_world2_01":
    target  => "/tmp/hello_world2",
    content => "Hello, world!\n",
    order   => '01',
  }
}
# modules/test/spec/classes/test__hello_world2_spec.rb
require 'spec_helper'

describe 'test::hello_world2' do
  it { is_expected.to compile }
  # ...
end
When I attempt to run this unit test with rspec spec/classes/test__hello_world2_spec.rb while in modules/test, I receive an error message that includes:
Failure/Error: it { is_expected.to compile } error during compilation:
Evaluation Error: Error while evaluating a Resource Statement, Unknown
resource type: 'concat'
I suspect the root cause is that rspec cannot find the other module(s), because it has not been told a "modulepath".
My question is this: How exactly am I supposed to start unit tests, especially ones that require access to other modules?
Install the PDK for your platform from its download page. Then re-create the module using pdk new module and pdk new class, or by following the Guide.
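For example (the module and class names below just mirror the question; pdk new module will prompt for metadata interactively):
pdk new module test
cd test
pdk new class hello_world2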
Now I come to what is probably the immediate problem in your code: it depends on a Puppet Forge module, puppetlabs/concat, but you haven't made that module available. The PDK module template comes pre-configured with puppetlabs_spec_helper to load fixtures for your module.
To tell puppetlabs_spec_helper to get it for you, you need a file .fixtures.yml with the following content:
fixtures:
  forge_modules:
    stdlib: puppetlabs/stdlib
    concat: puppetlabs/concat
Note that you also need puppetlabs/stdlib, because that is a dependency of puppetlabs/concat.
If you want to explore more fixture possibilities, please refer to puppetlabs_spec_helper's docs.
With all of this in place, and after integrating the code samples and test content you posted into the initial code skeletons provided by the PDK, your tests will all pass when you run:
$ pdk test unit
Note that I have written about all of the underlying technologies in a blog post showing how to set up rspec-puppet and more from scratch (ref), and it still appears to be the most up-to-date reference on this subject.
To read more about rspec-puppet in general, please refer to the official rspec-puppet docs site.
I need a good practice to deal with my issue.
The issue is: I need to run automated tests against a site. The site has different configurations that completely change its design (on some pages). For example, I can configure two different login pages, and I need to test them both.
First of all, I must make sure that the correct test is run against the correct configuration, so before each test I need to change the site's config. That is not good if I have a thousand tests.
So a solution that comes to mind is not to reconfigure the site each time, but to do it once and run all the tests that correspond to that configuration. But this solution doesn't seem easy to implement either.
For now, what I did is this: I created a method that runs once before all the other tests, and in this method I configure the site with the config used by the majority of the tests. All the other tests currently change the config before execution and change it back afterwards. It's not good at all.
To do so I used the NUnit 3 SetUpFixture and OneTimeSetUp attributes:
/// <summary>
/// Runs once before all the test in order to config the environment
/// </summary>
[SetUpFixture]
public class ConfigTests
{
[OneTimeSetUp]
public void RunBeforeAnyTests()
{
IWebDriver driver = new ChromeDriver();
try
{
//Here I config the stie
CommonActions actions = new CommonActions(driver);
actions.SwitchOffCombinedPaymentPage();
driver.Dispose();
}
catch (Exception)
{
driver.Dispose();
}
}
}
What I thought after this is that I'd be able to send parameters to the SetUpFixture, but first of all that's impossible, and second of all it wouldn't solve the problem, as the fixture would still just run twice and the tests would be run against the last configuration.
So, how do you deal with testing a site that has a lot of configurations?
I'd use a test run parameter from the command line (or in the .runsettings file if you are using the VS adapter). Your SetUpFixture can grab that parameter and do the initialization, and any individual fixtures that need it can grab it as well.
See the --params option to nunit3-console and the TestContext.Parameters property for accessing the values.
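To make that concrete, here is a minimal sketch of reading such a parameter in the SetUpFixture; the parameter name SiteConfig and the ConfigureSite helper are hypothetical:
using NUnit.Framework;

[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // Read the test run parameter, falling back to a default when it
        // is not supplied on the command line or in .runsettings.
        string siteConfig = TestContext.Parameters.Get("SiteConfig", "Default");

        // Hypothetical helper that applies the chosen site configuration,
        // e.g. via Selenium as in the question.
        // ConfigureSite(siteConfig);
    }
}
You would then pass the value when launching the run, e.g. something like nunit3-console YourTests.dll --params:SiteConfig=CombinedPaymentPage, and do one run per configuration.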
This answers your "first of all it's impossible" part. I didn't answer "second of all... " because I don't understand it. I'll add more if you can clarify.
I have multiple packages under a subdirectory under src/. Running the tests for each package with go test works fine.
When trying to run all tests with go test ./..., the tests run but fail.
The tests run against local database servers; each test file has global variables with DB pointers.
I tried to run the tests with -parallel 1 to prevent contention in the DB, but the tests still fail.
What can the issue be here?
EDIT: Some tests are failing on missing DB entries. I completely clear the DB before and after each test; the only reason I can think of for this happening is some contention between tests.
EDIT 2:
Each one of my test files has 2 global variables (using mgo):
var session *mgo.Session
var db *mgo.Database
It also has the following setup and teardown functions:
func setUp() {
    s, err := cfg.GetDBSession()
    if err != nil {
        panic(err)
    }
    session = s
    db = cfg.GetDB(session)
    db.DropDatabase()
}

func tearDown() {
    db.DropDatabase()
    session.Close()
}
Each test starts with setUp() and defer tearDown().
Also, cfg is:
package cfg

import (
    "labix.org/v2/mgo"
)

func GetDBSession() (*mgo.Session, error) {
    session, err := mgo.Dial("localhost")
    return session, err
}

func GetDB(session *mgo.Session) *mgo.Database {
    return session.DB("test_db")
}
EDIT 3:
I changed cfg to use a random database, and the tests passed.
It seems that the tests from multiple packages run somewhat in parallel.
Is it possible to force go test to run everything sequentially across packages?
Update: As pointed out by Gal Ben-Haim, adding the (undocumented) go test -p 1 flag builds and tests all packages serially. As put by the testflag usage message in the Go source code:
-p=n: build and test up to n packages in parallel
Old answer:
When running go test ./..., the tests of the different packages are in fact run in parallel, even if you set parallel=1 (only tests within a specific package are guaranteed to run one at a time). If it is important that the packages be tested in sequence, such as when there is database setup/teardown involved, it seems like the only way right now is to use the shell to emulate the behavior of go test ./... and force the packages to be tested one by one.
Something like this, for example, works in Bash:
find . -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test
The command first lists all the subdirectories containing *.go files. Then it uses sort -u to list each subdirectory only once (removing duplicates). Finally all the subdirectories containing go files get fed to go test via xargs. The -P1 indicates that at most one command is to be run at a time.
Unfortunately, this is a lot uglier than just running go test ./..., but it might be acceptable if it is put into a shell script or aliased into a function that's more memorable:
function gotest(){ find $1 -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test; }
Now all tests can be run in the current directory by calling:
gotest .
Apparently running go test -p 1 runs everything sequentially (including the build); I hadn't seen this argument in go help test or go help testflag.
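For example:
go test -p 1 ./...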
I am assuming, since the packages pass individually, that in this situation you are also dropping the DB before those tests as well.
Therefore it sounds like the state of the DB for each package test is expected to be empty.
So between each set of package tests the DB must be emptied. There are two ways around this; not knowing your entire situation, I will briefly explain both options:
Option 1. Test Setup
Add an init() function to the start of each package's _test file, in which you put the processing to remove the DB. This will be run before the init() function of the actual package:
func init() {
    fmt.Println("INIT TEST")
    // My test state initialization
    // Remove database contents
}
Assuming the package also had a similar print line, you would see the following in the output (note that stdout output is only displayed when a test fails or you supply the -v option):
INIT TEST
INIT PACKAGE
Option 2. Mock the database
Create a mock for the database (unless that is specifically what you are testing). The mock db can always act like the DB is blank for the starting state of each test.
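A minimal sketch of that idea in Go, assuming you hide the database behind an interface; the UserStore and fakeUserStore names are hypothetical, and the real implementation would wrap mgo:
package store

import "errors"

// UserStore is the interface the production code would implement with mgo.
type UserStore interface {
    FindName(id string) (string, error)
}

// fakeUserStore is an in-memory test double; every test starts with an
// empty map, so package tests can no longer step on each other's data.
type fakeUserStore struct {
    names map[string]string
}

func (f *fakeUserStore) FindName(id string) (string, error) {
    name, ok := f.names[id]
    if !ok {
        return "", errors.New("user not found")
    }
    return name, nil
}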
Please try out the following github repository.
https://github.com/appleboy/golang-testing
Copy coverage.sh to /usr/local/bin/coverage and change its permissions:
$ curl -fsSL https://raw.githubusercontent.com/appleboy/golang-testing/master/coverage.sh -o /usr/local/bin/coverage
$ chmod +x /usr/local/bin/coverage
I am running the latest version of Codeception on a WAMP platform. My acceptance test is very basic, but it works fine (see below):
$I = new WebGuy($scenario);
$I->wantTo('Log in to the website');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin#admin.com');
$I->fillField('password','password');
$I->click('Login');
In a nutshell, it checks that the page is 'auth/login', fills out 2 form fields, and clicks the login button. This works without any problems.
Here is my identical functional test:
$I = new TestGuy($scenario);
$I->wantTo('perform actions and see result');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin#admin.com');
$I->fillField('password','password');
$I->click('Login');
When I run this from the command line I get the following error (not the full error but enough to understand the problem):
1) Couldn't <-[35;1mperform actions and see result<-
[0m in <-[37;1LoginCept.php<-[0m <-41;37mRuntimeException:
Call to undefined method TestGuy::amOnPage<-[0m.......
My acceptance suite has the 'PhpBrowser' & 'WebHelper' modules enabled, and the functional suite has 'Filesystem' & 'TestHelper' enabled (within the acceptance.suite.yml & functional.suite.yml files).
Obviously the amOnPage() function is the problem - however, I am led to believe amOnPage() should work in both acceptance and functional tests? Or am I wrong? Also, can someone explain what the numbers mean, e.g. the '<-[35;1m' that appear?
UPDATE: I tried adding the 'WebHelper' module to the functional.suite.yml but I do not see the amOnPage() being auto-generated in the TestGuy.php file - any ideas?
My config files are below:
WebGuy
class_name: WebGuy
modules:
    enabled:
        - PhpBrowser
        - WebHelper
    config:
        PhpBrowser:
            url: 'http://v3.localhost/'
TestGuy
class_name: TestGuy
modules:
    enabled: [Filesystem, TestHelper, WebHelper]
Well, this is because TestGuy doesn't have those methods. All of those methods are in the PhpBrowser and Selenium2 modules, or others that inherit from Codeception's Mink implementation. So you need to add PhpBrowser to the modules section of your functional suite, and then run the codecept build command.
Also note that it is better to use the Selenium2 module for acceptance tests and PhpBrowser for functional tests. The main idea is that acceptance (Selenium2) tests should cover those parts of your application that cannot be covered by functional (PhpBrowser) tests, for example some JS interactions.
About '<-[35;1m': those are ANSI color escape codes; run the script with codecept run --no-colors to remove them from the console output.