How to unit test a Gradle task? - unit-testing

I want to test the logic of my build.gradle script.
An excerpt of the script:
(...other tasks and methods...)
def readCustomerFile(File file) {
    def schema = <prepare schema>
    def report = schema.validate(JsonLoader.fromFile(file))
    if (!report.success) {
        throw new GradleException("File is not valid! " + report.toString())
    }
    return new groovy.json.JsonSlurper().parse(file)
}
task readFiles {
    mustRunAfter 'prepareCustomerProject'
    doLast {
        if (System.env.CUSTOMER_FILE_OVERRIDE) {
            // wrap the environment value in a File so it matches readCustomerFile(File)
            project.ext.customerFileData = readCustomerFile(new File(System.env.CUSTOMER_FILE_OVERRIDE))
        }
        else if (customerFile.exists()) {
            project.ext.customerFileData = readCustomerFile(customerFile)
        }
        else {
            throw new GradleException("Customer File is not provided! It is expected to be in CUSTOMER_FILE_OVERRIDE variable or in ${customerFile}")
        }
    }
}
(...other tasks and methods...)
I would like to test both the method and the task itself.
The 'prepareCustomerProject' task is quite lengthy in execution, but in the 'real' setup it does the magic necessary to set properties needed by more tasks than just the one above.
For testing I only want to, e.g., run the readFiles task and validate the results, making sure that either the property on the project was set correctly or an exception was thrown.
I have looked into the Gradle TestKit, but it is not what I need, as I was unable to find anything that would allow me to, e.g., inspect the project.
I have seen the Guide for Testing Gradle Scripts, but that post is quite old and does not address my need / problem. I have also had a look at the Gradle docs on Testing Build Logic with TestKit, but GradleRunner does not seem to offer any real inspection or project-preparation abilities.
Plus, it would make us use JUnit, effectively adding a whole class structure only for testing purposes. Not clean, and hard to maintain.
Googling gradle + test + task and other variations finds tons of ways of running xUnit tests, but that's not what I need here.
Summarizing, what I need is:
test Gradle tasks and methods from build.gradle in isolation (TestKit will run a task with all its dependencies; I don't want this)
prepare the project before the test run (TestKit does not seem to allow this)
verify task / method output
Has anyone successfully done this?
Or am I approaching this the wrong way?
I'm fairly new to Gradle and searching for good options to test my build scripts; one possible direction is sketched below.
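For reference, one direction that can cover the "inspect the project" requirement is Gradle's ProjectBuilder test fixture. This sketch is my own addition, not part of the original question: it assumes the readFiles logic has first been moved out of build.gradle into a plugin or buildSrc class, and the plugin class name is hypothetical.

import org.gradle.api.Project
import org.gradle.testfixtures.ProjectBuilder

// Build an in-memory project; no real build is started, so no dependency chain runs.
Project project = ProjectBuilder.builder().build()

// Prepare the project state the task expects.
def customerFile = new File(project.projectDir, 'customer.json')
customerFile.text = '{"name": "test"}'
project.ext.customerFile = customerFile

// Apply the (hypothetical) plugin that registers the readFiles task.
project.pluginManager.apply(com.example.CustomerFilePlugin)

// Execute only this task's actions, bypassing 'prepareCustomerProject' entirely.
def task = project.tasks.getByName('readFiles')
task.actions.each { it.execute(task) }

// Inspect the resulting project state directly.
assert project.ext.customerFileData != null

Running the actions directly is what keeps this a unit test: mustRunAfter and the rest of the build are never touched.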

Related

How can I unit test a MassTransit consumer that builds and executes a routing slip?

In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains two activities. This is built and executed in a consumer, and it all ties back to an Automatonymous state machine. It all works great, albeit with a few final tweaks needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());
    SetupRoutingSlipActivities(builder, context);
    var routingSlip = builder.Build();
    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method as I thought it would help me write tests to make sure the right activities were being added, and it simply looks like this:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );
    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to just write tests for SetupRoutingSlipActivities using a Moq mock builder and a MassTransit InMemoryTestHarness, but I found that the AddActivity method is not virtual, so I can't verify it like this:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the weird data in the code examples as I just put up a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to test that the RoutingSlipBuilder was created, but as that instance is created in the Consume method, I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo, but nothing stood out.
Look at how the Courier tests are written; there are a number of test fixtures available for testing routing slip activities. While they aren't well documented, the unit tests are a working demonstration of how the test fixtures are used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs
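As an aside, and as my own sketch rather than part of the answer above: Moq can only intercept virtual or interface members, so a common workaround is to put a thin interface in front of RoutingSlipBuilder and have the consumer depend on that. All names below are hypothetical; RoutingSlip and RoutingSlipBuilder come from MassTransit.Courier.

// Interface the consumer depends on; its members are mockable by Moq.
public interface IRoutingSlipBuilder
{
    void AddActivity(string name, Uri executeAddress, object arguments);
    RoutingSlip Build();
}

// Thin adapter that forwards to the real MassTransit builder.
public class RoutingSlipBuilderAdapter : IRoutingSlipBuilder
{
    private readonly RoutingSlipBuilder _inner;
    public RoutingSlipBuilderAdapter(RoutingSlipBuilder inner) { _inner = inner; }

    public void AddActivity(string name, Uri executeAddress, object arguments)
        => _inner.AddActivity(name, executeAddress, arguments);

    public RoutingSlip Build() => _inner.Build();
}

// In a test the interface can then be verified where the concrete class could not:
// var builder = new Mock<IRoutingSlipBuilder>();
// consumer.SetupRoutingSlipActivities(builder.Object, context);
// builder.Verify(x => x.AddActivity(nameof(ActivityOne), It.IsAny<Uri>(), It.IsAny<object>()));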

How to test Terraform files

I'm defining my infrastructure in Terraform files. I like Terraform a lot, but I'm having trouble figuring out how to test it. I have awspec, which is really nice and runs RSpec-like tests against the result of your build via the AWS API. But is there a way to do unit tests, for example on the results of terraform plan? What kind of workflow are others using with Terraform?
I'm going to expand on Begin's answer with more information about Kitchen-Terraform.
Kitchen-Terraform is a set of open source plugins that run within Test-Kitchen; they are meant to go into your Terraform module repository to test that module's functionality before it is used in a repository that creates the resources. Please feel free to check the documentation of those two projects for more details, but I will go through my recommendations for integration testing your Terraform code.
Install Ruby and Terraform.
For this example, the Terraform module repo will be called: my_terraform_module
mkdir -p my_terraform_module
cd my_terraform_module
mkdir -p test/integration/kt_suite/controls \
    test/fixtures/tf_module/
Create a Gemfile:
source "https://rubygems.org/" do
  gem "kitchen-terraform"
end
Install the necessary components (this uses the Gemfile to resolve the dependencies of kitchen-terraform):
gem install bundler
bundle install
Create the Test-Kitchen file .kitchen.yml - this ties together the testing framework, Test-Kitchen, and Kitchen-Terraform:
---
driver:
  name: terraform
  root_module_directory: test/fixtures/tf_module
  parallelism: 4

provisioner:
  name: terraform

transport:
  name: ssh

verifier:
  name: terraform
  groups:
    - name: basic
      controls:
        - file_check
        - state_file

platforms:
  - name: terraform

suites:
  - name: kt_suite
Your Terraform code should be at the root of the Terraform module repository such as:
my_terraform_module/
|-- main.tf
Example code that can go in main.tf
resource "null_resource" "create_file" {
  provisioner "local-exec" {
    command = "echo 'this is my first test' > foobar"
  }
}
Then we reference the Terraform module just as we would in live Terraform repos, but in a test fixture instead, in the file test/fixtures/tf_module/main.tf:
module "kt_test" {
  source = "../../.."
}
From there you can run a Terraform apply, but it's done a little differently with Kitchen-Terraform and Test-Kitchen: you run a converge, which helps keep track of state and a couple of other things.
bundle exec kitchen converge
Now that you've seen your Terraform code do an apply, we need to test it. We can test the actual resources that were created, which is like an integration test, but we can also test the state file, which is a semi unit test; I am not aware of anything that can currently do unit tests against Terraform's HCL code.
Create an InSpec default profile file: test/integration/kt_suite/inspec.yml
---
name: default
Create an InSpec control for your integration testing in test/integration/kt_suite/controls/basic.rb - I'm using a test that matches the example Terraform code from main.tf above:
# frozen_string_literal: true

control "file_check" do
  describe file('.kitchen/kitchen-terraform/kt-suite-terraform/foobar') do
    it { should exist }
  end
end
And this is an example test of pulling information from the state file and testing whether something exists in it. This is a basic one, but you can definitely expand on this example.
# frozen_string_literal: true

terraform_state = attribute "terraform_state", {}

control "state_file" do
  describe "the Terraform state file" do
    subject do json(terraform_state).terraform_version end
    it "is accessible" do is_expected.to match /\d+\.\d+\.\d+/ end
  end
end
Then run Inspec controls with Test-Kitchen and Kitchen-Terraform:
bundle exec kitchen verify
I took a lot of this from the getting started guide and some of the tutorials over here: https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
We recently open sourced Terratest, our swiss army knife for testing infrastructure code.
Today, you're probably testing all your infrastructure code manually by deploying, validating, and undeploying. Terratest helps you automate this process:
Write tests in Go.
Use helpers in Terratest to execute your real IaC tools (e.g., Terraform, Packer, etc.) to deploy real infrastructure (e.g., servers) in a real environment (e.g., AWS).
Use helpers in Terratest to validate that the infrastructure works correctly in that environment by making HTTP requests, API calls, SSH connections, etc.
Use helpers in Terratest to undeploy everything at the end of the test.
Here's an example test for some Terraform code:
terraformOptions := &terraform.Options{
    // The path to where your Terraform code is located
    TerraformDir: "../examples/terraform-basic-example",
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created
// (deferred before the apply so cleanup runs even if the apply fails)
defer terraform.Destroy(t, terraformOptions)

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)

// Run `terraform output` to get the value of an output variable
instanceUrl := terraform.Output(t, terraformOptions, "instance_url")

// Verify that we get back a 200 OK with the expected text
// It can take a minute or so for the Instance to boot up, so retry a few times
expected := "Hello, World"
maxRetries := 15
timeBetweenRetries := 5 * time.Second
http_helper.HttpGetWithRetry(t, instanceUrl, 200, expected, maxRetries, timeBetweenRetries)
These are integration tests and, depending on what you're testing, they can take 5-50 minutes. It's not fast (though using Docker and test stages you can speed some things up), and you'll have to work to make the tests reliable, but it is well worth the time.
Check out the Terratest repo for docs and lots of examples of various types of infrastructure code and the corresponding tests for them.
From my research this is a tough issue. Terraform is not meant to be a full-featured programming language, and since you declare what resources you want rather than how to build them, unit testing doesn't really give you assurance that you are building resources the way you'd like without actually running an apply. This makes attempts at unit testing feel more like linting to me.
However, you could parse your HCL files with something like pyhcl, or parse your plan files; in my experience this was a lot of work for little benefit (but I could be missing an easier method!).
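To illustrate the pyhcl idea, here is a minimal sketch (my addition, not from the answer; the file path and the tag check are hypothetical):

import hcl  # pyhcl

# Parse the HCL file into plain Python dicts, then assert on its structure.
with open("main.tf") as fp:
    config = hcl.load(fp)

# Lint-style check: every aws_s3_bucket resource must declare tags.
buckets = config.get("resource", {}).get("aws_s3_bucket", {})
for name, body in buckets.items():
    assert "tags" in body, "bucket %s is missing tags" % name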
Here are some alternatives if you want to test the results of your terraform apply runs:
kitchen-terraform is a tool for writing Test Kitchen specs for your infrastructure.
kitchen-verifier-awspec helps bring together awspec and kitchen-terraform, although I have not used it personally.
If you are using AWS, I have found AWS Config to be able to provide a lot of the same benefits as other infrastructure testing tools, without as much setup/maintenance, although it is fairly new and I have not used it extensively.
Also, if you are paying for Terraform Premium, you get access to Sentinel, which seems to provide many of the same benefits as AWS Config; however, I have not used it personally.
In addition to the other answers, I will add my two cents. I was not very happy using Go with Terratest, although it works perfectly well; Go is just not my favorite programming language. I looked for frameworks in Java and found terraform-maven. At first glance I only found examples in Groovy, but since Groovy runs on the JVM, it is feasible to implement the same examples in Java.
I translated part of the S3PreProvisionSpec.groovy to Java. It is testing this main.tf file.
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class S3PreProvisionTest {

    private final String TF_CHANGE = "change";
    private final String TF_AFTER = "after";
    private final String TF_TAGS = "tags";

    private final Map<String, String> mandatoryTags = Map.of(
        "application_id", "cna",
        "stack_name", "stacked",
        "created_by", "f.gutierrez@yieldlab.de"
    );

    private Terraform terraform;
    private TfPlan tfplan;

    @BeforeAll
    void setup() {
        terraform = new Terraform().withRootDir("s3_pre_post_demo")
                // .withProperties(Map.of("noColor", "true"))
                ;
        tfplan = terraform.initAndPlan();
    }

    @AfterAll
    void cleanup() {
        terraform.destroy();
    }

    @Test
    void mandatoryTagsForS3Resources() {
        List<Map> s3Bucket = tfplan.getResourcesByType("aws_s3_bucket");
        System.out.println("=========================");
        s3Bucket.forEach(map -> {
            Map tfChangeMap = (Map) map.get(TF_CHANGE);
            Map tfAfterMap = (Map) tfChangeMap.get(TF_AFTER);
            Map tfTagsMap = (Map) tfAfterMap.get(TF_TAGS);
            assertEquals(3, tfTagsMap.size());
            mandatoryTags.forEach((k, v) -> {
                assertEquals(v, tfTagsMap.get(k));
            });
            try {
                JSONObject jsonObject = new JSONObject(map);
                JSONObject jsonChange = jsonObject.getJSONObject(TF_CHANGE);
                JSONObject jsonAfter = jsonChange.getJSONObject(TF_AFTER);
                JSONObject jsonTags = jsonAfter.getJSONObject(TF_TAGS);
                System.out.println(">>>>>>>>>>>>>>>>>>>> " + jsonTags.toString());
                mandatoryTags.forEach((k, v) -> {
                    try {
                        assertEquals(v, jsonTags.getString(k));
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                });
            } catch (JSONException e) {
                e.printStackTrace();
            }
        });
    }
}
One approach is to output the plan to a file using -out=tempfile, then run a script to validate whatever you're trying to check, and if everything passes, pass the file on to the apply command.
Look at -out here:
https://www.terraform.io/docs/commands/plan.html
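A minimal sketch of that workflow (validate_plan.sh is a hypothetical placeholder for whatever check you run against the plan file):

terraform plan -out=tempfile
./validate_plan.sh tempfile && terraform apply tempfile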
You can use github.com/palantir/tfjson to parse a .plan file to JSON.
There is an issue at the moment that gives an "unknown plan file version: 2" error. This is because the vendored version of terraform is too old.
The fix is:
go get github.com/palantir/tfjson
cd $GOPATH/src/github.com/palantir/tfjson
rm -rf vendor
go get -v ./...
There is then an error in ../../hashicorp/terraform/config/testing.go. To fix it, just change the line
t.Helper()
to
//t.Helper()
Run go get again and then go install
go get -v ./...
go install ./...
You should then be able to do the following, which will produce JSON output.
terraform plan --out=terraform.plan
tfjson terraform.plan

How to configure the environment before running automated tests?

I need a good practice to deal with my issue.
The issue: I need to run automated tests against a site. The site has different configurations that completely change its design (on some pages). For example, I can configure two different login pages, and I need to test them both.
First of all, I must make sure that the correct test is run against the correct configuration, so before each test I need to change the site's config. That is not good if I have a thousand tests.
So a solution that comes to mind is not to reconfigure the site each time, but to do it once and run all the tests that correspond to that configuration. But this solution doesn't seem easy to implement.
For now, what I did is: I created a method that runs once before all the other tests, and in this method I configure the site with the config used by the majority of the tests. All the other tests, for now, change the config before execution and change it back afterwards. It's not good at all.
To do so I used NUnit3 SetUpFixture and OneTimeSetUp attributes:
/// <summary>
/// Runs once before all the other tests in order to configure the environment
/// </summary>
[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            // Here I configure the site
            CommonActions actions = new CommonActions(driver);
            actions.SwitchOffCombinedPaymentPage();
            driver.Dispose();
        }
        catch (Exception)
        {
            driver.Dispose();
        }
    }
}
What I thought after this is that I'd be able to send parameters to the SetUpFixture, but first of all that's impossible, and second of all it wouldn't solve the problem, as the fixture would just run twice and the tests would run against the last configuration.
So, how do you deal with testing a site that has a lot of configurations?
I'd use a test run parameter from the command line (or in the .runsettings file if you are using the VS adapter). Your SetUpFixture can grab that parameter and do the initialization, and any individual fixtures that need it can grab it as well.
See the --params option to nunit3-console and the TestContext.TestParameters property for accessing the values.
This answers your "first of all it's impossible" part. I didn't answer "second of all..." because I don't understand it. I'll add more if you can clarify.
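For illustration, a sketch of what that could look like (my addition, not from the answer; the parameter name and site-specific helpers are hypothetical, and current NUnit 3 releases expose the values as TestContext.Parameters). Run it as, e.g., nunit3-console Tests.dll --params=SiteConfig=combined:

// Requires NUnit.Framework, OpenQA.Selenium, OpenQA.Selenium.Chrome.
[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // Read the test run parameter, with a fallback default.
        string siteConfig = TestContext.Parameters.Get("SiteConfig", "default");

        IWebDriver driver = new ChromeDriver();
        try
        {
            // Configure the site once, according to the requested configuration.
            var actions = new CommonActions(driver);
            if (siteConfig != "combined")
                actions.SwitchOffCombinedPaymentPage();
        }
        finally
        {
            driver.Dispose();
        }
    }
}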

How does TeamCity know when an xUnit.net test is run?

I have always wondered how TeamCity recognizes that it is running xUnit.net tests and how it knows to put a separate "Test" tab in the build overview after a build step runs. Is the xUnit console runner somehow responsible for that?
I finally found out what is actually going on. TeamCity has its own service message API: a runner writes specially formatted lines to standard output, and TeamCity picks them up. I dug this code snippet out of the xUnit source code, and it makes it clear:
https://github.com/xunit/xunit/blob/v1/src/xunit.console/RunnerCallbacks/TeamCityRunnerCallback.cs
public override void AssemblyStart(TestAssembly testAssembly)
{
    Console.WriteLine(
        "##teamcity[testSuiteStarted name='{0}']",
        Escape(Path.GetFileName(testAssembly.AssemblyFilename))
    );
}
...code omitted for clarity

Test framework for component testing

I am looking for a test framework that suits my requirements. The following are the steps that I need to perform during automated testing:
SetUp (some input files need to be read or copied into specific folders)
Execute (run the standalone executable)
Tear Down (clean up to bring the system back to its old state)
Apart from this, I also want some intelligence to make sure that if a .cc file changes, all the tests that can validate the change are run.
I am evaluating PyUnit and cppunit with SCons for this. I thought I'd ask this question to make sure I am headed in the right direction. Can you suggest any other test frameworks? And what other requirements should be considered when selecting the right test framework?
Try googletest, AKA gTest; it is no worse than any other unit test framework, and it can beat some in ease of use. It is not exactly the integration-testing tool you are looking for, but it can easily be applied in most cases. This Wikipedia page might also be useful for you.
Here is a copy of a sample on the gTest project page:
#include <gtest/gtest.h>

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if their
  // bodies are empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  virtual ~FooTest() {
    // You can do clean-up work that doesn't throw exceptions here.
  }

  // If the constructor and destructor are not enough for setting up
  // and cleaning up each test, you can define the following methods:

  virtual void SetUp() {
    // Code here will be called immediately after the constructor (right
    // before each test).
  }

  virtual void TearDown() {
    // Code here will be called immediately after each test (right
    // before the destructor).
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace (closing brace was missing in the original sample)
SCons can take care of rebuilding your .cc files when they change; gTest can be used to set up and tear down your tests.
I can only add that we are using gTest in some cases, and a custom in-house test automation framework in almost all others. It is often the case with such tools that it is easier to write your own than to try to adjust and tweak another one to match your requirements.
One good option IMO, and it is something our test automation framework is moving towards, is using nosetests coupled with a library of common routines (like start/stop services, get the status of something, enable/disable logging in certain components, etc.). This gives you a flexible system that is also fairly easy to use. And since it uses Python rather than C++, more people can be busy creating test cases, including QEs, who don't necessarily need to be able to write C++. A small sketch follows.
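As an illustration of that shape (my sketch, not from the answer; the binary name, paths, and helpers are hypothetical), a nose-style test module maps naturally onto the SetUp / Execute / Tear Down steps from the question:

import os
import shutil
import subprocess

def setup_module():
    # SetUp: copy the input files the component expects into place.
    os.makedirs("work", exist_ok=True)
    shutil.copy("fixtures/input.dat", "work/input.dat")

def teardown_module():
    # Tear Down: bring the system back to its old state.
    shutil.rmtree("work", ignore_errors=True)

def test_standalone_run():
    # Execute: run the standalone binary and verify the result.
    result = subprocess.run(["./my_component", "work/input.dat"],
                            capture_output=True, text=True)
    assert result.returncode == 0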
After reading the article http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle some time ago, I went for CxxTest.
Once you have the thing set up (you need to install Python, for instance), it's pretty easy to write tests (I was completely new to unit testing).
I use it at work, integrated as a Visual Studio project in my solution. It produces clickable output when a test fails, and the tests are built and run each time I build the solution.