Run a pre-commit hook only if all other hooks are successful - pre-commit.com

I have a pre-commit setup with several pretty standard repos (for a Python project anyways), and one heavily magical project-specific action.
Something like this:
repos:
  - repo: https://github.com/timothycrosley/isort
    ...
  - repo: https://github.com/psf/black
    ...
  - repo: https://gitlab.com/pycqa/flake8
    ...
  - repo: local
    hooks:
      - id: local_project_specific_magic
        name: local-magic-script
        entry: magic_script.sh
        language: script
This all runs fine when all of the checks are successful.
What I need to achieve is to have the final local_project_specific_magic hook not execute if any of the previous hooks fail. Is this doable?
I have tried to add fail_fast: true and that seems to work, but it also prevents other hooks from running if any of them fail. For example, even if isort fixes some imports, I still want black to do its thing.

fail_fast: true is as close as you're going to get without significant surgery
you could imagine though that each other hook does something like:
entry: bash -c 'black "$@" || touch .fail' --
and then your script does something like if [ -f .fail ]; then echo 'some other hook failed' && exit 1; fi
you would also need an always_run: true hook at the beginning to make sure .fail doesn't exist as well (rm -f .fail)
but this all sounds like a big, unmaintainable hack. I suspect you have an XY problem as your requirement seems extremely strange -- perhaps elaborate on why you would want this setup?
disclaimer: I created pre-commit

I had a very similar task and the answer by @Anthony Sottile gave me a new idea.
My solution to this problem is to use the log_file keyword to generate a pre-commit.log. This file only exists if the black or isort pre-commit hook (or both) fails.
My .pre-commit-config.yaml finally looks like this:
repos:
  - repo: local
    hooks:
      - id: cleaner
        name: cleaner
        entry: bash clean_logfiles.sh
        language: system
  - repo: https://github.com/ambv/black
    hooks:
      - id: black
        log_file: pre-commit.log
  - repo: https://gitlab.com/pycqa/flake8
    hooks:
      - id: flake8
        log_file: pre-commit.log
  - repo: local
    hooks:
      - id: local_project_specific_magic
        name: local-magic-script
        entry: bash magic_script.sh
        language: system
In my local magic_script.sh I added this at the top:
if [ -f "pre-commit.log" ]; then
    echo "Error in earlier pre-commit! We skip magic_script.sh."
    exit 1
else
    # some more magic_script.sh
fi
This way I can select which pre-commit hooks matter for the final step.
To prevent the next run from failing because of old log files, I added the cleaner hook, which just removes any stale log file.
The code is very basic and looks like this:
if [ -f "pre-commit.log" ]; then
    echo "delete pre-commit.log"
    rm "pre-commit.log"
fi
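To exercise the whole chain without making a commit, you can run every hook against the repository (standard pre-commit usage):
    pre-commit run --all-files   # runs cleaner, black, flake8, then the magic script, in config order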

Related

Gulp + Sourcemaps + PostCSS(Autoprefixer + Minification via CSSnano)

Basically, I want sourcemaps available for both the unminified and minified flavors of my site.css file. I'd like my end result to be:
site.css
site.min.css
site.css.map
site.css.min.map
Currently, I only get:
site.css
site.min.css
site.css.min.map
I know my gulp script is wrong, but I don't know how to fix it. I need sourcemaps to write a sourcemap for site.css before site.min.css gets created. HALP!
And thank you.
gulp.task('scss', gulp.series('bootstrap:scss', function compileScss() {
  return gulp.src(['./site/assets/scss/*.scss'])
    .pipe(sourcemaps.init())
    .pipe(sass.sync({
      outputStyle: 'expanded'
    }).on('error', sass.logError))
    .pipe(gulp.dest('./site/dist/css')) // outputs site.css
    .pipe(postcss([autoprefixer(), cssnano()]))
    .pipe(sourcemaps.write('.'))
    .pipe(rename({
      suffix: '.min'
    }))
    .pipe(gulp.dest('./site/dist/css')) // outputs site.min.css
}));
You only need sourcemaps for your unminified version. I would introduce a NODE_ENV to switch minification and sourcemaps on and off, then use gulp-if to check whether you're in a development or production environment.
Alternatively, you could have separate build and dev tasks.
Using process.env.NODE_ENV means you can reuse the same switch in postcss.config.js files etc. too.
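A rough sketch of that idea from the command line (the scss task is the one from the question; the gulp-if check on process.env.NODE_ENV inside the task is the part you would add):
    NODE_ENV=development gulp scss   # keep sourcemaps, skip minification
    NODE_ENV=production gulp scss    # autoprefix + cssnano, write the .min output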
As an aside, I found this really good because it shows how you can use a gulp.babel.js file with Gulp 4. I was using 3.9.1, being reluctant to upgrade until this week, but this helped immensely with understanding the changes from v3 to v4.

How to test Terraform files

I'm defining my infrastructure in Terraform files. I like Terraform a lot, but I'm having trouble figuring out how to test. I have awspec, which is really nice and runs RSpec-like tests against the result of your build via the AWS API. But is there a way to do unit tests, like on the results of terraform plan? What kind of workflow are others using with Terraform?
I'm going to expand on Begin's answer with more information about Kitchen-Terraform.
Kitchen-Terraform is a set of open source plugins that run within Test-Kitchen; they are meant to go into your Terraform module repository to test that module's functionality before it is used in a repository that actually creates the resources. Please feel free to check the documentation of those two projects for more details, but I will go through my recommendations for integration testing your Terraform code.
Install Ruby and Terraform.
For this example, the Terraform module repo will be called: my_terraform_module
mkdir -p my_terraform_module
cd my_terraform_module
mkdir -p test/integration/kt_suite/controls \
test/fixtures/tf_module/
Create a Gemfile:
source "https://rubygems.org/" do
gem "kitchen-terraform"
end
Install the necessary components (uses the Gemfile for the dependencies of kitchen-terraform)
gem install bundler
bundle install
Create the Test-Kitchen file .kitchen.yml - this brings together the testing framework, Test-Kitchen, and Kitchen-Terraform:
---
driver:
  name: terraform
  root_module_directory: test/fixtures/tf_module
  parallelism: 4
provisioner:
  name: terraform
transport:
  name: ssh
verifier:
  name: terraform
  groups:
    - name: basic
      controls:
        - file_check
        - state_file
platforms:
  - name: terraform
suites:
  - name: kt_suite
Your Terraform code should be at the root of the Terraform module repository such as:
my_terraform_module/
|-- main.tf
Example code that can go in main.tf
resource "null_resource" "create_file" {
  provisioner "local-exec" {
    command = "echo 'this is my first test' > foobar"
  }
}
Then we reference the Terraform module just like we would in Terraform live repos - but in a test fixture instead in this file: test/fixtures/tf_module/main.tf
module "kt_test" {
  source = "../../.."
}
From there you can run a Terraform apply, but it's done a little differently with Kitchen-Terraform and Test-Kitchen: you run a converge, which also helps keep track of state and a couple of other items.
bundle exec kitchen converge
Now that you've seen your Terraform code do an apply, we need to test it. We can test the actual resources that were created, which is like an integration test, but we can also test the state file, which is closer to a unit test; I am not aware of anything that can currently do unit tests against the HCL code of Terraform itself.
Create an inspec default profile file: test/integration/kt_suite/inspec.yml
---
name: default
Create an Inspec control for your integration testing: test/integration/kt_suite/controls/basic.rb - I'm using a test for the example Terraform code I used earlier for the main.tf
# frozen_string_literal: true

control "file_check" do
  describe file('.kitchen/kitchen-terraform/kt-suite-terraform/foobar') do
    it { should exist }
  end
end
And this is an example test that pulls information from the state file and tests whether something exists in it. This is a basic one, but you can definitely expand on this example.
# frozen_string_literal: true

terraform_state = attribute "terraform_state", {}

control "state_file" do
  describe "the Terraform state file" do
    subject do json(terraform_state).terraform_version end
    it "is accessible" do is_expected.to match /\d+\.\d+\.\d+/ end
  end
end
Then run Inspec controls with Test-Kitchen and Kitchen-Terraform:
bundle exec kitchen verify
I took a lot of this from the getting started guide and some of the tutorials over here: https://newcontext-oss.github.io/kitchen-terraform/getting_started.html
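For completeness, two more standard Test-Kitchen actions are useful here (a sketch, nothing Kitchen-Terraform-specific):
    bundle exec kitchen destroy    # tear the test fixture back down (terraform destroy)
    bundle exec kitchen test       # or run the whole create/converge/verify/destroy cycle in one go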
We recently open sourced Terratest, our swiss army knife for testing infrastructure code.
Today, you're probably testing all your infrastructure code manually by deploying, validating, and undeploying. Terratest helps you automate this process:
Write tests in Go.
Use helpers in Terratest to execute your real IaC tools (e.g., Terraform, Packer, etc.) to deploy real infrastructure (e.g., servers) in a real environment (e.g., AWS).
Use helpers in Terratest to validate that the infrastructure works correctly in that environment by making HTTP requests, API calls, SSH connections, etc.
Use helpers in Terratest to undeploy everything at the end of the test.
Here's an example test for some Terraform code:
terraformOptions := &terraform.Options{
    // The path to where your Terraform code is located
    TerraformDir: "../examples/terraform-basic-example",
}

// At the end of the test, run `terraform destroy` to clean up any resources that were created
// (deferred before the apply so cleanup still runs if the apply fails partway)
defer terraform.Destroy(t, terraformOptions)

// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)

// Run `terraform output` to get the value of an output variable
instanceUrl := terraform.Output(t, terraformOptions, "instance_url")

// Verify that we get back a 200 OK with the expected text
// It can take a minute or so for the Instance to boot up, so retry a few times
expected := "Hello, World"
maxRetries := 15
timeBetweenRetries := 5 * time.Second
http_helper.HttpGetWithRetry(t, instanceUrl, 200, expected, maxRetries, timeBetweenRetries)
These are integration tests, and depending on what you're testing, can take 5 - 50 minutes. It's not fast (though using Docker and test stages, you can speed some things up), and you'll have to work to make the tests reliable, but it is well worth the time.
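Because of those run times, you will usually want to raise Go's default 10-minute test timeout when running a Terratest suite, e.g.:
    go test -v -timeout 90m ./...   # -timeout raises the default 10m limit; -v streams progress as the test runs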
Check out the Terratest repo for docs and lots of examples of various types of infrastructure code and the corresponding tests for them.
From my research, this is a tough issue. Since Terraform is not meant to be a full-featured programming language, and you declare what resources you want rather than how to build them, trying to unit test doesn't really give you assurance that you are building resources the way you'd like without actually running an apply. This makes attempts at unit testing feel more like linting to me.
However, you could parse your HCL files with something like pyhcl, or parse your plan files; in my experience this was a lot of work for little benefit (but I could be missing an easier method!).
Here are some alternatives if you want to test the results of your terraform applies:
kitchen-terraform is a tool for writing Test Kitchen specs for your infrastructure.
kitchen-verifier-awspec helps bring together awspec and kitchen-terraform, although I have not used it personally.
If you are using AWS, I have found AWS Config able to provide a lot of the same benefits as other infrastructure testing tools, without as much setup/maintenance, although it is fairly new and I have not used it extensively.
Also, if you are paying for Terraform Premium you get access to Sentinel, which seems to provide a lot of similar benefits to AWS Config; however, I have not used it personally.
In addition to the other answers, I will add my two cents. I was not very happy using Go with Terratest, although it works perfectly well; it is just that Go is not my favorite programming language. I looked for some frameworks in Java and found terraform-maven. At first glance I only found examples in Groovy, but since Groovy runs on the JVM, it is feasible to implement the same examples in Java.
I translated part of the S3PreProvisionSpec.groovy to Java. It is testing this main.tf file.
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class S3PreProvisionTest {

    private final String TF_CHANGE = "change";
    private final String TF_AFTER = "after";
    private final String TF_TAGS = "tags";

    private final Map<String, String> mandatoryTags = Map.of(
            "application_id", "cna",
            "stack_name", "stacked",
            "created_by", "f.gutierrez@yieldlab.de"
    );

    private Terraform terraform;
    private TfPlan tfplan;

    @BeforeAll
    void setup() {
        terraform = new Terraform().withRootDir("s3_pre_post_demo")
                // .withProperties(Map.of("noColor", "true"))
                ;
        tfplan = terraform.initAndPlan();
    }

    @AfterAll
    void cleanup() {
        terraform.destroy();
    }

    @Test
    void mandatoryTagsForS3Resources() {
        List<Map> s3Bucket = tfplan.getResourcesByType("aws_s3_bucket");
        System.out.println("=========================");

        s3Bucket.forEach(map -> {
            Map tfChangeMap = (Map) map.get(TF_CHANGE);
            Map tfAfterMap = (Map) tfChangeMap.get(TF_AFTER);
            Map tfTagsMap = (Map) tfAfterMap.get(TF_TAGS);

            assertEquals(3, tfTagsMap.size());
            mandatoryTags.forEach((k, v) -> {
                assertEquals(v, tfTagsMap.get(k));
            });

            try {
                JSONObject jsonObject = new JSONObject(map);
                JSONObject jsonChange = jsonObject.getJSONObject(TF_CHANGE);
                JSONObject jsonAfter = jsonChange.getJSONObject(TF_AFTER);
                JSONObject jsonTags = jsonAfter.getJSONObject(TF_TAGS);
                System.out.println(">>>>>>>>>>>>>>>>>>>> " + jsonTags.toString());

                mandatoryTags.forEach((k, v) -> {
                    try {
                        assertEquals(v, jsonTags.getString(k));
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                });
            } catch (JSONException e) {
                e.printStackTrace();
            }
        });
    }
}
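Assuming a standard Maven layout for the module (my assumption, not something stated in the answer), the test above then runs through the usual Maven commands:
    mvn test                              # run the whole suite, including the plan assertions above
    mvn -Dtest=S3PreProvisionTest test    # or just this one class, via Surefire's -Dtest filter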
One approach is to output the plan to a file using -out=tempfile, then run a script to validate whatever you're trying to check, and if everything passes you can feed that file into the apply command.
Look at -out here:
https://www.terraform.io/docs/commands/plan.html
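A shell sketch of that workflow; it assumes a Terraform version new enough to support terraform show -json, and validate_plan.py is a hypothetical stand-in for whatever checks you want to run:
    set -e                                       # stop at the first failing step
    terraform plan -out=tempfile                 # record the planned changes
    terraform show -json tempfile > plan.json    # machine-readable copy of the plan
    python validate_plan.py plan.json            # your checks; exit non-zero to block the apply
    terraform apply tempfile                     # only reached if validation passed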
You can use github.com/palantir/tfjson to parse a .plan file to JSON.
There is an issue at the moment that gives an "unknown plan file version: 2" error. This is because the vendored version of terraform is too old.
The fix is:
go get github.com/palantir/tfjson
cd $GOPATH/src/github.com/palantir/tfjson
rm -rf vendor
go get -v ./...
There is then an error in ../../hashicorp/terraform/config/testing.go. To fix it, just change the line
t.Helper()
to
//t.Helper()
Run go get again and then go install
go get -v ./...
go install ./...
You should then be able to do the following, which will produce JSON output.
terraform plan --out=terraform.plan
tfjson terraform.plan
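From there, the JSON can be piped into whatever tooling you prefer, for example:
    tfjson terraform.plan | jq .   # pretty-print, or pipe into your own validation script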

Go: how to run tests for multiple packages?

I have multiple packages under a subdirectory under src/;
running the tests for each package with go test works fine.
When I try to run all the tests with go test ./..., the tests run but fail.
The tests run against local database servers, and each test file has global variables with DB pointers.
I tried running the tests with -parallel 1 to prevent contention in the DB, but the tests still fail.
What can be the issue here?
EDIT: some tests fail on missing DB entries. I completely clear the DB before and after each test; the only reason I can think of for this is some contention between tests.
EDIT 2:
each one of my test files has 2 global variables (using mgo):
var session *mgo.Session
var db *mgo.Database
also it has the following setup and teardown functions:
func setUp() {
    s, err := cfg.GetDBSession()
    if err != nil {
        panic(err)
    }
    session = s
    db = cfg.GetDB(session)
    db.DropDatabase()
}

func tearDown() {
    db.DropDatabase()
    session.Close()
}
each test starts with setUp() and defers tearDown()
also cfg is:
package cfg
import (
"labix.org/v2/mgo"
)
func GetDBSession() (*mgo.Session, error) {
session, err := mgo.Dial("localhost")
return session, err
}
func GetDB(session *mgo.Session) *mgo.Database {
return session.DB("test_db")
}
EDIT 3:
I changed cfg to use a random database, and the tests passed.
It seems that the tests from multiple packages run somewhat in parallel.
Is it possible to force go test to run everything sequentially across packages?
Update: As pointed out by @Gal Ben-Haim, adding the (undocumented) go test -p 1 flag builds and tests all packages in serial. As put by the testflag usage message in the Go source code:
-p=n: build and test up to n packages in parallel
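In other words, a sequential run across packages is simply:
    go test -p 1 ./...                # build and test one package at a time
    go test -p 1 -parallel 1 ./...    # additionally cap tests that call t.Parallel at one at a time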
Old answer:
When running go test ./..., the tests of the different packages are in fact run in parallel, even if you set parallel=1 (only tests within a specific package are guaranteed to be run one at a time). If it is important that the packages be tested in sequence, like when there is database setup/teardown involved, it seems like the only way right now is to use the shell to emulate the behavior of go test ./..., and forcing the packages to be tested one by one.
Something like this, for example, works in Bash:
find . -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test
The command first lists all the subdirectories containing *.go files. Then it uses sort -u to list each subdirectory only once (removing duplicates). Finally all the subdirectories containing go files get fed to go test via xargs. The -P1 indicates that at most one command is to be run at a time.
Unfortunately, this is a lot uglier than just running go test ./..., but it might be acceptable if it is put into a shell script or aliased into a function that's more memorable:
function gotest(){ find $1 -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test; }
Now all tests can be run in the current directory by calling:
gotest .
apparently running go test -p 1 runs everything sequentially (including the build); I haven't seen this argument in go help test or go help testflag
I am assuming that, because the packages pass individually, you are also dropping the DB before each test in this situation.
Therefore it sounds like the state of the DB is expected to be empty for each package's tests.
So between each set of package tests the DB must be emptied. There are two ways around this; not knowing your entire situation, I will briefly explain both options:
Option 1. Test Setup
Add an init() function to the start of each package's _test file, and put the processing that removes the DB in it. This will be run before the init() method of the actual package:
func init() {
    fmt.Println("INIT TEST")
    // My test state initialization
    // Remove database contents
}
Assuming that the package also had a similar print line, you would see this in the output (note that stdout output is only displayed when a test fails or you supply the -v option):
INIT TEST
INIT PACKAGE
Option 2. Mock the database
Create a mock for the database (unless that is specifically what you are testing). The mock db can always act like the DB is blank for the starting state of each test.
Please try out the following GitHub repository:
https://github.com/appleboy/golang-testing
Copy coverage.sh to /usr/local/bin/coverage and make it executable:
$ curl -fsSL https://raw.githubusercontent.com/appleboy/golang-testing/master/coverage.sh -o /usr/local/bin/coverage
$ chmod +x /usr/local/bin/coverage

Codeception - Acceptance tests work but Functional test don't

I am running the latest version of Codeception on a WAMP platform. My acceptance test is very basic, however it works fine (see below):
$I = new WebGuy($scenario);
$I->wantTo('Log in to the website');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin@admin.com');
$I->fillField('password','password');
$I->click('Login');
In a nutshell, it opens the page '/auth/login', fills out two form fields and clicks the login button. This works without any problems.
Here is my identical functional test:
$I = new TestGuy($scenario);
$I->wantTo('perform actions and see result');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin@admin.com');
$I->fillField('password','password');
$I->click('Login');
When I run this from the command line I get the following error (not the full error but enough to understand the problem):
1) Couldn't <-[35;1mperform actions and see result<-
[0m in <-[37;1LoginCept.php<-[0m <-41;37mRuntimeException:
Call to undefined method TestGuy::amOnPage<-[0m.......
My Acceptance suite has the 'PhpBrowser' & 'WebHelper' modules enabled; the Functional suite has 'Filesystem' & 'TestHelper' enabled (within the acceptance.suite.yml & functional.suite.yml files).
Obviously the amOnPage() function is the problem - however I am led to believe amOnPage() should work in acceptance and functional tests? Or am I wrong? Also, can someone explain what the numbers that appear mean, e.g. '<-[35;1m'?
UPDATE: I tried adding the 'WebHelper' module to the functional.suite.yml but I do not see amOnPage() being auto-generated in the TestGuy.php file - any ideas?
My config files are below:
WebGuy
class_name: WebGuy
modules:
  enabled:
    - PhpBrowser
    - WebHelper
  config:
    PhpBrowser:
      url: 'http://v3.localhost/'
TestGuy
class_name: TestGuy
modules:
  enabled: [Filesystem, TestHelper, WebHelper]
Well, this is so because TestGuy doesn't have those methods. All of those methods are in the PhpBrowser and Selenium2 modules, or others that inherit from Codeception's Mink implementation. So you need to add PhpBrowser to your functional suite in the modules section, and then run the codecept build command.
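In practice, once PhpBrowser is enabled in functional.suite.yml, the rebuild and re-run steps are just (use codecept.phar or vendor/bin/codecept depending on how you installed Codeception):
    codecept build            # regenerates TestGuy with the PhpBrowser actions such as amOnPage()
    codecept run functional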
Also note that it is better to use the Selenium2 module for acceptance tests and PhpBrowser for functional tests. The main idea is that acceptance (Selenium2) tests should cover those parts of your application that cannot be covered by functional (PhpBrowser) tests, for example some JS interactions.
About '<-[35;1m': run the tests as codecept run --no-colors to remove these ANSI color codes from the console output.

How do you run OpenERP yaml unit tests

I'm trying to run unit tests on my OpenERP module, but no matter what I write it doesn't show whether the test passes or fails! Does anyone know how to output the results of a test? (Using Windows, OpenERP version 6.1)
My YAML test is:
-
  I test the tests
-
  !python {model: mymodelname}: |
    assert False, "Testing False!"
    assert True, "Testing True!"
The output when I reload the module with
openerp-server.exe --update mymodule --log-level=test -dtestdb
shows that the test ran but has no errors?!
... TEST testdb openerp.tools.yaml_import: I test the tests
What am I doing wrong?
Edit: ---------------------------------------------------------------------
Ok so after much fiddling with the !python, I tried out another test:
-
  I test that the state
-
  !assert {model: mymodel, id: mymodel_id}:
    - state == 'badstate'
Which gave the expected failure:
WARNING demo_61 openerp.tools.yaml_import: Assertion "NONAME" FAILED
test: state == 'badstate'
values: ! active == badstate
So I'm guessing there is something wrong with my syntax, which may work as expected in version 7.
Thanks for everyone's answers and help!
This is what I've tried. It seems to work for me:
!python {model: sale.order}: |
  assert True, "Testing True!"
  assert False, "Testing False!"
(Maybe you forgot the "|" character)
And then:
bin/start_openerp --init=your_module_to_test -d your_testing_database --test-file=/absolute/path/to/your/testing_file.yml
You might want to create your testing database beforehand:
createdb mytestdb --encoding=unicode
Hope it helps you.
UPDATE: Here are my logs ( I called my test file sale_order_line_test.yml)
ERROR mytestdb openerp.tools.yaml_import: AssertionError in Python code : Testing False!
mytestdb openerp.modules.loading: At least one test failed when loading the modules.
loading test file /path/to/module/test/sale_order_line_test.yml
AssertionError in Python code : Testing False!
Looking at the docs (e.g. here and here), I can't see anything obviously wrong with your code.
However, I'm not familiar with --log-level=test. Maybe try running it with the -v, --debug or --log-level=debug flags instead of --log-level=test? You may also need to try the uppercase variants for the --log-level argument, i.e. --log-level=DEBUG.
test certainly isn't one of the standard Python logging module's logging levels, and while I can't exclude the possibility of them adding a custom log level, I don't think that's the case.
It might also be worthwhile trying to remove the line obj = self.browse(cr, uid, ref("HP001")), just in case..
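Concretely, that would mean re-running your command with a different log level, e.g.:
    openerp-server.exe --update mymodule --log-level=debug -dtestdb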
Try the following options on your terminal when you start your server.
./openerp-server --addons-path=<..Path> ... --test-enable
Enables YAML and unit tests.
./openerp-server --addons-path=<..Path> ... --test-commit
Commits database changes performed by YAML or XML tests.
Try this in your terminal; it should work:
./openerp-server --addons-path=<..Path> --log-level=test --test-enable
Hope this helps.