Cookbook: https://github.com/tkidd77/devops-project/tree/master/chef-repo/cookbooks/hello_world
My unit test:
it 'touches the correct file' do
  expect { chef_run }.to touch_file ('C:\inetpub\wwwroot\iisstart.htm')
end
Output when running "chef exec rspec" in git bash:
tkidd@tkiddhome MINGW64 /c/git/project/chef-repo/cookbooks/hello_world (master)
$ chef exec rspec
.F
Failures:
1) hello_world::default touches the correct file
Failure/Error: expect { chef_run }.to touch_file ('C:\inetpub\wwwroot\iisstart.htm')
You must pass an argument rather than a block to use the provided matcher ( file "C:\inetpub\wwwroot\iisstart.htm"), or the matcher must implement `supports_block_expectations?`.
# ./spec/unit/recipes/default_spec.rb:38:in `block (2 levels) in <top (required)>'
Finished in 0.07199 seconds (files took 8.68 seconds to load)
2 examples, 1 failure
Failed examples:
rspec ./spec/unit/recipes/default_spec.rb:37 # hello_world::default touches the correct file
Here is the chefspec documentation on using the touch_file test: https://www.rubydoc.info/github/acrmp/chefspec/ChefSpec/API/FileMatchers which specifies using parentheses instead of curly braces around "chef_run", but when I do that, I receive an "undefined method" error:
tkidd@tkiddhome MINGW64 /c/git/project/chef-repo/cookbooks/hello_world (master)
$ chef exec rspec
.F
Failures:
1) hello_world::default touches the correct file
Failure/Error: expect (chef_run).to touch_file ('C:\inetpub\wwwroot\iisstart.htm')
NoMethodError:
undefined method `to' for #<ChefSpec::SoloRunner:0x0000000007a241d8>
# ./spec/unit/recipes/default_spec.rb:38:in `block (2 levels) in <top (required)>'
Finished in 0.04001 seconds (files took 4.92 seconds to load)
2 examples, 1 failure
Failed examples:
rspec ./spec/unit/recipes/default_spec.rb:37 # hello_world::default touches the correct file
According to this question ("How to check whether a variable is an instance of a module's subclass using rspec?"), rspec 3.0 expects a method instead of a block for the file path, but I don't understand what that would look like.
That should be expect(chef_run).to touch_file('C:\inetpub\wwwroot\iisstart.htm'). You only use expect { ... } with raise_error and things like it; most matchers use expect(...) as a normal method call. Also, you had an extra space after touch_file: method (args) is not allowed in Ruby (or at least, it is allowed but doesn't do what you think it does).
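In plain RSpec terms (a generic example, not ChefSpec-specific), the two forms look like this:
# Block form: the matcher observes what happens while the block runs,
# e.g. a raised error.
expect { 1 / 0 }.to raise_error(ZeroDivisionError)
# Value form: the matcher examines the value itself; this is what
# touch_file works with, so it gets the converged chef_run as an argument.
expect(1 + 1).to eq(2)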
Solution:
describe 'hello_world::default' do
  let :chef_run do
    ChefSpec::SoloRunner.new(platform: 'windows', version: '2016').converge(described_recipe)
  end

  it 'touches the correct file' do
    expect(chef_run).to touch_file('C:\inetpub\wwwroot\iisstart.htm')
  end
end
Summary of the problem
I've set up a repository with the checks I developed, designed to be used exclusively in another repository.
The problem I've been facing is that when I run pre-commit run -a -v the checks don't go through every file, and the set of files they run on even changes from time to time!
A little more detail
I've run the identity check and it prints every file in the repo, so the files are seen by pre-commit (per my understanding)
When I execute pre-commit run, the files in staging are correctly parsed
Why do I think that the checks don't run on every file?
I always print something to standard output for every check, and when I run it verbosely it only prints 5 to 7 lines, meaning it checked only those files.
If I edit a file (breaking a check) that's not on that short list, the check still passes
Some code
The structure of the folder containing the files to be checked
./
articles/
some_name/
file.md
file.png
and_so_on.jpeg
team/
some_other_name/
file.md
propic.jpg
Summary of the .pre-commit-hooks.yaml
- id: team-check-name
  name: blabla
  description: blabla
  entry: team-checkname
  files: 'team/.*/'
  types: [markdown]
  language: python
- id: article-check-name
  name: blabla
  description: blabla
  entry: article-checkname
  files: 'articles/.*/'
  types: [markdown]
  language: python
Essentially, the checks that are supposed to run on articles/.*/ start with article- and they all have this property: files: 'articles/.*/'.
Similarly, every check that is supposed to run on team/.*/ starts with team- and they all have files: 'team/.*/'.
Summary of the .pre-commit-config.yaml
repos:
  - repo: https://github.com/repourl
    rev: commit_hash
    hooks:
      - id: team-check-name
      - id: article-check-name
But as you can see, I don't override any settings, so the config shouldn't interfere.
The hook tools you've written are incorrect -- they only process sys.argv[1] rather than all of the positional arguments pre-commit passes.
Your code uses a lot of global variables, so it's not a straightforward refactor -- typically you'd either use argparse to collect nargs='*' filenames or loop over sys.argv[1:] (if you don't have any options); see the sketch below.
As to why this is the convention -- it's very wasteful to start a linter process over and over to lint a single file (often the executable startup cost dwarfs the actual linting / formatting work).
disclaimer: I wrote pre-commit
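For illustration, a minimal sketch of what such an entry point can look like when it handles every filename pre-commit passes (the check itself is a made-up placeholder, not the asker's actual code):
import argparse
import sys


def check_file(filename):
    # Placeholder for the real check; return True when the file passes.
    print("checking", filename)
    return True


def main():
    parser = argparse.ArgumentParser()
    # pre-commit passes all matching filenames as positional arguments
    parser.add_argument("filenames", nargs="*")
    args = parser.parse_args()

    ok = True
    for filename in args.filenames:
        ok = check_file(filename) and ok
    return 0 if ok else 1


if __name__ == "__main__":
    sys.exit(main())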
I'm working on a Tcl project whose version control is based on Git. So, in order to produce good-quality code, I put execution of the tests in a pre-commit hook.
However, even though they are executed (the trace is shown on the command line) and one of the tests fails, Git performs the commit anyway. So I launched the hook manually to check the error code, and I found that it is zero, which explains why Git does not stop:
$ .git/hooks/pre-commit
++++ FlattenResult-test PASSED
(...)
==== CheckF69F70 FAILED
==== Content of test case:
(...)
==== CheckF69F70 FAILED
$ echo $?
0
(Launching the test script with tclsh also results in $? being 0.)
So my question is about this last line: why is $? equal to 0 when one of the tcl tests fails? And how can I achieve a simple pre-commit hook that stops on failure?
I read and reread the tcltest documentation, but saw no setting or information about this exit code. And I would really like not to have to parse the test output to check whether ERROR or FAILED is present...
Edit: versions
Tcl version: 8.5
tcltest version: 2.3.4
This depends on how you run your test suite. Normally you run a file called tests/all.tcl which may look something like this:
package require Tcl 8.6
package require tcltest 2.5
namespace import tcltest::*
configure -testdir [file dirname [file normalize [info script]]] {*}$argv
runAllTests
That final runAllTests returns a boolean indicating success (0) or failure (1). You can use that to generate an exit code by changing the last line to:
exit [runAllTests]
I use this redefinition in some of my test scripts:
# Exit non-zero if any tests fail.
# tcltest's `cleanupTests` resets the numTests array, so capture it first.
proc cleanupTests {} {
    set failed [expr {$::tcltest::numTests(Failed) > 0}]
    uplevel 1 ::tcltest::cleanupTests
    if {$failed} then {exit 1}
}
After some research, I could make it work, even though several factors were against me:
I have to use an old TCL version (8.5) with tcltest version 2.3.4, in which runAllTests returns nothing;
I forgot to write cleanupTests at the end of test scripts, as the documentation is not really clear about its usage. (It is not clearer now. I just figured out it is needed if you want to get your tests run by runAllTests, which is really not obvious).
And here is my solution, mostly based on Hai's DevBits blog post:
all.tcl
package require tcltest
::tcltest::configure (...)
proc ::tcltest::cleanupTestsHook {} {
    variable numTests
    set ::exitCode [expr {$numTests(Total) == 0 || $numTests(Failed) > 0}]
}
::tcltest::runAllTests
exit $exitCode
Some thoughts about it:
I added $numTests(Total) == 0 as a failure condition: it means that no tests were found, which is clearly an erroneous condition;
This doesn't catch exceptions in the configuration of the tests, for instance a source command that points to a non-existing file, revealing some failure in the test scaffolding. Such a problem would be caught as an error in other test frameworks (ah, pytest, I miss you!)
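For reference, once all.tcl exits with a non-zero status on failure, the Git hook itself only needs to propagate that status. A minimal sketch, assuming the suite lives in tests/all.tcl:
#!/bin/sh
# .git/hooks/pre-commit -- abort the commit when the Tcl test suite fails
tclsh tests/all.tcl || exit 1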
We have a project managed in GitLab, with a CI pipeline for builds and tests (pytest, Google Test). Two or three of our Google Test cases fail, but GitLab considers the test stage successful. Is it because the success percentage is more than 90% (an arbitrary value)? Is there a way to make the stage (and thus the complete pipeline) fail if we don't get 100% success?
Here is a screenshot of the pipeline summary:
Here is the yml script of the stage:
test_unit_test:
  stage: test
  needs: ["build", "build_unit_test"]
  image: $DOCKER_IMAGE
  rules:
    - if: '$CI_PIPELINE_SOURCE != "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script: |
    ZIPNAME=`cat _VERSION_.txt`
    ./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
  artifacts:
    reports:
      junit: test_unit_test_report.xml
    expire_in: 1 week
Thank you for any help.
Regards.
GitLab CI/CD jobs don't care what the script sections are doing (so they don't look at, for example, test pass percentages). The only things they use to determine whether a job passed or failed are exit codes and the allow_failure keyword.
After each command in the before_script, script, and after_script sections is executed, the GitLab Runner checks the command's exit code. If it is non-zero, the command is considered a failure, and if the allow_failure keyword is not set to true for the job, the job fails.
So, for your job, even though the tests are failing, the script is somehow exiting with code 0, meaning the command itself finished successfully. The command in this case is:
ZIPNAME=$(cat _VERSION_.txt)
./scripts/gitlab-ci/stage-unittests.sh test_unit_test_report.xml $ZIPNAME
NOTE: I replaced your backticks '`' with the $(command) syntax explained here, which does the same thing (execute this command) but has some advantages over '`command`', including nesting and easier use in Markdown, where '`' indicates code formatting.
So, since you are calling a script (./scripts/gitlab-ci/stage-unittests.sh) to run your tests, that script itself is finishing successfully, so the job finishes successfully. Take a look at that script to see if you can tell why it finishes successfully even though the tests fail.
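The contents of stage-unittests.sh aren't shown here, but a common cause is that the script runs the test binary and then does more work, so the exit status of the last command is what GitLab sees. A sketch of a version that propagates failures, with a made-up binary path:
#!/bin/bash
set -euo pipefail   # abort the script as soon as any command fails

REPORT="$1"
ZIPNAME="$2"

# Run the Google Test binary; --gtest_output writes the JUnit report GitLab reads.
./build/unit_tests --gtest_output="xml:${REPORT}"

# Any packaging or upload steps go here; with set -e, a failing test run
# above already ends the script with a non-zero exit code.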
I just added the https://github.com/github/platform-samples/blob/master/pre-receive-hooks/require-jira-issue.sh
script to one of my GitHub remote repos and was able to successfully configure a pre-receive hook at the org level and enable it for one of my sample repos. Now when I push to that sample repo from local, it always results in the error below:
remote: jira-commit-hook.sh: failed with exit status 1
remote: grep: Invalid range end
remote: ERROR
remote: ERROR: Your push was rejected because the commit
remote: ERROR: e9b0dd4695a51beb51e6fc1a8d16f01fa7dd13b8 in master
remote: ERROR: is missing the JIRA Issue
remote: ERROR:
remote: ERROR: Please fix the commit message and push again.
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://************'
The commit regex I'm using is msg_regex='[DST\-[0-9]+\]' since DST is the project key in our Jira for one of the projects. All the commits I am pushing have the string DST-*** in their message, where *** is a number and DST-*** is an actual issue key for the Jira project here. Any idea why the remote server hook is rejecting the push? It looks like it's not validating the regex.
- is a special character in a regex when you use it inside a character class []: it stands for a range when it appears anywhere except
As the first character in the class (or right after [^])
At the end of the character class
so your regex should be
DST-[0-9]+
Character Class
My guess based on your original expression,
[DST\-[0-9]+\]
is that maybe the desired expression would be
\[DST-[0-9]+\]
or maybe just
DST-[0-9]+
I'm not sure, though. I'm positive that you don't need to escape the - in this case, since it is not in a character class; - is only a metacharacter inside a character class [].
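Either way, you can sanity-check a candidate pattern locally before updating the hook (a quick sketch using grep -E; check which grep flags the hook script actually uses):
msg_regex='DST-[0-9]+'
echo "DST-123 fix the widget" | grep -qE "$msg_regex" && echo "matches" || echo "rejected"
echo "no issue key here" | grep -qE "$msg_regex" && echo "matches" || echo "rejected"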
I'm looking for some tools for testing vim scripts. Either vim scripts that
do unit/functional testing, or
classes for some other library (eg Python's unittest module) that make it convenient to
run vim with parameters that cause it to do some tests on its environment, and
determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
purports "To provide vim scripts with a simple unit testing framework and tools"
first and only version (v0.1) was released in 2004
documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
This one also seems pretty experimental, and may not be particularly reliable.
May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
No tagged revisions have been created (ie no releases)
So information from people who are using one of those two existent modules, and/or links to other, more clearly usable, options, are very welcome.
vader.vim is easy, and amazing. It has no external dependencies (it doesn't require ruby/rake); it's a pure vimscript plugin. Here's a fully specified test:
Given (description of test):
  foo bar baz

Do (move around, insert some text):
  2Wiab\<Enter>c

Expect:
  foo bar ab
  cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader
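Vader can also be run non-interactively, which is useful for CI; something along these lines (check the Vader README for the exact recommended invocation):
vim -Nu ~/.vimrc -c 'Vader! test.vader' > /dev/null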
I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem so you'll need at least some familiarity with Ruby but if you put the time in then you get the full power of RSpec in order to write your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.
There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and displaying the failures in the quickfix window. Exception callstacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with rspec (+Vimrunner)
Since v2.0 (May 2020), the plugin can also test buffer content -- after it has been altered with mappings/snippets/.... Up until then I had been using other plugins. For instance, I used to test my C++ snippets (from lh-cpp) on Travis with Vimrunner+RSpec.
Regarding the syntax, for instance the following
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
\, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new') " work around possible E36
try
  " :SetBufferContent a/file/name.txt
  " or
  SetBufferContent << trim EOF
  1
  3
  2
  EOF

  %sort

  " AssertBufferMatch a/file/NAME.txt
  " or
  AssertBufferMatch << trim EOF
  1
  4
  3
  EOF
finally
  silent bw!
endtry
which results into
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| @@ -1,3 +1,3 @@
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)
I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other XUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has a way to check for vim-specific features -- you can't change buffers and then check expectations on the result -- so you will have to write your vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, and return strings instead of using setline(), that sort of thing; a sketch of that style follows.
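For instance, a hypothetical before/after sketch of that kind of refactoring (function names are made up):
" Hard to test: reads and writes the buffer directly.
function! CapitalizeCurrentLine()
  call setline('.', toupper(getline('.')))
endfunction

" Easier to test: a pure function, with the buffer plumbing kept in a thin wrapper.
function! Capitalize(text)
  return toupper(a:text)
endfunction

function! CapitalizeCurrentLine2()
  call setline('.', Capitalize(getline('.')))
endfunction

" A unit test can now call Capitalize('foo') and compare the return value
" without having to set up buffer contents first.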
There is vim-vspec.
Your tests are written in vimscript and you can write them using a BDD-style (describe, it, expect, ...)
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]
For functional testing, there's a tool called vroom. It has some limitations and can take seconds-to-minutes to get through thorough tests for a good size project, but it has a nice literate testing / documentation format with vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.
Another few candidates:
VimBot - Similar to VimRunner in that it's written in Ruby and allows you to control a vim instance remotely. It's built to be used with the unit testing framework RSpec.
VimDriver - Same as VimBot except done in Python instead of Ruby (started as a direct port from VimBot) so you can use Python's unit testing framework if you're more familiar with that.