Why is the "wrong" expression of an ifelse macro evaluated anyway? - if-statement

I wrote a small m4 script (test.m4) for testing purposes:
define(`test', `ifelse(`$#', `1', `$1', test(shift($@)))')
test(`arg1', `arg2')
and ran it with m4 test.m4 -t test -de [1]. The output was
m4trace: -1- test -> ifelse(`2', `1', `arg1', test(shift(`arg1',`arg2')))
m4trace: -2- test -> ifelse(`1', `1', `arg2', test(shift(`arg2')))
m4trace: -3- test -> ifelse(`1', `1', `', test(shift(`')))
m4trace: -4- test -> ifelse(`1', `1', `', test(shift(`')))
.
.
.
until execution was aborted because the recursion limit was exceeded. I wondered why, because 1 and 1 should compare equal, so the ifelse macro should evaluate to `'.
However, I had the idea to put the not-equal branch into quotation marks, so the macro looked like this:
define(`test', `ifelse(`$#', `1', `$1', `test(shift($@))')')
test(`arg1', `arg2')
and voilà, it worked like a charm (i.e., arg2 was printed out along with a leading newline).
The output (with the same invocation parameters):
NL
m4trace: -1- test -> ifelse(`2', `1', `arg1', `test(shift(`arg1',`arg2'))')
m4trace: -1- test -> ifelse(`1', `1', `arg2', `test(shift(`arg2'))')
arg2
(NL stands for "newline").
My conclusion: even though the two strings to compare are, in fact, equal, the preprocessor evaluates the not-equal branch nevertheless.
Does this have any specific purpose? IMO, it's just unintuitive. Or am I missing something?
[1] -t test turns on debug tracing for the macro test. -de adds the definition of an invoked macro to the debugging output.

While the two definitions are equal apart from the quotes, the quotes change when the inner expression is evaluated.
In the first case the inner test call is expanded DURING macro substitution of the parent test: m4 expands unquoted macro calls while it is still collecting ifelse's arguments, before any comparison takes place. So you experience a recursion: test inside test inside test, and so on.
In the second case the quotes keep the last argument as plain text, so the inner call is expanded AFTERWARDS, only if ifelse actually selects that branch. The discarded branch is never rescanned, so there is no recursion: test after test after test.
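A small sketch makes the early expansion visible (the macros flag and boom are invented for this illustration):
define(`flag', `no')
define(`boom', `define(`flag', `yes')')
ifelse(`1', `1', `equal', boom)
flag
This prints equal and then yes: although ifelse selects the third argument, the unquoted boom was already expanded (and its inner define executed) while the arguments were being collected. Write the last argument as `boom' instead and flag stays no, because the unused branch is discarded unexpanded.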
This behaviour is very well described in the manual.
Section "16.3 Other incompatibilities":
In cases like this one, an interdiction for a macro to hold its own name would be a useless limitation. Of course, this leaves more
rope for the GNU m4 user to hang himself!

Related

When running "go test", is there a way to get a count of how many tests were run?

I'd like to get the number of tests that were run with go test, as a kind of checksum to detect whether all the tests are running. Since Go relies on file names and method names to determine what is a test, it's easy to mistype something, which would mean the test is silently skipped.
I think that the gotestsum tool is close to what you are looking for.
It is a wrapper around go test that prints formatted test output and a summary of the test run.
Default go test:
go test ./...
? github.com/marco-m/timeit/cmd/sleepit [no test files]
ok github.com/marco-m/timeit/cmd/timeit 0.601s
Default gotestsum:
gotestsum
∅ cmd/sleepit
✓ cmd/timeit (cached)
DONE 11 tests in 0.273s <== see here
Check out the documentation and the built-in help; they are well written.
In my experience, gotestsum (and the other tools by the same organization) is good. For me, it is also very important to be able to use the standard Go test package, without other "test frameworks". gotestsum allows me to do so.
On the other hand, to really satisfy your requirement (print the number of declared tests and verify that that number was actually run), you would need something like TAP, the Test Anything Protocol, which works for any programming language:
1..4 <== see here
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet
TAP actually is very nice and simple. I remember there was a Go port, tap-go, but it is now marked as archived.
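If you only need a rough count from plain go test, one workaround is to count the per-test result lines in verbose output (top-level tests only, since subtest results are indented):
go test -v ./... | grep -c '^--- '
Every finished test prints a line starting with --- PASS:, --- FAIL: or --- SKIP:, so the count approximates how many tests actually ran.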

gtest: where to put gdb breakpoint

I have been using various xUnit test frameworks for years (starting with CppUnit in the early 2000s). In all cases it was very easy to set a breakpoint on failure: there was a function that indicated a detected failure:
b 'atf::tests::tc::fail(std::string const&)'
command
up 1
end
It seems that gtest is quite different. What is the established practice for doing the same with gtest?
If you need to break at the start of the test to observe something, first get the symbol names present in the executable, and grep for the test name of interest, e.g.:
nm -C myclass_test | grep MyTest0
if you want to break at:
TEST(MainTest, MyTest0) {
  EXPECT_EQ(1, 1);
}
Of the results of that grep, the most promising one seemed to be:
0000000000407c64 T MainTest_MyTest0_Test::TestBody()
and so:
gdb myclass_test
and:
b MainTest_MyTest0_Test::TestBody
r
and then this leaves me at the start of the desired test.
Tested with this setup at revision 2.
what is the established practice of doing the same with gtest?
Reading gtest.cc, the closest I see is --gunit_break_on_failure, which should cause the code to execute an INT3 trap on x86/Linux, and to call DebugBreak on Windows.
Update: the flag appears to have been renamed to --gtest_break_on_failure in the latest public releases.
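A typical invocation is to run the test binary under gdb with the flag set (the binary name is reused from the other answer purely for illustration):
gdb --args ./myclass_test --gtest_break_on_failure
(gdb) run
When an assertion fails, the flag makes the binary trigger a debugger trap at that point, so gdb stops there and you can inspect the stack with bt or up.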

'Assert Failed' message incomplete using CppUnit and TFS2015

Using: MSTest / CppUnit / TFS2015 / VS2013 / C++
I'm debugging a test that runs fine locally but fails on the build machine (which I don't have access to). This morning I sat down and was presented with almost all of my tests passing -- except one. The test happens to compare two rather large strings, and the (usually very helpful) Assert failed. Expected:<... message never made it to the Actual:<... part because the string was too long. It's just a simple Assert::AreEqual(expectedStr, actualStr);.
Right now my workaround is to write a file to a network path that I have access to from within the test (which is already an integration type test luckily -- but still...). Oh -- and did I mention that I have to run a build that will take 40 minutes even if I set Clean Workspace to None in my build process parameters to even get the test to run? That's a whole other question for another post =/.
Is there a way to look at the full results of a test assertion failure (without, for example, a string comparison being cut off)? A test run log file maybe?
According to your description, you want to get the full assertion failure message in C++. This case may help you:
"
A common solution for this problem is to create an assert macro. For an example see this question. The final form of their macro in that answer was the following:
#define dbgassert(EX,...) \
(void)((EX) || (realdbgassert (#EX, __FILE__, __LINE__, ## __VA_ARGS__),0))
In your case, the realdbgassert would be a function that prints any relevant information to stderr or other output console, and then calls the assert function itself. Depending on how much information you want, you could also do a stack dump, or log any other relevant information that will help you identify the issue. However, it can be as simple as passing a printf-esque format string, and relevant parameter value(s).
Note that if your compiler doesn't support variadic macros, you can create macros that take a specific number of parameters instead. This is slightly more cumbersome, but it is an option if your compiler lacks the support, e.g.:
#define dbgassert0(EX) \ ...
#define dbgassert1(EX,p0) \ ...
#define dbgassert2(EX,p0,p1) \ ...
"

OCAMLRUNPARAM does not affect stack size

I would like to increase my stack size to allow a project with many non-tail-recursive functions to run on larger data. To do so, I tried to set OCAMLRUNPARAM="l=xxx" for varying values of xxx (in the range 0 through 10G), but it did not have any effect. Is setting OCAMLRUNPARAM even the right approach?
In case it is relevant: The project I am interested in is built using OCamlMakefile, target native-code.
Here is a minimal example where simply a large list is created without tail recursion. To quickly check whether the setting of OCAMLRUNPARAM has an effect, I compiled the program stacktest.ml:
let rec create l =
  match l with
  | 0 -> []
  | _ -> "00" :: (create (l - 1))

let l = create (int_of_string (Sys.argv.(1)))
let _ = print_endline ("List of size " ^ string_of_int (List.length l) ^ " created.")
using the command
ocamlbuild stacktest.native
and found out roughly at which length of the list a stack overflow occurs by (more or less) binary search with the following bash script foo.sh:
#!/bin/bash
export OCAMLRUNPARAM="l=$1"
increment=1000000
length=1
while [[ $increment > 0 ]] ; do
  while [[ $(./stacktest.native $length) ]]; do
    length=$(($length+$increment))
  done
  length=$(($length-$increment))
  increment=$(($increment/2))
  length=$(($length+$increment))
done
length=$(($length-$increment))
echo "Largest list without overflow: $length"
echo $OCAMLRUNPARAM
The results vary between runs of this script (and the intermediate results are not even consistent within one run, but let's ignore that for now), but they are similar no matter whether I call
bash foo.sh 1
or
bash foo.sh 1G
i.e. whether the stack size is set to 1 or 2^30 words.
Changing the stack limit via OCAMLRUNPARAM works only for bytecode executables, which are run by the OCaml interpreter. A native program is handled by the operating system and executed directly on the CPU. Thus, in order to change the stack limit, you need to use the facilities provided by your operating system.
For example, on Linux there is the ulimit command, which handles many process parameters, including the stack limit. Add the following to your script:
ulimit -s $1
and you will see that the result changes.
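Note that bash's ulimit -s takes a size in kilobytes (or unlimited), so the value passed in needs to be in those units; for example (the numbers are only illustrative):
ulimit -s unlimited       # raise the soft limit as far as the hard limit allows
# ulimit -s 1048576       # or an explicit limit in kilobytes (1 GiB here)
./stacktest.native 10000000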

tools for testing vim plugins

I'm looking for some tools for testing vim scripts. Either vim scripts that
do unit/functional testing, or
classes for some other library (e.g. Python's unittest module) that make it convenient to
run vim with parameters that cause it to do some tests on its environment, and
determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
purports "To provide vim scripts with a simple unit testing framework and tools"
first and only version (v0.1) was released in 2004
documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
This one also seems pretty experimental, and may not be particularly reliable.
May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
No tagged revisions have been created (ie no releases)
So information from people who are using one of those two existing modules, and/or links to other, more clearly usable options, would be very welcome.
vader.vim is easy, and amazing. It has no external dependencies (it doesn't require Ruby or rake); it's a pure Vimscript plugin. Here's a fully specified test:
Given (description of test):
  foo bar baz

Do (move around, insert some text):
  2Wiab\<Enter>c

Expect:
  foo bar ab
  cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader
I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem, so you'll need at least some familiarity with Ruby, but if you put the time in you get the full power of RSpec for writing your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.
There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and it displays the failures in the quickfix window. Exception callstacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with RSpec (+ Vimrunner).
Since v2.0 (May 2020), the plugin can also test buffer content -- after it has been altered with mappings/snippets/.... Until then I had been using other plugins. For instance, I used to test my C++ snippets (from lh-cpp) on Travis with Vimrunner+RSpec.
Regarding the syntax, for instance the following
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
\, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new') " work around possible E36
try
" :SetBufferContent a/file/name.txt
" or
SetBufferContent << trim EOF
1
3
2
EOF
%sort
" AssertBufferMatch a/file/NAME.txt
" or
AssertBufferMatch << trim EOF
1
4
3
EOF
finally
silent bw!
endtry
which results in:
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| ## -1,3 +1,3 ##
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)
I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other xUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has a way to check for Vim-specific features - you can't change buffers and then check expectations on the result - so you will have to write your Vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, return strings instead of using setline(), that sort of thing (see the sketch below).
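A tiny sketch of that style (the function names are invented for the example):
" Hard to test: reads and writes the current buffer directly.
function! CapitalizeCurrentLine()
  call setline('.', toupper(getline('.')))
endfunction

" Easier to test: a pure function; only a mapping or command touches the buffer.
function! Capitalize(text)
  return toupper(a:text)
endfunction

" A test can now check the return value directly, e.g.:
" echo Capitalize('foo') ==# 'FOO'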
There is vim-vspec.
Your tests are written in vimscript and you can write them using a BDD-style (describe, it, expect, ...)
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]
For functional testing, there's a tool called vroom. It has some limitations and can take seconds-to-minutes to get through thorough tests for a good size project, but it has a nice literate testing / documentation format with vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.
Another few candidates:
VimBot - Similar to VimRunner in that it's written in Ruby and allows you to control a Vim instance remotely. It is built to be used with the RSpec unit testing framework.
VimDriver - Same as VimBot except done in Python instead of Ruby (it started as a direct port of VimBot), so you can use Python's unit testing framework if you're more familiar with that.