'Assert Failed' message incomplete using CppUnit and TFS2015 - c++

Using: MSTest / CppUnit / TFS2015 / VS2013 / C++
I'm debugging a test that runs fine locally and fails on the build machine (which I don't have access to). This morning I sat down and was presented with almost all of my tests passing -- except one. That test happens to compare two rather large strings, and the (usually) very helpful "Assert failed. Expected:<..." message never made it to the "Actual:<..." part because the strings were too long. It's just a simple Assert::AreEqual(expectedStr, actualStr);.
Right now my workaround is to write a file to a network path that I have access to from within the test (which is luckily already an integration-type test -- but still...). Oh -- and did I mention that, just to get the test to run, I have to kick off a build that takes 40 minutes even if I set Clean Workspace to None in my build process parameters? That's a whole other question for another post =/.
Is there a way to look at the full results of a test assertion failure (without, for example, a string comparison being cut off)? A test run log file maybe?

According to your description, you want to see the full assertion failure message in C++. This case may help you:
"
A common solution for this problem is to create an assert macro. For an example, see this question. The final form of the macro in that answer was the following:
#define dbgassert(EX,...) \
(void)((EX) || (realdbgassert (#EX, __FILE__, __LINE__, ## __VA_ARGS__),0))
In your case, the realdbgassert would be a function that prints any relevant information to stderr or other output console, and then calls the assert function itself. Depending on how much information you want, you could also do a stack dump, or log any other relevant information that will help you identify the issue. However, it can be as simple as passing a printf-esque format string, and relevant parameter value(s).
Note that if your compiler doesn't support variadic macros, you can create macros that take a specific number of parameters instead. This is slightly more cumbersome, but it's an option if your compiler lacks the support, e.g.:
#define dbgassert0(EX) \ ...
#define dbgassert1(EX,p0) \ ...
#define dbgassert2(EX,p0,p1) \ ...
"

Related

Can I capture simulator output to console in my testbench?

I have a testbench to test my VHDL device (DUT), but part of the DUT debug output is an ASSERT/REPORT message to the console, which I would like to check for correctness but I can't change the DUT. The only way I can think of is to post-process the output log file.
Is there any way of capturing the console output in the testbench, so I can check the DUT output directly?
I do this as part of the testbench. However, rather than assert, I use OSVVM alerts, logs, and print. OSVVM is available at both osvvm.org and GitHub.
Rather than Assert, I use AffirmIf for self-checking/result checking. I use AlertIf for parameter checking.
Step 1 is to get OSVVM. Once you have the code, compile it using the supplied script. In either Mentor or Aldec, run the script by doing:
vlib osvvm
vmap osvvm osvvm
do $PATH_TO_OSVVM/osvvm.do $PATH_TO_OSVVM
Use VHDL-2008 and include all of OSVVM in your program by doing:
library osvvm;
context osvvm.OsvvmContext;
Then rather than:
assert Data /= expected report "..." severity error;
Do:
AffirmIf(Data = Expected, "...") ;
Both assert and AffirmIf/AlertIf print. However, the advantage of AffirmIf/AlertIf is that they internally keep a count of the errors, and you can get a pass/fail at the end of your test by doing:
ReportAlerts;
The next advantage of OSVVM AffirmIf/AlertIf/Log/Print is that if you want the results in a file, you simply do:
TranscriptOpen("./results/Test1.txt");
If you want to both print to the screen and a file, also do:
SetTranscriptMirror(TRUE);
That ought to get you started. I will leave the rest to the user guides. Start by looking at both the AlertLog package user guide and the Transcript package user guide.

Fastest way to make console output "verbose" or not

I am making a small system and I want to be able to toggle "verbose" text output in the whole system.
I have made a file called globals.h:
namespace REBr{
    extern bool console_verbose = false;
}
If this is true I want all my classes to print a message to the console when they are constructing, destructing, copying or doing pretty much anything.
For example:
window(string title="", int width=1280, int height=720):
    Width(width), Height(height), title(title)
{
    if (console_verbose) {
        std::cout << "Generating window #" << this->instanceCounter;
        std::cout << "-";
    }
    this->window = SDL_CreateWindow(title.c_str(), 0, 0, width, height, SDL_WINDOW_OPENGL);
    if (console_verbose)
        std::cout << "-";
    if (this->window)
    {
        this->glcontext = SDL_GL_CreateContext(window);
        if (console_verbose)
            std::cout << ".";
        if (this->glcontext == NULL)
        {
            std::cout << "FATAL ERROR IN REBr::WINDOW::CONSTR_OPENGLCONTEXT: " << SDL_GetError() << std::endl;
        }
    }
    else std::cout << "FATAL ERROR IN REBr::WINDOW::CONSTR_WINDOW: " << SDL_GetError() << std::endl;
    if (console_verbose)
        std::cout << ">done!" << std::endl;
}
Now as you can see I have a lot of ifs in that constructor, and I REALLY don't want that, since it will slow down my application. I need this to be as fast as possible without removing the "loading bar" (which helps me determine at which function the program stopped working).
What is the best/fastest way to accomplish this?
Everything in my system is under the namespace REBr.
Some variants to achieve that:
Use some logger library. It is the best option, as it gives you maximum flexibility and some useful experience ;) and you don't have to devise anything yourself. For example, look at Google GLOG.
Define some macro, allowing you to turn all these logs on/off by changing only the macro. It isn't trivial to write such a macro correctly, though; see the sketch after this list.
Mark your conditional flag as constexpr. That way you can switch the flag and, depending on its value, the compiler will optimise the ifs out of the compiled program. The ifs will still be in the source code, though, so it looks somewhat bulky.
Anyway, all these options require program recompilation. Without recompilation it is impossible to achieve the maximum speed.
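As a rough sketch of the macro and constexpr variants combined (the REBR_LOG name is made up for illustration):

#include <iostream>

namespace REBr {
    // Compile-time flag: flip it and recompile.
    constexpr bool console_verbose = false;
}

// The do/while(0) wrapper makes the macro behave like a single statement.
// Because the condition is a compile-time constant, the compiler removes
// the whole branch when console_verbose is false.
#define REBR_LOG(msg) \
    do { if (REBr::console_verbose) std::cout << msg; } while (0)

// Usage inside, e.g., the window constructor:
//   REBR_LOG("Generating window #" << this->instanceCounter);
//   REBR_LOG("-");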
I often use a Logger class that supports debug levels. A call might look like:
logger->Log(debugLevel, "%s %s %d %d", timestamp, msg, value1, value2);
The Logger class supports multiple debug levels so that I can fine-tune the debug output. The level can be set at any time through the command line or with a debugger. The Log statement takes a variable-length argument list, much like printf.
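A minimal sketch of such a leveled Logger (a hypothetical class, not any particular library):

#include <cstdarg>
#include <cstdio>

// Hypothetical leveled logger: messages above the current level are skipped.
class Logger {
public:
    explicit Logger(int level) : level_(level) {}
    void SetLevel(int level) { level_ = level; } // adjustable at runtime

    void Log(int msgLevel, const char* fmt, ...)
    {
        if (msgLevel > level_) return; // fine-tune how chatty the output is
        va_list args;
        va_start(args, fmt);
        std::vfprintf(stderr, fmt, args);
        va_end(args);
        std::fputc('\n', stderr);
    }

private:
    int level_;
};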
Google's logging module is widely used in the industry and supports logging levels that you can set from the command line. For example (taken from their documentation)
VLOG(1) << "I'm printed when you run the program with --v=1 or higher";
VLOG(2) << "I'm printed when you run the program with --v=2 or higher";
You can find the code here https://github.com/google/glog and the documentation in the doc/ folder.

gcc gprof/gcov/other - how to get sequence of function calls/exits + control flow statements

BACKGROUND
We have testers for our embedded GUI product and when a tester declares "test failed", sometimes it's hard for us developers to reproduce the exact problem because we don't have the exact trace of what happened.
We do currently have a logging framework, but we devs have to manually insert those logging statements into the code, which is fine... except that when a hard-to-reproduce bug occurs and we didn't have a log statement at the 'right' location, re-building and re-running the test with the same steps can give a different result.
ISSUE
We would like a solution wherein the compiler produces extra instrumentation code that allows us to see the exact sequence of events including, at the minimum:
function enter/exit (already provided by -finstrument-functions)
control-flow statement entry, i.e. entering if/else branches and which case statement we jumped to
The log would look like this:
int main() entered
if-line 5 entered
else-line 10 entered
void EventLoop() entered
. . .
Some additional nice-to-haves are
Parameter values on function entry AND exit (for pass-by-reference types)
Function return value
QUESTION
Are there any gcc tools or even paid tools that can do this instrumentation automatically?
You can either use gdb for that, and you can automate it (I've got a tool for that in the works; you find it here), or you can try to use gcov.
The gcov implementation works like this: it loads the latest gcov data when you start the program. But you can manually dump and load the data: if you call __gcov_flush it will dump the current data and reset the current state. However, if you do that several times it will always merge the data, so you'd also need to rename the gcov data file each time.
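For the function enter/exit half (which the question notes -finstrument-functions already provides), the hooks look roughly like this; this is only a sketch, and mapping the raw addresses back to names is left to tools like addr2line:

#include <cstdio>

// GCC calls these hooks around every function compiled with
// -finstrument-functions. They need C linkage in C++, and the attribute
// keeps the hooks themselves from being instrumented (which would recurse).
extern "C" {
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void* func, void* call_site)
    {
        std::fprintf(stderr, "enter %p (called from %p)\n", func, call_site);
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void* func, void* /*call_site*/)
    {
        std::fprintf(stderr, "exit  %p\n", func);
    }
}

// Build, e.g.: g++ -finstrument-functions -g main.cpp trace_hooks.cpp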

How to properly debug a binary generated by `go test -c` using GDB?

The go test command has support for the -c flag, described as follows:
-c Compile the test binary to pkg.test but do not run it.
(Where pkg is the last element of the package's import path.)
As far as I understand, generating a binary like this is the way to run it interactively using GDB. However, since the test binary is created by combining the source and test files temporarily in some /tmp/ directory, this is what happens when I run list in gdb:
Loading Go Runtime support.
(gdb) list
42 github.com/<username>/<project>/_test/_testmain.go: No such file or directory.
This means I cannot happily inspect the Go source code in GDB like I'm used to. I know it is possible to force the temporary directory to stay by passing the -work flag to the go test command, but then it is still a huge hassle since the binary is not created in that directory and such. I was wondering if anyone found a clean solution to this problem.
Go 1.5 has been released, and there is still no officially sanctioned Go debugger. I haven't had much success using GDB for effectively debugging Go programs or test binaries. However, I have had success using Delve, a non-official debugger that is still undergoing development: https://github.com/derekparker/delve
To run your test code in the debugger, simply install delve:
go get -u github.com/derekparker/delve/cmd/dlv
... and then start the tests in the debugger from within your workspace:
dlv test
From the debugger prompt, you can single-step, set breakpoints, etc.
Give it a whirl!
Unfortunately, this appears to be a known issue that's not going to be fixed. See this discussion:
https://groups.google.com/forum/#!topic/golang-nuts/nIA09gp3eNU
I've seen two solutions to this problem.
1) Create a .gdbinit file with a set substitute-path command to redirect gdb to the actual location of the source (see the sketch below). This file could be generated by the go tool, but you'd risk overwriting someone's custom .gdbinit file, and it would tie the go tool to gdb, which seems like a bad idea.
2) Replace the source file paths in the executable (which are pointing to /tmp/...) with the location they reside on disk. This is straightforward if the real path is shorter than the /tmp/... path. Making this solution more generic would likely require additional support from the compiler/linker.
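For illustration, option 1 could look like this in a .gdbinit (set substitute-path is a standard gdb command; both paths below are hypothetical placeholders, not the actual build paths):

# hypothetical .gdbinit sketch: remap the baked-in build path to the real source tree
set substitute-path /tmp/go-build-placeholder/github.com/user/project /home/user/go/src/github.com/user/project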
The discussion spawned this issue on the Go Google Code issue tracker, where the decision ended up being:
https://code.google.com/p/go/issues/detail?id=2881
This is annoying, but it is the least of many annoying possibilities. As a rule, the go tool should not be scribbling in the source directories, which might not even be writable, and it shouldn't be leaving files elsewhere after it exits. There is next to nothing interesting in _testmain.go. People testing with gdb can break on testing.Main instead.
Russ
Status: Unfortunate
So, in short, it sucks, and while you can work around it and GDB a test executable, the development team is unlikely to make it as easy as it could be for you.
I'm still new to the golang game, but for what it's worth, basic debugging seems to work.
The list command you're trying to use works as long as you're already at a breakpoint somewhere in your code. For example:
(gdb) b aws.go:54
Breakpoint 1 at 0x61841: file /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go, line 54.
(gdb) r
Starting program: /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.test
[snip: some warnings about BinaryCache]
Breakpoint 1, github.com/stellar/deliverator/aws.imageIsNewer (latest=0xc2081fe2d0, ami=0xc2081fe3c0, ~r2=false)
at /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go:54
54 layout := "2006-01-02T15:04:05.000Z"
(gdb) list
49 func imageIsNewer(latest *ec2.Image, ami *ec2.Image) bool {
50 if latest == nil {
51 return true
52 }
53
54 layout := "2006-01-02T15:04:05.000Z"
55
56 amiCreationTime, amiErr := time.Parse(layout, *ami.CreationDate)
57 if amiErr != nil {
58 panic(amiErr)
This is just after running the following in the aws subdir of my project:
go test -c
gdb aws.test
As an additional caveat, gdb does seem very selective about where breakpoints can be placed. It seems like the target line has to be an expression, but that conclusion comes only from experimentation.
If you're willing to use tools besides GDB, check out godebug. To use it, first install with:
go get github.com/mailgun/godebug
Next, insert a breakpoint somewhere by adding the following statement to your code:
_ = "breakpoint"
Now run your tests with the godebug test command.
godebug test
It supports many of the parameters from the go test command.
-test.bench string
regular expression per path component to select benchmarks to run
-test.benchmem
print memory allocations for benchmarks
-test.benchtime duration
approximate run time for each benchmark (default 1s)
-test.blockprofile string
write a goroutine blocking profile to the named file after execution
-test.blockprofilerate int
if >= 0, calls runtime.SetBlockProfileRate() (default 1)
-test.count n
run tests and benchmarks n times (default 1)
-test.coverprofile string
write a coverage profile to the named file after execution
-test.cpu string
comma-separated list of number of CPUs to use for each test
-test.cpuprofile string
write a cpu profile to the named file during execution
-test.memprofile string
write a memory profile to the named file after execution
-test.memprofilerate int
if >=0, sets runtime.MemProfileRate
-test.outputdir string
directory in which to write profiles
-test.parallel int
maximum test parallelism (default 4)
-test.run string
regular expression to select tests and examples to run
-test.short
run smaller test suite to save time
-test.timeout duration
if positive, sets an aggregate time limit for all tests
-test.trace string
write an execution trace to the named file after execution
-test.v
verbose: print additional output

tools for testing vim plugins

I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and to determine from the output whether or not a given test passed.
I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:
vim-unit:
purports "To provide vim scripts with a simple unit testing framework and tools"
first and only version (v0.1) was released in 2004
documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".
unit-test.vim:
This one also seems pretty experimental, and may not be particularly reliable.
May have been abandoned or back-shelved: last commit was in 2009-11 (> 6 months ago)
No tagged revisions have been created (ie no releases)
So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options, would be very welcome.
vader.vim is easy, and amazing. It has no external dependencies (doesn't require ruby/rake), it's a pure vimscript plugin. Here's a fully specified test:
Given (description of test):
foo bar baz
Do (move around, insert some text):
2Wiab\<Enter>c
Expect:
foo bar ab
cbaz
If you have the test file open, you can run it like this:
:Vader %
Or you can point to the file path:
:Vader ./test.vader
I've had success using Andrew Radev's Vimrunner in conjunction with RSpec to both test Vim plugins and set them up on a continuous integration server.
In brief, Vimrunner uses Vim's client-server functionality to fire up a Vim server and then send remote commands so that you can inspect (and verify) the outcome. It's a Ruby gem, so you'll need at least some familiarity with Ruby, but if you put the time in you get the full power of RSpec for writing your tests.
For example, a file called spec/runspec.vim_spec.rb:
require "vimrunner"
require "fileutils"
describe "runspec.vim" do
before(:suite) do
VIM = Vimrunner.start_gui_vim
VIM.add_plugin(File.expand_path('../..', __FILE__), 'plugin/runspec.vim')
end
after(:all) do
VIM.kill
end
it "returns the current path if it ends in _test.rb" do
VIM.echo('runspec#SpecPath("foo_test.rb")').should == "foo_test.rb"
VIM.echo('runspec#SpecPath("bar/foo_test.rb")').should == "bar/foo_test.rb"
end
context "with a spec directory" do
before do
FileUtils.mkdir("spec")
end
after do
FileUtils.remove_entry_secure("spec")
end
it "finds a spec with the same name" do
FileUtils.touch("spec/foo_spec.rb")
VIM.echo('runspec#SpecPath("foo.rb")').should == "spec/foo_spec.rb"
end
end
end
I've written about it at length in "Testing Vim Plugins on Travis CI with RSpec and Vimrunner" if you want more detail.
There is another (pure Vimscript) UT plugin that I'm maintaining.
It is documented, it comes with several examples, and it is also used by my other plugins.
It aims at testing function results and buffer contents, and it displays failures in the quickfix window. Exception callstacks are also decoded. AFAIK, it's the only plugin so far (or at least the first) that's meant to fill the quickfix window. Since then, I've added helper scripts to produce test results with RSpec (+Vimrunner).
Since v2.0 (May 2020), the plugin can also test buffer contents -- after they have been altered with mappings/snippets/.... Until then, I had been using other plugins; for instance, I used to test my C++ snippets (from lh-cpp) on Travis with VimRunner+RSpec.
Regarding the syntax, for instance the following
Assert 1 > 2
Assert 1 > 0
Assert s:foo > s:Bar(g:var + 28) / strlen("foobar")
debug AssertTxt (s:foo > s:Bar(g:var+28)
\, s:foo." isn't bigger than s:Bar(".g:var."+28)")
AssertEquals!('a', 'a')
AssertDiffers('a', 'a')
let dict = {}
AssertIs(dict, dict)
AssertIsNot(dict, dict)
AssertMatch('abc', 'a')
AssertRelation(1, '<', 2)
AssertThrows 0 + [0]
would produce:
tests/lh/README.vim|| SUITE <[lh#UT] Demonstrate assertions in README>
tests/lh/README.vim|27 error| assertion failed: 1 > 2
tests/lh/README.vim|31 error| assertion failed: s:foo > s:Bar(g:var + 28) / strlen("foobar")
tests/lh/README.vim|33 error| assertion failed: -1 isn't bigger than s:Bar(5+28)
tests/lh/README.vim|37 error| assertion failed: 'a' is not different from 'a'
tests/lh/README.vim|40 error| assertion failed: {} is not identical to {}
Or, if we want to test buffer contents
silent! call lh#window#create_window_with('new') " work around possible E36
try
" :SetBufferContent a/file/name.txt
" or
SetBufferContent << trim EOF
1
3
2
EOF
%sort
" AssertBufferMatch a/file/NAME.txt
" or
AssertBufferMatch << trim EOF
1
4
3
EOF
finally
silent bw!
endtry
which results in:
tests/lh/README.vim|78 error| assertion failed: Observed buffer does not match Expected reference:
|| ---
|| +++
|| ## -1,3 +1,3 ##
|| 1
|| -4
|| +2
|| 3
(hitting D in the quickfix window will open the produced result alongside the expected result in diff mode in a new tab)
I've used vim-unit before. At the very least it means you don't have to write your own AssertEquals and AssertTrue functions. It also has a nice feature that lets you run the current function, if it begins with "Test", by placing the cursor within the function body and typing :call VUAutoRun().
The documentation is a bit iffy and unfinished, but if you have experience with other XUnit testing libraries it won't be unfamiliar to you.
Neither of the scripts mentioned has a way to check for vim-specific features -- you can't change buffers and then check expectations on the result -- so you will have to write your vimscript in a testable way. For example, pass strings into functions rather than pulling them out of buffers with getline() inside the function itself, return strings instead of using setline(), that sort of thing.
There is vim-vspec.
Your tests are written in vimscript, and you can write them in a BDD style (describe, it, expect, ...):
runtime! plugin/sandwich/function.vim

describe 'Adding Quotes'
  it 'should insert "" in an empty buffer'
    put! = ''
    call SmartQuotes("'")
    Expect getline(1) == "''"
    Expect col('.') == 2
  end
end
The GitHub page has links to a video and an article to get you started:
A tutorial to use vim-vspec by Vimcasts.org [the video]
Introduce unit testing to Vim plugin development with vim-vspec [the article]
For functional testing, there's a tool called vroom. It has some limitations and can take seconds to minutes to get through thorough tests for a good-sized project, but it has a nice literate testing/documentation format with vim syntax highlighting support.
It's used to test the codefmt plugin and a few similar projects. You can check out the vroom/ dir there for examples.
Another few candidates:
VimBot - Similar to VimRunner in that it's written in Ruby and allows you to control a vim instance remotely. Is built to be used with the unit testing framework RSpec.
VimDriver - Same as VimBot except done in Python instead of Ruby (started as a direct port from VimBot) so you can use Python's unit testing framework if you're more familiar with that.