Just started using ScalaTest and I quite like it.
By just reading the docs I have thus far been unable to figure out whether there is any substantial difference between the can, should and must clauses for a FlatSpec.
In particular, I'm wondering whether a must failure is treated any differently from a should one, or whether it's just "syntactic sugar" that makes the tests better self-documenting.
should and must are semantically identical: a must failure is not treated any differently from a should one. It's not really about better documentation either; it basically comes down to personal stylistic preference (I prefer must, for example).
can is a little different: you can't (nomen omen) use it as a matcher; it's only available in a test descriptor. Quoting the FlatSpec docs:
Note: you can use must or can as well as should in a FlatSpec. For
example, instead of it should "pop..., you could write it must "pop...
or it can "pop....
(the same applies for WordSpec and the two corresponding fixture classes)
Note that for a short time (in ScalaTest 2.0.x, I think) the use of must was deprecated; however, in 2.1.0 the decision was reverted:
Resurrected MustMatchers in package org.scalatest. Changed deprecation
warning for org.scalatest.matchers.MustMatchers to suggest using
org.scalatest.MustMatchers instead of org.scalatest.Matchers, which
was the suggestion in 2.0. Apologies to must users who migrated to
should already when upgrading to 2.0.
So, I have to admit upfront that I already know the answer. I'm asking so that others who run into the same problem can find the solution to this issue caused by incorrect documentation.
My environment is VS 2015 C++, wxWidgets 3.0.2, developing on Windows 7.
In some legacy code, calls to wxMkDir were not being checked for success. According to the wxWidgets documentation, wxMkDir has a return type of bool and returns true if successful. However, it actually returns 0 when successful.
Why?
The answer is twofold. First, there are two functions with similar names, wxMkdir and wxMkDir, the former being documented and the latter undocumented. Second, the seemingly reasonable assumption that they behave the same is wrong.
The undocumented function wxMkDir maps to wxCRT_MkDir, which in turn maps to wxCRT_MkDirA, then to wxPOSIX_IDENT(mkdir), which creates a platform-dependent name for the POSIX function mkdir. According to the POSIX documentation for mkdir:
Upon successful completion, mkdir() shall return 0. Otherwise, -1 shall be returned, no directory shall be created, and errno shall be set to indicate the error.
So, conditionals like:
if (!wxMkDir(newDir)) {
    // handle the error here
}
will take the error branch on success and miss actual failures, but:
if (wxMkDir(newDir) != 0) {
    // handle the error here
}
will work as anticipated, running the error handling exactly when the directory was not created.
The documented function wxMkdir is implemented in wx source file filefn.cpp, and utilizes mkdir, but with conditionals like the above to map to the appropriate bool return value.
wxMkdir() and wxMkDir() are the unfortunate and ugly exceptions to the general rule that wxWidgets provides a wxFoo() wrapper for every standard function foo() (standard meaning either ANSI C or POSIX as, in practice, the latter is about as standard as, and more widespread than, C99) that exists in both narrow (char*) and wide (wchar_t*) versions.
So, according to this general rule, you'd expect wxMkdir() to behave as the standard mkdir() does. Unfortunately, wxMkdir() predates the Unicode-ification of wxWidgets by quite a few years, so the rule couldn't be applied to it for backwards-compatibility reasons, and another function had to be invented to be just a wrapper for mkdir().
And by now, of course, the weight of backwards compatibility is even heavier, and there really doesn't seem to be anything reasonable to do here -- other than advising people to use wxFileName::Mkdir(), which is unambiguous.
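For reference, a minimal sketch of that unambiguous alternative (the helper function name and the use of wxPATH_MKDIR_FULL are my own illustration, not from the original code):

#include <wx/filename.h>

// wxFileName::Mkdir() returns a genuine bool: true on success, false on failure.
// The wxPATH_MKDIR_FULL flag also creates any missing intermediate directories.
bool CreateDirChecked(const wxString& path)
{
    return wxFileName::Mkdir(path, wxS_DIR_DEFAULT, wxPATH_MKDIR_FULL);
}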
</sad-story>
We have several C++ functions that will be implemented in phase 2 of our project but that are part of the public interface of their respective classes and modules. Because they are part of the public interface, we think they should be present, at least in the headers, during phase 1, so that we keep them in mind while implementing the rest of the classes. However, since they are unimplemented, we want no one to call them. We would like this check to occur at compile time, to ensure correctness.
My desires are:
Compile time (could be an error or warning; warnings are better because they are more flexible - we can selectively turn them off)
Works on G++4.8.1 and doesn't kill the build under Visual Studio 2013 (we use Visual Studio/VisualAssistX only as an editor but the refactoring tools don't work without building)
Not too hard to understand what was done and why
Functions are present in class documentation (we can include some \warning not implemented in phase 1 notation for doxygen to pick up)
I am considering three options:
A belt-and-suspenders approach of marking them as deprecated (which will generate a warning) and throwing a custom exception. This is almost what I want, except that the "deprecated" compiler warning states the opposite of the real situation: a deprecated method works now but won't work later, while these methods will work later but do not work now.
Another answer tells how to forbid using a function while still having it exist. This is good, but unreadable and hard to search for. Plus, it is a compile-time error: we can't let some callers use it if we change our minds; it is all or nothing. And making every unimplemented function a template makes me wonder whether the trick will always work; for example, virtual functions can't be templates.
Just putting them in as a comment. This keeps people from calling them, but they also don't show up in auto-generated documentation (and we can't later decide to allow selective calling).
Is there a better way? And if not is there a reason to prefer the template or comment options over the deprecated option?
As an alternative:
You can just declare them without a definition, so that any use produces a link error.
You may then provide a not_yet_implemented library with empty definitions to allow premature usage of these functions.
or
Mark your methods as deleted with = delete, possibly wrapping that in a macro:
#define NOT_YET_IMPLEMENTED = delete
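A minimal sketch of both alternatives (the class and method names are made up for illustration):

#define NOT_YET_IMPLEMENTED = delete

class Engine {
public:
    // Phase 1: implemented and callable.
    void start();

    // Alternative 1: declared but never defined. Calls compile but fail at
    // link time, unless a not_yet_implemented stub library supplying an
    // empty body is linked in.
    void calibrate();

    // Alternative 2: deleted. Any call is rejected at compile time.
    void selfDestruct() NOT_YET_IMPLEMENTED;
};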
Following the Boost.Spirit compiler examples, I am migrating my Flex/Bison-based calculator-like grammar to a Spirit-based one. I want to add an include feature, #include<another_input.inp>. I have defined the include_statement grammar successfully. Should I follow the way error handling is done, i.e. on_success(include_statement, annotation_function(...)): for each successful match of include_statement, get the new input file name and call phrase_parse() again? Or should I push/pop an input stack, the way Flex/Bison does?
Thanks.
Guessing, from the little information that is here, that you meant to ask whether you can reuse the same grammar instance or whether it would be better to instantiate a new instance to parse the includes: it depends.
You can do both.
When the grammar is stateless (hint: it usually is if you can use it as const), there's no difference. Otherwise, prefer to instantiate a separate instance.
However,
the point is somewhat moot, since it appears you have already decided to parse the includes after parsing the main document (if I read your comment correctly)
there's always the danger of global state: even if the grammar object is const, you could potentially modify external state (e.g. using phx::ref from a semantic action), so this would be an issue regardless of whether you used separate grammar instances
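To illustrate the reuse case, here is a minimal sketch (the trivial grammar and the parse_file helper are stand-ins, not your actual code): a stateless const grammar instance can simply be handed to phrase_parse() again for each included file, which reproduces the Flex/Bison input stack on the call stack.

#include <boost/spirit/include/qi.hpp>
#include <fstream>
#include <iterator>
#include <string>

namespace qi = boost::spirit::qi;

template <typename It>
struct calc_grammar : qi::grammar<It, qi::space_type> {
    calc_grammar() : calc_grammar::base_type(start) {
        start = *qi::double_; // placeholder; your real rules go here
    }
    qi::rule<It, qi::space_type> start;
};

bool parse_file(const std::string& path) {
    std::ifstream in(path.c_str());
    std::string src((std::istreambuf_iterator<char>(in)),
                    std::istreambuf_iterator<char>());

    // Stateless, so one const instance can be reused for every file.
    static const calc_grammar<std::string::const_iterator> g;
    std::string::const_iterator first = src.begin(), last = src.end();
    return qi::phrase_parse(first, last, g, qi::space) && first == last;
}

// A semantic action attached to include_statement would then call
// parse_file(included_name) recursively for each include it encounters.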
The task
I am trying to work out how best to add C++0x's override identifier to all existing methods that are already overrides in a large body of C++ code, without doing it manually.
(We have many, many hundreds of thousands of lines of code, and doing it manually would be a complete non-starter.)
Current idea
Our coding standards say that we should add the virtual keyword against all implicitly virtual methods in derived classes, even though strictly unnecessary (to aid comprehension).
So if I were to script the addition myself, I'd write a script that read all our headers, found all functions beginning with virtual, and inserted override before the following semi-colon. Then I'd compile everything on a compiler that supports override and fix all the resulting errors in base classes.
But I'd really much rather not use this home-grown way, as:
it's obviously going to be tedious and error-prone.
not everyone has remembered, every time, to add the virtual keyword, so this method would miss some existing overrides.
Is there an existing tool?
So, is there already a tool that parses C++ code, detects existing methods that are overrides, and appends override to their declarations?
(I am aware of static analysis tools such as PC-lint that warn about functions that look like they should be overrides. What I'm after is something that would actually munge our code, so that future errors in overrides will be detected at compiler-time, rather than later on in static analysis)
(In case anyone is tempted to point out that C++03 doesn't support 'override'... In practice, I'd be adding a macro, rather than the actual "override" identifier, to use our code on older compilers that don't support this feature. So after the identifier was added, I'd run a separate script to replace it with whatever macro we're going to use...)
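(For what it's worth, a common shape for such a compatibility macro is the following sketch; the name OVERRIDE and the version check are illustrative, not the actual macro we'll use:

// Expands to 'override' only where the compiler supports it; empty otherwise.
#if __cplusplus >= 201103L
#  define OVERRIDE override
#else
#  define OVERRIDE
#endif

struct Base {
    virtual void f();
};

struct Derived : Base {
    virtual void f() OVERRIDE; // checked on C++11 compilers, harmless elsewhere
};

)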
Thanks in advance...
There is a tool under development by the LLVM project called "cpp11-migrate" which currently has the following features:
convert loops to range-based for loops
convert null pointer constants (like NULL or 0) to C++11 nullptr
replace the type specifier in variable declarations with the auto type specifier
add the override specifier to applicable member functions
This tool is documented here and should be released as part of clang 3.3.
However, you can download the source and build it yourself today.
Edit
Some more info:
Status of the C++11 Migrator - a blog post, dated 2013-04-15
cpp11-migrate User’s Manual
Edit 2: 2013-09-07
"cpp11-migrate" has been renamed to "clang-modernize". For windows users, it is now included in the new LLVM Snapshot Builds.
Edit 3: 2020-10-07
"clang-modernize" has bee renamed to "Clang-Tidy".
Our DMS Software Reengineering Toolkit with its C++11-capable C++ Front End can do this.
DMS is a general-purpose program transformation system for arbitrary programming languages; the C++ front end allows it to process C++. DMS parses, builds accurate ASTs and symbol tables (this is hard to do for C++), provides support for querying properties of AST nodes and trees, and allows procedural and source-to-source transformations on the tree. After all changes are made, the modified tree can be regenerated with comments retained.
Your problem requires that you find derived virtual methods and change them. A DMS source-to-source transformation rule to do that would look something like:
source domain Cpp. -- tells DMS the following rules are for C++
rule insert_virtual_keyword (n:identifier, a:arguments, s:statements):
    method_declaration -> method_declaration =
    " void \n(\a) { \s } " -> " virtual void \n(\a) { \s } "
    if is_implicitly_virtual(n).
Such rules match against the syntax trees, so they cannot accidentally match a comment, a string, or whatever. The funny quotes are not C++ string quotes; they are meta-quotes that let the rule language know that what is inside them has to be treated as target-language ("Cpp") syntax. The backslashes are escapes from the target-language text, allowing matches to arbitrary structures: e.g., \a indicates a match for an "a", which is declared to be of the syntactic category "arguments".
You'd need more rules to handle cases where the function returns a non-void result, etc. but you shouldn't need a lot of them.
The fun part is implementing the predicate (returning TRUE or FALSE) controlling application of the transformation: is_implicitly_virtual. This predicate takes (an abstract syntax tree for) the method name n.
This predicate would consult the full C++ symbol table to determine what n really is. We already know it is a method from just its syntactic setting, but we want to know in what class context.
The symbol table provides the linkage between the method and class, and the symbol table information for the class tells us what the class inherits from, and for those classes, which methods they contain and how they are declared, eventually leading to the discovery (or not) that the parent class method is virtual. The code to do this has to be implemented as procedural code going against the C++ symbol table API. However, all the hard work is done; the symbol table is correct and contains references to all the other data needed. (If you don't have this information, you can't possibly decide algorithmically, and any code changes will likely be erroneous).
DMS has been used to carry out massive changes on C++ code in the past using program transformations. (Check the Papers page at the web site for C++ rearchitecting topics.)
(I'm not a C++ expert, merely the DMS architect, so if I have minor detail wrong, please forgive.)
I did something like this a few months ago with about 3 MB worth of code, and while you say that "doing it manually would be a complete non-starter," I think it is the only way. The reason is that you should be applying the override keyword to the prototypes that are intended to override base class methods, whereas any tool that adds it will put it on the prototypes that actually do override base class methods. The compiler already knows which methods those are, so adding the keyword there doesn't change anything. (Please note that I am not terribly familiar with the new standard, and I am assuming the override keyword is optional. Visual Studio has supported override since at least VS2005.)
I used a search for "virtual" in the header files to find most of them and I still occasionally find another prototype that is missing the override keyword.
I found two bugs by going through that.
Eclipse CDT has a working C++ parser and semantic utilities. The latest version IIRC also has markers for overriding methods.
It wouldn't require much code to write a plug-in that builds on that and rewrites the code to contain the override tags where appropriate.
One option is to enable the suggest-override compiler warning (-Wsuggest-override in GCC) and then write a script that inserts the override keyword at the locations pointed to by the emitted warnings.
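A toy example of what that warning catches (GCC 5 or later has -Wsuggest-override; the class names here are made up):

// Compile with: g++ -Wsuggest-override -c widget.cpp
struct Widget {
    virtual void paint();
};

struct Button : Widget {
    void paint(); // warning: can be marked 'override' [-Wsuggest-override]
};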
One of the problems with C++ is the horrible error messages we get from code that makes intensive use of templates and template metaprogramming. Concepts were designed to solve this problem, but unfortunately they will not be in the next standard.
I'm wondering: is this problem common to all languages that support generic programming, or is something wrong with C++ templates?
Unfortunately I don't know any other language that supports generic programming (Java and C# generics are too simplified and not as powerful as C++ templates).
So I'm asking you guys: do D, Ada, or Eiffel templates (generics) produce such ugly error messages too? Is it possible to have a language with a powerful generic programming paradigm but without ugly error messages? And if so, how do these languages solve the problem?
Edit, for the downvoters: I really love C++ and templates. I'm not saying templates are bad; actually, I'm a big fan of generic programming and template metaprogramming. I'm just asking why I get such ugly error messages from compilers.
In general, I found Ada compiler error messages for generics not significantly more difficult to read than any other Ada compiler error messages.
C++ template error messages, on the other hand, are notorious for being error novels. The main difference, I think, is the way C++ does template instantiation. C++ templates are much more flexible than Ada generics: they are so flexible that they are almost a macro preprocessor. The clever folks at Boost have used this to implement things like lambdas and even whole other languages.
Because of that flexibility, the entire template hierarchy basically has to be compiled anew every time a particular permutation of template parameters is first encountered. Thus, issues that boil down to incompatibilities several layers deep in an API end up being presented to the poor API client to decipher.
In Ada, generics are actually strongly typed and provide full information hiding to the client, just like normal packages and subroutines do. So if you do get an error message, it typically references only the one generic you are trying to instantiate, not the entire hierarchy used to implement it.
So yes, C++ template error messages are way worse than Ada's.
Now debugging is a different story entirely...
The problem, at heart, is that error recovery is difficult, whatever the context.
And when you factor in the horrid grammars of C and C++, you can only wonder that the error messages are not worse! I am afraid that the C grammar was designed by people who didn't have a clue about the essential properties of a grammar, one of them being that the less reliance on context the better, and another being that you should strive to make the grammar as unambiguous as possible.
Let us illustrate a common error: forgetting a semi-colon.
struct CType {
    int a;
    char b;
}
foo
bar() { /**/ }
Okay, so this is wrong; where should the missing semi-colon go? Well, unfortunately, it's ambiguous: it can go either before or after foo, because:
C considers it normal to declare a variable right after a struct definition, in the same statement
C considers it normal not to specify a return type for a function (in which case it defaults to int)
If we reason about it, we can see that:
if foo names a type, then it belongs to the function declaration
if not, it probably denotes a variable... unless, of course, we made a typo and it was meant to be written fool, which happens to be a type :/
As you can see, error recovery is downright difficult, because we need to infer what the writer meant, and the grammar is far from helpful. It is not impossible, though, and most errors can indeed be diagnosed more or less correctly and even recovered from... it just takes considerable effort.
It seems that the people working on gcc are more interested in producing fast code (and I mean fast; look at the latest benchmarks for gcc 4.6) and adding interesting features (gcc already implements most, if not all, of C++0x) than in producing error messages that are easy to read. Can you blame them? I can't.
Fortunately, there are people who think that accurate error reporting and good error recovery are very worthy goals, and some of them have been working on Clang for quite a while, and they are continuing to do so.
Some nice features, off the top of my head:
Terse but complete error messages, which include the source ranges to expose exactly where the error emanated from
Fix-It notes when it's obvious what was meant
In which case the compiler parses the rest of the file as if the fix had been there already, instead of spewing lines upon lines of gibberish
(recent) avoiding the include stack in notes, to cut down on the cruft
(recent) trying to expose only the template parameter types that the developer actually wrote, and preserving typedefs (thus talking about std::vector<Name> instead of std::vector<std::basic_string<char, std::allocator<char> >, std::allocator<std::basic_string<char, std::allocator<char> > > >, which makes all the difference); see the toy example after this list
(recent) recovering correctly when the template keyword is missing in a call to a template method from within another template method
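As a toy illustration of the typedef-preservation point (my own snippet, not one from Clang's documentation), even a one-line mistake is enough to drag fully expanded container types into a traditional diagnostic:

#include <map>
#include <string>

int main() {
    std::map<std::string, int> ages;
    // Wrong key type for the iterator: a traditional compiler spells out
    // std::map<std::basic_string<char, ...>, int, ...> in full here, while
    // Clang can keep talking about std::map<std::string, int>.
    std::map<int, int>::iterator it = ages.begin(); // error: no conversion
    return 0;
}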
But each of those has required several hours to days of work.
They certainly didn't come for free.
Now, Concepts should (normally) have made our lives easier, but they were mostly untested, so it was deemed preferable to remove them from the draft. I must say I am glad for this. Given C++'s relative inertia, it's better not to include features that haven't been thoroughly vetted, and the concept maps didn't really thrill me. Neither did they thrill Bjarne or Herb, it seems, as they have said that they would be rethinking Concepts from scratch for the next standard.
The article Generic Programming outlines many of the pros and cons of generics in several languages, including Ada in particular. Although Ada lacks template specialization, all of its generic instances are "equivalent to the instance declaration…immediately followed by the instance body". As a practical matter, error messages tend to occur at compile time, and they typically represent familiar violations of type safety.
D has two features to improve the quality of template error messages: Constraints and static assert.
import std.range : isRandomAccessRange;

// Use constraints to only allow a function to operate on random access
// ranges as defined in std.range. If something that doesn't satisfy this
// is passed, the compiler will error out before even trying to instantiate
// fun().
void fun(R)(R range) if (isRandomAccessRange!R) {
    // Do stuff.
}
import std.meta : allSatisfy;

// Use static assert to check a high-level invariant. If
// the predicate is false, the error message will be
// printed and compilation will stop before a screenful
// of more confusing errors is encountered.
// This function takes any number of ranges to merge sort
// and the same number of temporary buffers to merge into.
void mergeSort(R...)(R ranges) {
    static assert(R.length % 2 == 0,
        "Must have an equal number of ranges to be sorted and temporary buffers.");
    static assert(allSatisfy!(isRandomAccessRange, R),
        "All arguments to mergeSort must be random access ranges.");
    // Implementation
}
Eiffel has the best error messages of them all, because it has the best of all generic type systems. Generics are fully integrated into the language, and the system works well partly because Eiffel is the only language that uses covariance in arguments.
Therefore it is much more than a simple compiler copy-and-paste. Unfortunately, explaining the difference in a few lines is impossible; just go and have a look at EiffelStudio.
There are some efforts to improve the error messages. Clang, for example, has put quite a lot of emphasis on generating more easily readable compiler error messages. I've only been using it for a short while, but my experience of it so far has been quite positive compared to GCC's equivalent errors.