A recent minor release of Visual Studio (15.7) implements the compiler switch /permissive-, which, among other effects, enables two-phase name lookup for templates (https://learn.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance). I'd like to enable this switch on our codebase of about 4M lines of code. The problem is that templates may be instantiated differently once the switch is enabled (see the first example at https://blogs.msdn.microsoft.com/vcblog/2017/09/11/two-phase-name-lookup-support-comes-to-msvc/), which leads to silent changes in run-time behavior.
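To make the failure mode concrete, here is a minimal sketch in the spirit of the example from that blog post (the names are mine): with /permissive-, the non-dependent call f(1) is bound at template definition time to f(char); with the legacy behavior, lookup happens at instantiation and finds the better-matching f(int) declared later, so the program's output silently changes.

#include <cstdio>

void f(char) { std::puts("f(char)"); }

template <typename T>
void g(T) {
    f(1);  // non-dependent call: bound at definition time under two-phase lookup
}

void f(int) { std::puts("f(int)"); }

int main() {
    g(0);  // prints "f(char)" with /permissive-, "f(int)" without
}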
Is there a way to check whether the code generated with this conformance switch enabled is identical to the old code?
I'm aware that the correct answer is "you run your unit tests, duh!". Sadly, with the amount of legacy code lying around, that is out of reach for the next couple of years, so I'm looking for an alternative.
Doing a compare on the binaries does not help, since there are differences present due to metadata changes.
New compilation errors aren't really of any help either: they only expose non-conforming syntax; changes in generated code can still be hidden.
Actually seeing the generated code is not important. A tool showing "this line will compile differently" would be sufficient. Or something similar to what "Preprocess to a File" does for the preprocessor.
The best I can think of is checking the generated symbols with the dumpbin tool on every .obj file to see whether the same ones are produced. This can expose only a subset of issues: the set of template instances in a file might be identical while their locations in the code still change.
You may analyze the compiler output at the assembly level (for example, generated with the /FA family of switches) to check for differences between the two compiler settings you mentioned.
Please look at this related SO question about Visual C++ compilers.
Also, better suited for relatively small portions of code, you may want to use the marvelous Godbolt Compiler Explorer, which does the same task but with a handy web interface, and covers various compilers (not only Microsoft Visual C++). I'm sure it will become one of the most valuable tools in your developer toolset.
I'm working with legacy C code which I need to document in UML. There's no immediate requirement to use those UML diagrams for synthesis, but there is a desire to go in that direction in the future.
Now, the code is riddled with features which can be enabled or disabled at compile time:
#if (FEATURE_X == ON)
    deal_with_x();
#endif
Since there's no way to distinguish between compile-time and run-time conditions in UML (is there?), I end up using the same decision block for both, which means my diagrams really represent the following code:
if (FEATURE_X == ON) {
    deal_with_x();
}
While I expect the compiler to eliminate the call when feature X is disabled, this is not quite the same code for at least two reasons:
deal_with_x() has to be defined even if feature X is disabled
static code analysis will complain about dead code
What is the right way to deal with this situation? Is there a UML feature I'm not aware of that could help? Or should I create separate activity diagrams for different configurations (quite a lot of work)? Or should I rely on the compiler to eliminate unnecessary calls and avoid preprocessor directives altogether?
While my question is about C code and preprocessor directives, the same problem can arise with C++ templates, especially if static if gets introduced into the language.
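To illustrate the last option above, here is a minimal sketch (assuming the flag always expands to 0 or 1 rather than ON/OFF) of relying on the optimizer instead of the preprocessor:

#define FEATURE_X 1  /* hypothetical flag: always defined, 0 or 1 */

void deal_with_x(void);  /* must now compile in every configuration */

void process(void) {
    /* A plain if on a compile-time constant: any optimizing compiler
       folds this to a direct call or to nothing, but static analysis
       may still flag the dead branch when FEATURE_X is 0. */
    if (FEATURE_X) {
        deal_with_x();
    }
}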
Simply go for tagged values to describe that.
This has nothing to do with any activity diagram. It is a purely static deployment concern, so you may create components which use different tagged values for different compilations.
Compile-time conditions tell you how to generate your code, so they belong in the deployment part of your model. Activity diagrams refer to the behavior of your system. In case you have a target component which is compiled one way or the other and you show its different usages in activity diagrams, you can signal this in various ways. One is a naming convention which is described somewhere else in the model or in accompanying documentation. Another is to create requirements which state that a common source is to be used, and to link those requirements to the activities and, later, the components.
As a (personal) side note: use (especially over-use) of compile-time options makes your code hard to read, up to unreadable. Indeed, each use of a compile-time option can turn the same source into a completely different thing. So rather than starting from "I want to have the same source for this function and therefore tend to use compiler flags", go the other way. Concentrate on function, and only when it comes to deployment think about "optimization" via compiler flags. In short, leave thoughts about it completely out of activity diagrams.
Is there any way to take advantage of Microsoft's SAL, e.g. through a C parser that preserves this information? Or is it made by Microsoft, for Microsoft's internal use only?
It would be immensely useful for a lot of tasks, such as creating C library bindings for other languages.
Not sure what you mean by "take advantage of", but currently the VS 2011 Beta uses the SAL annotations when performing code analysis via the /analyze option. The annotations are just pure macros from sal.h, which Microsoft encourages the use of (at least in a VS environment).
If you just want to preserve the info through a preprocessing step, you could make the macros expand to themselves, or alter one of the existing open-source preprocessors to leave those symbols unexpanded (VS also offers a few expansion options for the SAL macros). But actually using the information provided by the annotations will require something along the lines of a custom LLVM pre-pass or a GCC plugin (if compiling the code; you can use them for binding generation at the same time).
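For reference, a minimal sketch of what the annotations look like in a header (the function is hypothetical; the annotations themselves are standard SAL2 macros from sal.h):

#include <sal.h>
#include <stddef.h>

/* Under /analyze, call sites are checked against these contracts:
   dst must be writable for size bytes, src readable for size bytes. */
void copy_buffer(
    _Out_writes_bytes_(size) void* dst,
    _In_reads_bytes_(size) const void* src,
    size_t size);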
SAL annotations can find tons of bugs with static analysis.
http://msdn.microsoft.com/en-us/library/windows/hardware/hh454825(v=vs.85).aspx
I have never had to set it up from scratch, but my development environment uses PREfast to do static analysis every time I build something. Finding bugs at compile time is better than finding them at run time.
In my personal experience, source annotations are a useful way to quickly see how parameters are supposed to be passed, or how they are assumed to be passed. As far as taking advantage of that goes, I agree that a pre-pass might be the only way to get real advantage, and might I suggest writing your own if you have specific needs or expectations for its output.
Hope I helped.
I was looking at the libstdc++ documentation at http://gcc.gnu.org/onlinedocs/libstdc++/latest-doxygen/a01618.html and found it arranged in "modules" such as Algorithms, Strings, etc.
I have multiple questions:
Since this is auto-generated documentation from Doxygen, which part of the libstdc++ source code or configuration makes Doxygen "aware" of the different modules and their contents/dependencies?
What are modules, and how do they differ from namespaces?
I did a Google search on C++ modules and found that modules are defined by "export modulename", but I could not find any export definition in the libstdc++ source code. Does the word "Modules" in the above documentation refer to some construct different from export?
Do developers typically divide their source code into modules for large projects?
Where can I learn about modules, so that I can organize my source code into modules and namespaces?
It looks to me like you're running into two entirely separate things that happen to use the same name.
The "modules" you're seeing in the documentation seem to be just a post-hoc classification of the algorithms and such. It may be open to argument that they should correspond closely to namespaces, but in the case of the standard library, essentially everything is in one giant namespace. If it were being designed using namespaces from the beginning it might not be that way, but that's not how things happened. In any case, the classification applies to the documentation, not to the code itself. Somebody else producing similar documentation might decide to divide it up into different modules, still without affecting the code.
During the C++11 standardization effort, there was a proposal to add something else (that also went by the name modules) to the C++ language itself. This proposal was removed, primarily in the interest of finishing the standard sooner. The latter differed from namespaces quite a bit, and is the one that used "export" for a module name. It's dead and gone (at least for now) though, so I won't go into a lot more detail about it here. If you're curious, you can read Daveed Vandevoorde's paper about it.
Update: The committee added modules to C++20. What was added is at least somewhat different from anything anybody would have known about in 2012 when this question was asked, but it is at least pretty much the same general idea as the modules that were proposed for C++11. A bit much to add onto a 10-year-old answer, but here's a link to at least some information about them:
https://en.cppreference.com/w/cpp/language/modules
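For a taste, here is about the smallest possible C++20 module (file naming conventions vary by compiler; the module itself is hypothetical):

// math.cppm: module interface unit
export module math;           // declares the module

export int add(int a, int b)  // exported: visible to importers
{
    return a + b;
}

// main.cpp: a consumer
import math;

int main() { return add(2, 3); }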
The modules you see in the documentation are created by Doxygen and are not a part of C++. Certain classes in the libstdc++ library are grouped together into modules using the \ingroup Doxygen command.
See: http://www.doxygen.nl/manual/grouping.html for more information on creating modules/groups in Doxygen.
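A minimal sketch of how such grouping looks in source (the group and class names are made up):

/** @defgroup strings Strings
 *  Everything between the @{ ... @} markers is placed in the
 *  "Strings" module of the generated documentation.
 *  @{
 */
class my_string { /* ... */ };
/** @} */

/** @ingroup strings
 *  Members can also be added to the group from elsewhere.
 */
class my_string_view { /* ... */ };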
While the C++ Standards Committee works hard to define its intricate but powerful features and maintain backward compatibility with C, in my personal experience I've found many aspects of programming in C++ cumbersome due to a lack of tools.
For example, I recently tried to refactor some C++ code, replacing many shared_ptr parameters by T& to remove pointer usage where it wasn't needed within a large library. I had to perform almost the whole refactoring manually, as none of the refactoring tools out there would help me do this safely.
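Concretely, the transformation looked roughly like this (Widget is a stand-in for the real types), and every call site had to be adjusted by hand:

#include <memory>

struct Widget { void draw(); };

// Before: the callee participates in ownership it never needed.
void render(std::shared_ptr<Widget> w) { w->draw(); }

// After: a reference documents "non-owning, never null".
void render(Widget& w) { w.draw(); }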
Dealing with STL data structures using the debugger is like raking out the phone number of a stranger when she disagrees.
In your experience, what essential developer tools are lacking in C++?
My dream tool would be a compile-time template debugger. Something that'd let me interactively step through template instantiations and examine the types as they get instantiated, just like the regular debugger does at runtime.
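Until such a tool exists, the closest thing I know of is deliberately provoking compiler errors; a sketch of that well-known trick:

// Declared but never defined: instantiating it is an error whose
// message spells out the type argument for this instantiation.
template <typename T> struct type_displayer;

template <typename T>
void algorithm(T value) {
    type_displayer<T> probe;  // compile error here reveals T
}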
In your experience, what essential developer tools are lacking in C++?
Code completion. Seriously. Refactoring is a nice-to-have feature, but I think code completion is much more fundamental and more important for API discoverability and usability.
Basically, tools that require any understanding of C++ code suck.
Code generation of class methods. When I type in the declaration, you should be able to figure out the definition. And while I'm on the topic, can we fix "go to declaration / go to definition" always going to the declaration?
Refactoring. Yes, I know it's formally impossible because of the preprocessor, but the compiler could still do a better job of a search and replace on a variable name than I can manually. You could also syntax-highlight locals, members and parameters while you're at it.
Lint. So the variable I just defined shadows an outer one? C would have told me that in 1979, but C++ in 2009 apparently prefers me to find out on my own.
Some decent error messages. If I promise never to define a class with the same name inside a method of another class, do you promise to tell me about a missing "}"? In fact, can the compiler have some knowledge of history, so that if I added an unbalanced "{" or "(" to a previously working file, we could consider mentioning this in the message?
Can the STL error messages please (sorry to quote another comment) not look like you read /dev/random, stuck "#!/bin/perl" in front, and then ran the tax code through the result?
How about some warnings for useful things? "Integer used as bool performance warning" is not useful: it doesn't make any performance difference, I don't have a choice (it's what the library does), and you have already told me 50 times.
But if I miss a ";" at the end of a class declaration or a "}" at the end of a method definition, you don't warn me; you go out of your way to find the least likely (but theoretically correct) way to parse the result.
It's like the built-in spell checker in this browser, which happily accepts me misspelling "whether" as "wether" (because that spelling is an archaic term for a castrated male goat; how many times do I write about soprano herbivores?).
How about spell checking? 40 years ago, mainframe Fortran compilers had spell checking, so if you misspelled "WRITE" you didn't come back the next day to a pile of cards and a snotty error message. You got a warning that "WRIET" had been changed to WRITE in line X. Now the compiler happily continues and spends 10 minutes building some massive browse file and debugger output before telling you that you misspelled prinft 10,000 lines ago.
ps. Yes a lot of these only apply to Visual C++.
pps. Yes they are coming with my medication now.
If we're talking about MS Visual C++, Visual Assist is a very handy tool for code completion and some refactorings, e.g. rename all/selected references and find/go to declaration, but I still miss the richness of Java IDEs like JBuilder or IntelliJ.
What I still miss is a semantic diff tool: one which does not compare two files line-by-line, but statement-by-statement/expression-by-expression. What I've found on the internet are only some abandoned attempts; if you know of one, please write a comment.
The main problem with C++ is that it is hard to parse. That's why there are so very few tools out there that work on source code. (And that's also why we're stuck with some of the most horrific error messages in the history of compilers.) The result is that, with very few exceptions (I only know Doxygen and Visual Assist), it's down to the actual compiler to support everything needed to assist us writing and massaging the code. With compilers traditionally being rather streamlined command-line tools, that's a very weak foundation to build rich editor support on.
I've been working with VS for about ten years now; meanwhile, its code completion is almost usable. (Yes, I'm working on dual-core machines. I wouldn't have said this otherwise, would I?) If you use Visual Assist, code completion is actually quite good. Both VS itself and VA come with some basic refactoring nowadays. That, too, is almost usable for the few things it aims at (even though it's still notably less so than code completion). Of course, after >15 years of refactoring with search & replace as the only tool in the box, my demands are probably much too deteriorated compared to other languages, so this might not mean much.
However, what I am really lacking is still this: fully standard-conforming compilers and standard library implementations on all platforms my code is ported to. And I'm saying this >10 years after the release of the last standard and about a year before the release of the next one! (Which just adds this: C++1x being widely adopted by 2011.)
Once those are solved, there are a few things that keep being mentioned now and then, but which vendors, still fighting with compliance to a >10-year-old standard (or, as is actually the case with some features, having even given up on it), never got around to actually tackling:
usable, sensible, comprehensible compiler messages (como is actually pretty good, but only if you compare it to other C++ compilers)
a linker that doesn't just throw up its hands and say "something's wrong, I can't continue" (if you have taught C++ as a first language, you'll know what I mean)
concepts ('nuff said)
an IO stream implementation that doesn't throw away all the compile-time advantages which overloading operator<<() gives us by resorting to calling the run-time-parsing printf() under the hood (Dietmar Kühl once set out to do this, unfortunately his implementation died without the techniques becoming widespread)
STL implementations on all platforms that give rich debugging support (Dinkumware is already pretty good in that)
standard library implementations on all platforms that use every trick in the book to give us stricter checking at compile time and run time and more performance (whatever happened to yasli?)
the ability to debug template meta programs (yes, jalf already mentioned this, but it cannot be said too often)
a compiler that renders tools like lint useless (no need to fear, lint vendors, that's just wishful thinking)
If all these and a lot of others that I have forgotten to mention (feel free to add) are solved, it would be nice to get refactoring support that almost plays in the same league as, say, Java or C#. But only then.
A compiler which tries to optimize the compilation model.
Rather than naively including headers as needed, parsing them again in every compilation unit, why not parse each header once first, build a complete syntax tree for it (which would have to include preprocessor directives, since we don't yet know which macros are defined), and then simply run through that syntax tree whenever the header is included, applying the known #defines to prune it?
It could even be used as a replacement for precompiled headers, so every header could be precompiled individually, just by dumping this syntax tree to disk. We wouldn't need one single monolithic and error-prone precompiled header, and we'd get finer granularity on rebuilds, rebuilding as little as possible even when a header is modified.
Like my other suggestions, this would be a lot of work to implement, but I can't see any fundamental problems rendering it impossible.
It seems like it could dramatically speed up compile times, making them roughly linear in the number of header files rather than in the number of #includes.
A fast and reliable indexer. Most of the fancy features come after this.
A common tool to enforce coding standards.
Take all the common standards and allow you to turn them on/off as appropriate for your project.
Currently, a bunch of Perl scripts usually has to substitute for one.
I'm pretty happy with the state of C++ tools. The only thing I can think of is a default install of Boost in VS/gcc.
Refactoring, Refactoring, Refactoring. And compilation while typing. For refactorings, I am missing at least half of what most modern Java IDEs can do. While Visual Assist X goes a long way, a lot of refactorings are missing. The task of writing C++ code is still pretty much that: writing C++ code. The more the IDE supports high-level refactoring, the more it becomes construction; the more malleable the structure is, the easier it is to iterate over the structure and improve it. Pick up a demo version of IntelliJ and see what you are missing. These are just some that I remember from a couple of years ago:
Extract interface: given a few classes with a common interface, move the common functions into an interface class (for C++ this would be an abstract base class) and declare the designated functions as pure virtual (see the sketch after this list).
Better extract method: mark a section of code and have the IDE write a function that executes that code, constructing the correct parameters and return values.
Know the type of each of the symbols that you are working with, so that not only can code completion be correct for derived values, e.g. symbol->..., but it can also offer only functions that return a type usable in the current expression. E.g. for
UiButton button = window->...
at the ..., only suggest functions that actually return a UiButton.
A tool all on its own: Naming Conventions.
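For the first item above, a sketch of what the result of an "extract interface" refactoring could look like (the names are invented):

// Common operations of several button classes, pulled up into an
// abstract base class by the hypothetical refactoring.
class IButton {
public:
    virtual ~IButton() = default;
    virtual void click() = 0;  // pure virtual: C++'s "abstract"
    virtual void draw() = 0;
};

class UiButton : public IButton {
public:
    void click() override { /* ... */ }
    void draw() override { /* ... */ }
};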
Intelligent Intellisense/Code Completion even for template-heavy code.
When you're inside a function template, of course the compiler can't say anything for sure about the template parameter (at least not without Concepts), but it should be able to make a lot of guesses and estimates. Depending on how the type is used in the function, it should be able to narrow the possible types down; in effect, a kind of conservative, ad-hoc Concepts. If one line in the function calls .Foo() on a template type, obviously a Foo member function must exist, and IntelliSense should suggest it in the rest of the function as well.
It could even look at where the function is invoked from, and use that to determine at least one valid template parameter type, and simply offer Intellisense inside the function based on that.
If the function is called with an int as a template parameter, then obviously use of int must be valid, and so the IDE could use it as a "sample type" inside the function and offer IntelliSense suggestions based on that.
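A sketch of the kind of inference being described, with made-up member names:

template <typename T>
void process(T t) {
    t.Foo();          // implies T has a callable member Foo()
    int n = t.Bar();  // implies T has Bar() returning something
                      // convertible to int
    // At "t." below this point, Foo and Bar are safe completion
    // suggestions, even though T is unconstrained.
    (void)n;
}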
JavaScript just got Intellisense support in VS, which had to overcome a lot of similar problems, so it can be done. Of course, with C++'s level of complexity, it'd be a ridiculous amount of work. But it'd be a nice feature.