Visual C++ features #pragma warning, which among other things allows you to change the warning level of a specific warning. Say warning X has level 4 by default; then after
#pragma warning( 3, X )
it will have level 3 from that point.
I understand why I would temporarily disable a warning, turn a warning into an error, or turn on a warning that is off by default. But why would I change a warning's level?
I mean, the level of a specific warning works together with the compiler's "warning level" option /W, so I currently can't imagine a setup in which I would want to change the level of an individual warning: whether the warning is emitted or not will depend on the project settings.
What are real-life examples of when I really need to change the level of a specific warning in C++?
When you want to run on level 3 or 4, but you want to see/not see a specific message, you can change its level. Sure, the pragma might have no effect if the warning level isn't what you think, but that's life with pragmas.
It is much easier to take the one or two level 4 warnings you want detected and use the pragma to bring them into level 3 than it is to use /W4 and disable (/wd####) all the warnings you don't care about.
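For instance (a sketch; C4706, "assignment within conditional expression", is a level 4 warning by default):

#pragma warning( 3, 4706 ) // C4706 now fires under plain /W3 as well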
An example of where I use this is warning C4996, which I move from being a level 3 warning to level 4. For instance, the VS compiler warns that sprintf may be unsafe and suggests you use the non-portable sprintf_s function instead. You can get lots of these warnings, and rather than wading through all of them to find the real errors, I prefer to just prevent them being issued.
I know I could define a macro for sprintf (and loads of other functions) instead, which would maintain portability, and strictly speaking that should be done; but in a large project, tracking down and fixing all these warnings is some effort, for probably very little return.
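As a sketch, the demotion looks like this (the function is illustrative; under /W3 the sprintf call below now compiles quietly):

#pragma warning( 4, 4996 ) // demote C4996 from level 3 to level 4

#include <cstdio>

void format_answer(char (&buf)[32])
{
    sprintf(buf, "%d", 42); // would emit C4996 under /W3 without the pragma above
}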
In some cases, when you are working with legacy code and a newer compiler has become more sensitive, you may want to turn down the warning level for those modules to avoid thousands of warnings. Also, in some cases when you interact with generated code you would like to turn down the warning level; e.g. ANTLR generates C code that you treat as a black box, so you don't want warnings then (see the sketch below).
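A common way to do that for just the black-box code is to wrap its includes (a sketch; the header name is made up):

#pragma warning( push, 1 )          // compile the generated code as if under /W1
#include "antlr_generated_parser.h" // hypothetical ANTLR output
#pragma warning( pop )              // restore the project's warning level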
My experience is to treat warnings as errors and strive to produce code without warnings. I know that this might seem like a stupid or overly strict approach, but it really does pay off in the long run. Trust me!
Related
We have a large C++ project with warnings as errors enabled. We would like to deprecate some old APIs, and naturally our first thought was to turn to the [[deprecated]] language feature. This however triggers a -Wdeprecated-declarations warning, which is turned into an error and fails the build.
Now, we know we can disable the error for that particular warning via -Wno-error=deprecated-declarations. But still the build log would be full of compiler warnings, making it much harder to spot true compiler errors.
I wonder then if people have better solutions to deal with C++ deprecations in practice, in real-world large projects?
Well, you can't have your cake and eat it too: if you don't want usage of deprecated functions to throw an error (which can be fine), but you don't want to see the warnings either - why deprecate them in the first place?
You can suppress individual warnings by number (see here for a VC++ solution: https://stackoverflow.com/a/7159392/20213170), but the correct way is really just getting rid of the deprecated function calls and updating your API.
This is a naive approach, but couldn't you just do something like this:
#ifdef WARNING_DEPRECATED_ON
// expand to the GCC/Clang deprecation attribute, so call sites warn:
#  define ATT_DEPRECATED __attribute__ ((deprecated))
#else
// expand to nothing, so the deprecation warnings disappear:
#  define ATT_DEPRECATED
#endif
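Usage would then look something like this (old_api is a made-up name; GCC doesn't accept the attribute in this position on a definition, so it goes on a separate declaration):

void old_api() ATT_DEPRECATED; // the declaration carries the attribute
void old_api() { }             // the definition stays clean

int main()
{
    old_api(); // warns (-Wdeprecated-declarations) only when WARNING_DEPRECATED_ON is defined
}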
You can locally have one commit where you deprecate the api, verify that commit doesn't build, then have another commit that removes the uses of the deprecated api.
If you can't push a failing build, at that point you can squash those two local commits into one "remove deprecated api" commit.
But still the build log would be full of compiler warnings, making it much harder to spot true compiler errors.
This isn't completely true. There are a whole bunch of -Wdeprecated-declarations warnings, but any other error fails the build. It's not hard to ctrl-f "error:" in the build log.
Would like to know if a lot of warnings can increase the website build time when using 'task: VSBuild@1' with 'msbuildArgs' as input.
Agree with KrzysztofMadej. A large number of warnings will increase compilation time. When the compiler encounters a warning, it has to do some extra work to locate and analyze the offending code so that it can emit a meaningful warning message in the log.
A scenario we often encounter: when a compiled project references an assembly, a correct build finds the matching assembly directly, but if the assembly is not found, the compiler will search the msbuild/extension and framework folders for the required dll files. These additional operations will undoubtedly increase build time.
Is there any property for 'msbuildArgs' through which we can suppress them?
You could pass /p:WarningLevel=X in msbuildArgs to lower the warning level and suppress those C# warnings (e.g. CS0618):
Warning Level   Meaning
-------------   -----------------------------------------------------------
0               Turns off emission of all warning messages.
1               Displays severe warning messages.
2               Displays level 1 warnings plus certain less-severe warnings,
                such as warnings about hiding class members.
3               Displays level 2 warnings plus certain less-severe warnings,
                such as warnings about expressions that always evaluate to
                true or false.
4 (default)     Displays all level 3 warnings plus informational warnings.
Those MSBuild warnings themselves, however, cannot be suppressed this way.
You could check this thread for some more details.
I would like advice on how to proceed in the following situation.
Imagine I have a large C++ project which works well.
I have a suspicion there might be some UB in this code (because in a different project written by the same author I found UB).
Now, say I need to add new features to this project.
I am afraid because:
if I recompile with a new compiler, this can increase the risk of the UB manifesting if there is UB in the code already (e.g. the new compiler might not be OK with UB which the old compiler was fine with).
Is it realistic to eliminate all UB in this large project by eye inspection (before I move on to adding new features)?
If not, then should I at least compile with the same version of the compiler (to decrease the chance of problems if there is UB)?
The project is done in Visual Studio, so I don't know whether the object files are kept; if they are, I could leave the object files the same and only rebuild the files where I need to add something - thus again minimizing the risk of UB.
What is the course of action in such situation? I think this could be pretty common scenario.
I like the suggestion that I test the project using the new compiler before adding new code, but even then - we know testing might not reveal UB, don't we?
In order, I would:
Compile with -Wall (/W4 for you Windows folk) and fix errors.
Write tests if there aren't any already.
Use tools like valgrind to detect issues and fix them.
Study synchronization primitives if in use, and use modern paradigms where possible.
Document the code and adhere to a style guide.
I would not attempt to avoid problems by keeping object files around. That's a nightmarish maintenance problem.
Undefined Behavior = Bugs
It's impossible to prove that a project is bug-free. Even the best programmers create bugs. Even the best code review cannot eliminate all bugs in a project. No, it's not realistic to eliminate all UB in a project of some size by code inspection or by any other means. Your best option is to review the code and eliminate as many bugs as possible.
Change your perception of UB (bugs): if you encounter a bug during your re-engineering efforts, it's a good thing! You are in the best position to remove it.
Don't keep the old compiler just because you are afraid of UB. Recompile the project with the latest and best compiler available. Compilers can have bugs too. Newer compilers will produce better, more robust code and better warnings. Use all the warnings you can (-Wall).
Eliminate all the warnings that the compiler produces. Every single warning is there for a reason; it highlights a problem. The likelihood of a false positive is quite low nowadays. This is even true for MSVC (I'm not talking about really old compilers, i.e. before VC 2005).
Use a static code checker (Cppcheck). It can point you to common problems with the code.
Use a custom rule set for your code checker. It will help you to get the code up to some standard.
If possible, compile the project with another compiler (GCC, Clang) just for the sake of getting the warnings of these compilers.
Don't link against old object files. This will create more problems than it avoids.
As others said: First and foremost, try to find the errors, not hide them.
The first and simplest measure is to set the warning level to /W4 (you can try /Wall, but due to the large amount of noise this will produce (e.g. from standard header files), it is usually only of help if you know you have an error in a certain part of your code).
Use static analyzers - you can start with the built-in Code Analysis tool and then go for external tools (which are usually much more difficult to set up correctly for a non-trivial project).
Write lots of tests and make sure you are exercising edge cases - that's where UB usually lurks.
If possible, try to compile the project (or parts of it) under clang and activate the different sanitizers (in particular there is UndefinedBehaviorSanitizer), which will further instrument your code to check for UB (only helpful if you have tests that exercise that UB, though) - see the sketch below.
Test your code at different optimization levels and combinations of flags (in VS, especially _ITERATOR_DEBUG_LEVEL can be helpful to find out-of-bounds errors).
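As a minimal sketch of the kind of UB the UndefinedBehaviorSanitizer surfaces (compile with clang++ -fsanitize=undefined; the signed overflow is deliberate):

#include <climits>
#include <cstdio>

int main()
{
    int x = INT_MAX;
    printf("%d\n", x + 1); // signed overflow is UB; UBSan reports file:line at run time
}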
I'd say any non-trivial code base potentially contains undefined behavior. What is special about that particular programmer? If he/she is prone to a specific kind of UB, then you can focus your efforts on that.
I've inherited a large C++ codebase for several Windows applications that is successfully in use by many customers.
The codebase is large, >1mill LOC.
The codebase has a history of 15+ years.
The codebase is in some areas dominated by C programming style and/or not very modern C++ style, e.g. not using Standard C++ collections and algorithms.
The codebase has unfortunately only been compiled with warning level 2 (/W2 in Visual C++). I would like to increase to level 3 (/W3) to increase security and prepare for 64-bit.
The most problematic point in the move to warning level 3 is the many warnings involving signed/unsigned mismatches; I recognize that it will be a very large task to resolve all of those for the existing codebase.
What would be a good approach to ensure and enforce that new code committed to the codebase are compiled with the increased warning level?
In more general terms, the question could be rephrased as: how do you enforce increased programming quality in newly committed code? If you don't do anything, new code has, in my experience, a tendency to be styled similarly to the existing code rather than improved to more modern standards.
I would even go as far as going to warning level 4 (/W4).
Since you're using Visual Studio, it's quite easy to suppress bothersome warnings like the signed vs unsigned comparison:
#pragma warning(disable:NNNN)
Where NNNN is the number of your warning. Now put all those disabled warnings in a header file (say, "tedious_warnings.h") and force-include that header file everywhere - Project Properties -> C/C++ -> Advanced -> Forced Include File.
Later on, or better, ASAP, remove the forced include and work your way through the warnings, since most of them are quite easy to fix (size_t instead of int, etc.).
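A sketch of such a header (the warning numbers shown are the usual signed/unsigned suspects; adjust the list to what your build actually emits):

// tedious_warnings.h -- force-included everywhere; shrink it over time
#pragma once
#pragma warning(disable:4018) // '<' : signed/unsigned mismatch
#pragma warning(disable:4389) // '==' : signed/unsigned mismatch
#pragma warning(disable:4245) // conversion : signed/unsigned mismatch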
Perhaps you could create new code in separate DLLs or Libraries. That way you can enforce your higher warning level (and I would say go for /W4 and be prepared to turn off a few of MS's dafter warnings rather than settle for /W3) without having to wade through 1000s of warnings from the old code.
Then you can start working on cleaning up the old code, a bit at a time, when there is time -- making sure you have suitable unit tests in place to avoid breaking it by accident of course.
you may not like the answer...
remove the warnings by correcting the issues.
i'm very picky about warning levels; yet even i ignore warnings which i don't need to correct, especially when the warning level is high and build times are high. meanwhile, new ones slip in (in large codebases). removing them incrementally doesn't work very well, in my experience -- they tend to get ignored if the noise is too high, or if the policy is not enforced.
you need to reduce the warning noise so people can see the warnings they add (at the warning level you desire).
to reach the compliance level you want/need, make it a priority.
if you don't know whether the conversions/comparisons are valid, you can always use a template function with an error action (assert, throw, log) to perform the logic when in doubt.
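a sketch of such a helper (checked_cast is a made-up name; std::in_range requires C++20, older code can do the range check by hand):

#include <cassert>
#include <utility> // std::in_range (C++20)

// convert between integer types only when the value actually fits
template <typename To, typename From>
To checked_cast(From value)
{
    assert(std::in_range<To>(value)); // fires e.g. for checked_cast<unsigned>(-1)
    return static_cast<To>(value);
}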
it can be a slow/tedious process, but it's also a good way to learn the codebase.
i typically start at the libraries highest in the tree, or those which are reused most often. once a library meets a standard, maintain that standard.
If you are going to make code modifications as a result of the new stricter warning level, write adequate tests that protect against introducing new problems/bugs. Write the tests using the new warning level. Do this before you start to change the codebase and verify correct functionality. Then you can rerun the updated code against the same test case.
I would use an incremental approach.
The first step would be to modify the old files and add the required pragmas to deactivate the warnings in the code.
The second step is to build a commit hook that will refuse any committed file that contains the specific pragma pattern that discards all those "old" warnings.
This means that any modified file should be warning free.
However, let us be frank, developers always find ways to game the system.
My approach has been to go with the highest warning level you can and fix all the warnings that come up - you may even find some bugs in the process.
You should set this up using vsprops files so that all projects are compiled with the same warning level and any changes you make to these settings apply to all projects.
A more incremental approach is to go with the highest warning level you can and then disable almost all warnings, leaving you with only a small number of warnings to consider at once - fix those and then switch on another warning, and so on until you are free of warnings.
How do you debug uninitialized variables in release mode in C++?
There's a warning for this. You should try to always compile cleanly at the highest warning level. For VC++ this is level 4. Turn off specific annoying warnings selectively only.
Also, unless you deliberately uncheck the option, VC++ will compile with /RTCu (or even /RTCsu) which puts in checks to catch uninitialized variables at run-time.
Of course, proper programming style (introduce variables as late as possible) will prevent such errors from happening in the first place.
Generally, rather than debugging uninitialized variables, you want to prevent the very possibility, such as using classes/objects with ctors, so creating one automatically and unavoidably initializes it.
When you do use something like an int, it should generally be initialized as it's created anyway, so uninitialized variables will be pretty obvious from simple inspection (and you generally want to keep your functions small enough that such inspection is easy).
Finally, most decent compilers can warn you about at least quite a few attempts at using variables without initialization. Clearly such warnings should always be enabled. One important point: these often depend on data-flow analysis that's intended primarily for optimization, so many compilers can/will only issue such warnings when you enable at least some degree of optimization.
I don't know about VC++, but for gcc, there is a warning option -Wuninitialized that can be used while compiling. Details: http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
Addendum: -Wuninitialized is included in -Wall (i.e. "warn all"), one of the recommended and most-used warning flags. In addition, having -Werror would fail the compilation whenever any such warning arises.
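As a minimal illustration of what that flag catches (GCC may need optimization enabled, e.g. -O1, for the data-flow analysis to kick in; MSVC reports the same thing as C4700):

#include <cstdio>

int main()
{
    int x;             // never initialized
    printf("%d\n", x); // -Wuninitialized / C4700 fires here; reading x is UB
}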
Uninitialized variables are a nasty bug to find. Some static checkers would probably be able to find your uninitialized variable. There are open source ones. You might be able to get a trial version of commercial version as well.
If you do not have debugger, you need to add logging statements in your code wherever you want to see the values of variables which you suspect uninitialized.
Sometimes, logging statement may lead to crash if passed an uninitialized pointer. So you can catch the bug there itself in this case.
You need to build release binaries with debug symbols. Here is a reference that may be helpful if you are on Visual Studio.
There must be something analogous for other implementations as well.
Use something like Cppcheck (open source) or PC-Lint (commercial) to check for them. They will help find a lot of other errors as well.