Recently, while porting some STL code to VS2008, I wanted to disable the warnings generated by std::copy by defining the new _SCL_SECURE_NO_WARNINGS flag. You can do this in two ways:
Using the /D compiler switch, which can be specified in the project properties (see the example command line below). You need to ensure it is defined for both Release and Debug builds, which I often forget to do.
By defining it macro style before you include the relevant STL headers, or, for total coverage, in stdafx.h:
#define _SCL_SECURE_NO_WARNINGS
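(For comparison, the first option amounts to something like the following on the raw compiler command line; the file name is illustrative:)
cl /EHsc /D_SCL_SECURE_NO_WARNINGS main.cpp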
Both of these methods work fine, but is there any argument for favouring one over the other?
The /D option is generally used when you want to define it differently on different builds (so it can be changed in the makefile).
If you will "always" want it set the same way, use #define.
By putting them in your project file, you maintain a close association between the platform-specific warnings and the platform, which seems correct to me.
If they're in the code, they're always in the code whether or not it's appropriate for the platform. You don't need it for GCC or possibly future versions of Visual C++. On the other hand, by having it in the code, it's more obvious that it's there at all. If you move (copy) the code, it'll be easier to remember to move that define with it.
Pros and Cons each way. YMMV.
If you have a header that is included by all the others (like stdafx.h), you should put the define there. The compiler command-line switch is usually used for build options that are not always set, like NDEBUG, UNICODE, and such, whereas your macro would essentially always be set.
That might sound arbitrary, and indeed others may disagree. In the end, though, you have to decide what fits your situation.
If you do put them in your code, remember to ifdef them properly:
#ifdef _MSC_VER
#define _SCL_SECURE_NO_WARNINGS // only meaningful for Microsoft's compiler and STL
#endif
This will keep your code portable.
In general I prefer putting #define's in the code as opposed to using the /D compiler switch for most things because it seems to be more intuitive to look for a #define than to check compiler settings.
/D isn't a valid flag for msbuild.exe (at least in the version I'm using, v2.0.50727).
The way this is done is:
/p:DefineConstants="MY_MACRO1;MY_MACRO2"
The result of doing this is:
Target CoreCompile:
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Csc.exe /define:MY_MACRO1;MY_MACRO2 ...
Related
This blog page mentions that Visual Studio removes some std features:
https://blogs.msdn.microsoft.com/vcblog/2017/12/08/c17-feature-removals-and-deprecations/
I have a project that consumes some C++ libraries that now use C++17 features. The project also consumes a third-party library, websocketpp (https://github.com/zaphoyd/websocketpp), that still uses some now-removed features, e.g. auto_ptr and binary_function. I'm getting compiler errors saying they are not members of 'std'.
The blog above mentions that removed features can be restored using fine-grained controls. I'm thinking I could use that to get this project to compile for now. Longer term, I will see about upgrading websocketpp to C++17 or replacing it with something else.
But, what is the magic to restore features? Is there something I need to #define? If so, what?
In VS2017 v15.5 these features are conditionally excluded, based on the project's /std:c++17 setting. You can force them back in by forcing the underlying macro value. There are two basic ways to do this:
Project > Properties > C/C++ > Preprocessor > Preprocessor Definitions and add _HAS_AUTO_PTR_ETC=1. Do so for all configurations and platforms.
If you use a precompiled header then you probably favor defining the macro there. Before any #includes, insert #define _HAS_AUTO_PTR_ETC 1.
Beware of the "ETC": you'll also slurp in the deprecated random_shuffle() and unary_function<>. Predicting the future is difficult, but this is probably going to keep working for a while to come.
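A minimal sketch of the precompiled-header approach (assuming a stdafx.h that every translation unit includes first):
// stdafx.h -- the define must precede any standard library header
#define _HAS_AUTO_PTR_ETC 1
#include <memory>      // std::auto_ptr is visible again
#include <functional>  // std::binary_function and std::unary_function too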
Which preprocessor define should be used to specify debug sections of code?
Use #ifdef _DEBUG or #ifndef NDEBUG or is there a better way to do it, e.g. #define MY_DEBUG?
I think _DEBUG is Visual Studio specific; is NDEBUG standard?
Visual Studio defines _DEBUG when you specify the /MTd or /MDd option, while NDEBUG disables standard C assertions. Use them where appropriate, i.e. _DEBUG if you want your debugging code to be consistent with the MS CRT debugging techniques, and NDEBUG if you want to be consistent with assert().
If you define your own debugging macros (and you don't hack the compiler or C runtime), avoid starting names with an underscore, as these are reserved.
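To illustrate the difference (a minimal sketch; the function and message are made up for the example):
#include <cassert>
#include <cstdio>

void check(int* p) {
#ifdef _DEBUG
    // only defined in MSVC CRT debug builds (/MTd or /MDd)
    std::printf("debug CRT build\n");
#endif
#ifndef NDEBUG
    // active whenever assert() is active
    assert(p != nullptr);
#endif
}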
Is NDEBUG standard?
Yes, it is a standard macro with the meaning "Not Debug" in the C89, C99, C++98, C++2003, C++2011, and C++2014 standards. There is no _DEBUG macro in any of these standards.
The C++2003 standard (page 326, section 17.4.2.1 "Headers") sends the reader to standard C, noting that the meaning of NDEBUG "is the same as the Standard C library."
In C89 (the standard C programmers refer to as "standard C"), the "4.2 DIAGNOSTICS" section (https://port70.net/~nsz/c/c89/c89-draft.html) says:
If NDEBUG is defined as a macro name at the point in the source file
where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
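In other words, defining NDEBUG before the include compiles the assertion away entirely (a minimal illustration):
#define NDEBUG        // must come before <assert.h> / <cassert>
#include <cassert>

int main() {
    assert(1 == 2);   // expands to ((void)0): no check, no abort
    return 0;
}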
If you look at the meaning of the _DEBUG macro in Visual Studio (https://learn.microsoft.com/en-us/cpp/preprocessor/predefined-macros), you will see that it is defined automatically by your choice of runtime library version.
I rely on NDEBUG, because it's the only one whose behavior is standardized across compilers and implementations (see documentation for the standard assert macro). The negative logic is a small readability speedbump, but it's a common idiom you can quickly adapt to.
To rely on something like _DEBUG would be to rely on an implementation detail of a particular compiler and library implementation. Other compilers may or may not choose the same convention.
The third option is to define your own macro for your project, which is quite reasonable. Having your own macro gives you portability across implementations and it allows you to enable or disable your debugging code independently of the assertions. Though, in general, I advise against having different classes of debugging information that are enabled at compile time, as it causes an increase in the number of configurations you have to build (and test) for arguably small benefit.
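Such a project-specific macro might look like this (MYPROJECT_DEBUG is a hypothetical name, set from the build system with /D or -D):
#ifdef MYPROJECT_DEBUG
// project-specific diagnostics, independent of assert()/NDEBUG
#endif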
With any of these options, if you use third party code as part of your project, you'll have to be aware of which convention it uses.
The macro NDEBUG controls whether assert() statements are active or not.
In my view, that is separate from any other debugging - so I use something other than NDEBUG to control debugging information in the program. What I use varies, depending on the framework I'm working with; different systems have different enabling macros, and I use whatever is appropriate.
If there is no framework, I'd use a name without a leading underscore; those tend to be reserved to 'the implementation' and I try to avoid problems with name collisions - doubly so when the name is a macro.
Be consistent, and it doesn't matter which one. Also, if for some reason you must interoperate with another program or tool that uses a certain DEBUG identifier, it's easy to bridge:
#ifdef THEIRDEBUG
#define MYDEBUG
#endif //and vice-versa
Unfortunately, "DEBUG" is heavily overloaded. For instance, it's recommended to always generate and save a PDB file even for release builds, which means using one of the /Z compiler flags (/Z7, /Zi, or /ZI) and the /DEBUG linker option. Meanwhile, _DEBUG relates to the special debug versions of the runtime library, such as the debug versions of malloc and free, and NDEBUG disables standard assertions.
Despite the name, NDEBUG has nothing to do with whether you are creating a debug build or not; it controls whether assertions (assert()) are active. I would not base anything else on it: you may occasionally want debug builds without assertions, or release builds with assertions, and then you must set NDEBUG accordingly, but that doesn't mean you also want all your other code to switch between debug and release behaviour.
From the perspective of the compiler, there is no such thing as a debug build. You tell the compiler to build code with a specific set of settings; if you want different settings for different kinds of builds, that is a convention you made up yourself, and the compiler knows nothing about it. You may actually have 50 different build styles, not just release and debug (profile, test, deploy, etc.), so it's up to you how these styles are identified in your own code. If you need preprocessor tags for them, you define how they are named, and the same naming rules apply as for everything else you define in your code.
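For example (the style names here are hypothetical; each would be passed by the build system, e.g. with -DBUILD_STYLE_PROFILE):
#if defined(BUILD_STYLE_PROFILE)
// profiling instrumentation
#elif defined(BUILD_STYLE_DEPLOY)
// deployment-only code paths
#endif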
MSVC defines _DEBUG in debug mode, gcc defines NDEBUG in release mode. What macro can I use in clang to detect whether the code is being compiled for release or debug?
If you look at the project settings of your IDE, you will see that those macros are actually manually defined there; they are not automatically defined by the compiler. In fact, there is no way for the compiler to know whether it's building a "debug" or a "release" build; it just builds according to the flags provided by the user (or the IDE).
You have to make your own macros and define them manually, just like the IDE does for you when creating the projects.
Compilers don't define those macros. Your IDE/Makefile/<insert build system here> does. This doesn't depend on the compiler, but on the environment/build helper program you use.
The convention is to define the DEBUG macro in debug mode and the NDEBUG macro in release mode.
You can use the __OPTIMIZE__ macro to determine whether optimization is taking place. That generally means it is not a debug build, since optimizations often rearrange the code sequence, and trying to step through optimized code can be confusing.
This is probably what most people interested in this question are really trying to figure out.
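A minimal sketch (GCC and Clang define __OPTIMIZE__ whenever optimizations are enabled):
#ifdef __OPTIMIZE__
// optimizations on: probably a release-style build
#else
// optimizations off: probably a debug-style build
#endif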
There is no such thing as a debug mode in a command-line compiler. That is an IDE thing: it just sets up some options to be sent to the compiler.
If you use clang from the command line, you can use whatever you want. The same is true for gcc, so if you use NDEBUG with gcc, you can use exactly the same with clang.
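For example (illustrative command lines; DEBUG here is a macro you define yourself, not one clang provides):
clang++ -O0 -g -DDEBUG main.cpp
clang++ -O2 -DNDEBUG main.cpp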
How can I disable all MSVC warnings that come from the boost library?
I know I can disable specific warnings where they occur, but that clutters my code, and if I use boost macros they don't seem to work. I would like a simple way to tell my compiler not to give me warnings about boost. Is this possible?
On a secondary note, I'm a bit surprised that the boost library doesn't disable all these warnings internally so that we users can use it "out of the box".
They try extremely hard to avoid warnings, but some compilers warn for code that is formally correct, just a bit "suspicious". If you change the code to silence the warning, another compiler might warn for that code!
There is a warning policy for Boost code and various compilers
https://svn.boost.org/trac/boost/wiki/Guidelines/WarningsGuidelines
They are also particularly careful not to disable warnings, because you might have some parts of your code where the warning is actually correct. If Boost disables the warning, you might not find the errors in your code!
You can disable the warnings for all projects by changing the default property pages:
1. Open any project.
2. Click on View > Property Manager.
3. In the Property Manager (probably along the left bar), expand the project, then expand one of the profiles, then double-click on one of the categories that all your projects will be using: Microsoft.Cpp.Win32.user, Application, or perhaps Core Windows Libraries.
4. This brings up the Properties page, but for all code that you write or will write. Set the appropriate preprocessor definitions and disable /wp64 or whatever you need to do for an individual project.
Since it's probably not desirable to disable those warnings for all projects, it seems like you could disable warnings in visual_c.hpp as described here: Boost warnings with VC++ 9 . But then you'll have to make the change every time you update your libraries.
The first thing that comes to mind is to create a special header file in which to put all Boost #includes, and to surround those #includes with #pragma warning blocks:
#pragma warning(push, 0)    // save the current warning state and lower the warning level to 0
#include <boost/bimap.hpp>
#include <boost/function.hpp>
#pragma warning(pop)        // restore the saved warning state
The disadvantage of this approach is some compile-time inefficiency.
I'm trying to build a small codebase that works across multiple platforms and compilers. I use assertions, most of which can be turned off, but when compiling with PGI's pgicpp using -mp for OpenMP support, it automatically uses the --no_exceptions option: every "throw" statement in my code then generates a fatal compiler error ("support for exception handling is disabled").
Is there a defined macro I can test to hide the throw statements on PGI? I usually work with gcc, which has GCC_VERSION and the like. I can't find any documentation describing these macros in PGI.
Take a look at the Pre-defined C/C++ Compiler Macros project on Sourceforge.
PGI's compiler has a __PGI macro.
Also, take a look at libnuwen's compiler.hh header for a decent way to 'normalize' compiler versioning macros.
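So one way to hide the throw statements is to branch on __PGI (a sketch; the fallback error handling you substitute is up to you):
#include <cstdio>
#include <cstdlib>
#include <stdexcept>

void fail(const char* msg) {
#ifdef __PGI
    // PGI with -mp: exceptions are disabled, so report and abort instead
    std::fprintf(stderr, "fatal: %s\n", msg);
    std::abort();
#else
    throw std::runtime_error(msg);
#endif
}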
You could try this to see what macros are predefined by the compiler:
pgcc -dM
Maybe that will reveal a suitable macro you can use.
Have you looked at the boost headers? Supposing they support PGI, they will have found a way to detect it, and you could use that. I would start searching somewhere in boost/config.