The Evolution WG Issues List of 14 February 2004 has ...
EP003. #nomacros. See EI001. Note by
Stroustrup to be written.
In rough (or exact) terms, what is #nomacros, and is it available as an extension anywhere? It would have been a useful diagnostic tool in a recent project involving porting thousands of files of 1995-vintage C++ to a 2005 compiler, compared to the alternative of running the code through the preprocessor and examining the .i files for surprise packages.
It is just a proposal that was under active consideration for inclusion in C++; it is not available in current compilers. If you read further down the page, it says:
ES042. #nomacros.
Provide a preprocessor mechanism for limiting macros entering and exiting a scope. For example:
#nomacros
#in A B
…
#out A X
#endnomacros
No macros are expanded between #nomacros and #endnomacros unless explicitly enabled by #in. No macros defined between #nomacros and #endnomacros will be defined after #endnomacros unless explicitly enabled by #out.
Suggestion by Bjarne Stroustrup. After discussion in the EWG it was decided to look for a solution that allowed macros used by macros allowed in by “#in” to be used in the expansion of such macros only.
#nomacros should nest.
Related
I've been using #include <minmax.h> in my scripts and using min() and max() as expected. I showed this to someone and they had never seen it before, said it wasn't working for them and asked me why I wasn't including <algorithm> and calling std::min() or std::max().
So my question is basically, why aren't I? I found this in a book on C++: "C++ Design Patterns and Derivatives Pricing". Googling "minmax.h", I find a reference to that very book in the top result, which makes me think even more that it's something abnormal.
Is anyone able to tell me what this is?
The C++ programming language is accompanied by the C++ Standard Library. There is no <minmax.h> header in the C++ Standard Library. No standard-library header has the .h extension. Furthermore, the header is not part of the ported C standard library either, as those headers have a c prefix, like <cmath> (which replaces the C standard library's <math.h>) and <ctime> (which replaces <time.h>) when used from the C++ Standard Library.
The std::min and std::max functions are declared inside the <algorithm> header.
That being said, there indeed appears to be some MS header called <minmax.h> inside the C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\ucrt folder which defines min and max macros, not functions. But, that is some implementation specific header, and you should be using the standard <algorithm> header instead.
why aren't I?
People do all sorts of odd things that they heard about somewhere once, be it in school or as some "solution" that fixed their immediate need (usually under timeline pressure). They then keep doing things the same way because they "work". But I'm glad you stopped for a minute to ask. Hopefully we'll steer you back onto the portable C++ route :)
No, there's no need to use the non-standard minmax.h header. On Windows you need to define the NOMINMAX macro before you include any headers whatsoever, and include <algorithm> right after this macro definition. This is just to free the min and max symbols from being taken over by ill-conceived WINAPI macros. In C++, std::min etc. are in the <algorithm> header and that's what you ought to be using. Thus, the following is portable:
#define NOMINMAX
#include <algorithm>
// other includes
#undef NOMINMAX
// your code here
See this answer for details for Windows.
An ancient reference w.r.t. C++, using ancient compilers, supplying examples using non-standard C++ (e.g. headers such as minmax.h)
Note that the book you are mentioning, C++ Design Patterns and Derivatives Pricing (M.S. Joshi), was first released in 2004, with a subsequent second edition released in 2008. As can be seen in the extract below, the examples in the book relied on successful compilation on ancient compiler versions (not so ancient back in 2004, but still far from recent versions).
Appendix D of the book even specifically mentions that the code examples covered by the book may not be standard-compliant, followed by the pragmatic advice that "[...] fixing the problems should not be hard" [emphasis mine]:
The code has been tested under three compilers: MingW 2.95, Borland 5.5, and Visual C++ 6.0. The first two of these are available for free so you should have no trouble finding a compiler that the code works for. In addition, MingW is the Windows port of the GNU compiler, gcc, so the code should work with that compiler too. Visual C++ is not free but is popular in the City and the introductory version is not very expensive. In addition, I have strived to use only ANSI/ISO code so the code should work under any compiler. In any case, it does not use any cutting-edge language features so if it is not compatible with your compiler, fixing the problems should not be hard.
The compiler releases listed above are very old:
Borland 5.5 was released in 2000,
Visual C++ 6.0 was released in 1998,
GCC 2.95 was released in 1999.
As with other compilers of that era, it is not surprising that these compilers supplied non-standard headers such as minmax.h, particularly since it appears to have been a somewhat common non-standard convention, as the following references suggest.
Gnulib Module List - Extra functions based on ANSI C 89: minmax.h, possibly accessible in GCC 2.95,
Known problems in using the Microsoft Visual C++ compiler, version 6.0:
The MS library does not define the min and max algorithms, which should be found in <algorithm>. The workaround we use is to define a new header file, say minmax.h, which we include in any file that uses these functions: [...]
What is the worst real-world macros/pre-processor abuse you've ever come across?:
Real-world? MSVC has macros in minmax.h, called max and min, which cause a compiler error every time I intend to use the standard std::numeric_limits::max() function.
Alternative references for the C++ language
Based on the passage above, the book should most likely be considered primarily a reference for its main domain, quant finance, and not so much for C++, other than the latter being a tool used to cover the former.
For references that are focusing on the C++ language and not its application in a particular applied domain (with emphasis on the latter), consider having a look at:
Stack Overflow C++ FAQ: The Definitive C++ Book Guide and List.
From answers and comments on this question, I understand that getenv is defined by the C++ standard, but setenv is not. And indeed, the following program
#include <cstdlib>
#include <iostream>
int main ( int argc, char **argv )
{
std::cout << std::getenv("PATH") << std::endl; // no errors
std::setenv("PATH", "/home/phydeaux/.local/bin:...", true); // error
}
does not compile for me (clang 3.9).
Why was one of these seemingly complementary functions standardised but not the other?
The C90 standard includes getenv(); therefore, the C++98 standard did too.
When the C standard was originally created, the precedent for environment setting was putenv(); the setenv() function was not devised until later. The standard committee avoided creating new functions when it could, but also avoided standardizing problematic functions when possible (yes, localeconv() and gets() are counter-examples). The behaviour of putenv() is problematic. You have to pass it memory which is not of automatic duration, but you can't know whether you can ever use it again. It's like a forced memory leak. It was A Good Thing™ that putenv() was not standardized.
The rationale for the C standard explicitly says (§7.20.4.5, p163):
A corresponding putenv function was omitted from the Standard, since its utility outside a multi-process environment is questionable, and since its definition is properly the domain of an operating system standard.
Platform-specific APIs step in and provide the missing functionality in a way suitable to them.
The first editions of the POSIX standard (1988 trial use; 1990) did not include setenv() or putenv(). The X/Open Portability Guide (XPG) Issue 1 did include putenv() based on its appearance in the SVID (System V Interface Definition) — which did not include setenv(). The XPG Issue 6 added setenv() and unsetenv() (see the history sections for the functions at the URLs linked to). Curiously, on a Mac running macOS Sierra 10.12.6, man 3 setenv has a history section that identifies:
The functions setenv() and unsetenv() appeared in Version 7 AT&T UNIX. The putenv() function appeared in 4.3BSD-Reno.
This is unexpected and probably erroneous since the UNIX Programmer's Manual Vol 1 (1979) does not include any of putenv(), setenv() or unsetenv(). The putenv() function was added to the AT&T variants of Unix at some stage in the 80s; it was in the SVID and documented by the time SVR4 was released in 1990 and may have been part of System III. I think they almost have the platforms reversed. 4.3BSD-Reno was released in June 1990, after both the first C and POSIX standards were released.
There was some discussion in comments with Random832, now removed, mentioning TUHS – The Unix Heritage Society as a source of information about ancient versions of Unix. The chain included my observation: If nothing else, this discussion emphasizes why the standards committees did well to steer clear of 'setting the environment'! It appears that putenv() was not in 7th Edition UNIX, contrary to my memory. I'm fairly sure it was available in a system I used from 1983, which was a lot of 7th Edition with some material from System III, some from PWB. It is a part of SVR4 (I've a manual for that), and was defined in some version of the SVID (probably before SVR4).
The C rationale also mentions concerns about gets() but included it despite those concerns; it was (very sensibly) removed from C11, of course (but POSIX still refers to C99, not C11).
setenv is not possible in some of the original environments C was defined for.
getenv allows you to see your environment. Creating a new process with exec[lv][p][e] allows you to create a child with an inherited or new environment.
However, setenv would modify the state of the calling process, which wasn't always possible.
I guess it is because it increases the writable interface for the caller, and was not needed originally, and is a security risk these days.
I find the in operator to be somewhat confusing in its implementation. It appears that this is due to the history of its implementation. For instance, according to the sascommunity.org wiki,
You may remember that "in" was not initially well received, so it was
disabled in V9.13.
This implies that a different implementation existed at one time.
Some questions I have are:
Was in implemented differently, in macro and non-macro contexts?
What was used prior to the in operator in macros which prompted the creation of the macro 'in'?
Was in not implemented in macros because the macro facility wasn't originally a part of SAS?
Was the in operator implemented differently in a previous version of SAS (pre-9.4)? If so, how did its implementation then differ from the current approach?
SAS's idiosyncrasies often appear dictated more by historical happenstance than through objective reasoning or design. It seems to me that having such historical knowledge would assist in understanding the SAS language and systems.
Here's what I've found, with a bit of digging:
No differences as far as I'm aware, except for the macro version not requiring quotes.
People wrote their own macros to do the job. %sysfunc(indexw(list to search,word)) seems to have been a popular implementation.
I have no idea why this particular operator was initially left out of the macro language. Plenty of other operators and their mnemonics have worked perfectly well in both macro and non-macro contexts without any bother ever since the macro language was released. You would need to ask the original macro language developers.
As far as I can tell, the history is as follows:
Pre-9.0: in and # were not implemented in the SAS macro language. Users might have written their own %in macros.
In 9.0 in and # were implemented in the SAS macro language, without any option to disable them. In some cases this could have changed the behaviour of existing user-defined macros when handling strings that contained these operators - I suspect this is why this new feature was 'not initially well received'.
In 9.1.2 and 9.1.3, in and # were completely removed from the macro language (presumably this time upsetting people who wrote macros after this functionality was introduced in 9.0...).
In 9.2+, they were re-implemented, disabled by default, and we got the minoperator and mindelimiter options to control their behaviour.
In some future version (9.5 or higher) we might get a %in macro operator, as hinted at by the note displayed in SAS 9.4 when executing a (user-defined) macro named %in:
NOTE: %IN will become a reserved keyword of the SAS Macro Language in a future release of the SAS System. Changing the name of
this macro will avoid future conflicts.
I am very new to C++ and programming in general and am currently working through Bjarne Stroustrup's Programming: Principles and Practices using C++. I'm consistently receiving the error below
Error C2338: <hash_map> is deprecated and will be REMOVED. Please use <unordered_map>. You can define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS to acknowledge that you have received this warning.
I understand that the header file std_lib_facilities.h uses some sort of deprecated feature, but is there a way to bypass this? It looks like it wants me to define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS but I'm unsure of how to do that. Any help would be appreciated!!
The warning isn't about "some function" - it's about the whole of stdext. And it's not just hand-wavy, to be discontinued eventually, deprecated: it doesn't ship with 2015.
During the early 00's work was afoot to revise the C++ standard; different compiler vendors, Microsoft included, put proposals before the committee along with prototypes. So they could be tested and evaluated, Microsoft placed implementations of their proposed extensions in stdext.
Eventually the committee chose what they were going to incorporate into that revision and released a Technical Report ("TR1"). Anticipating completion before the end of 2009, the coming revision was referred to as "C++0x", and compiler vendors began implementing these features in the tr1 namespace. Finally in 2011 the standard was finalized and we got "C++11" with all its bits and pieces back in std where they belong.
According to Microsoft's proposal, the container would be std::hash_map, but the C++ committee chose to use the term unordered_map. std::map is an ordered container, stdext::hash_map, despite the name, is not.
Microsoft's compiler has been the slowest at getting full C++11 support finished, and the standards committee has since finished a second revision (C++14) and is working on a third (C++17). Microsoft is just about finishing C++11 in VS2015, along with a big chunk of C++14, with a few significant exceptions that are apparently going to be a major problem for the VS compiler (especially constexpr and variable templates).
Visual Studio 2015 does not provide stdext - it's gone. This is not one of those "well, it may eventually go away" cases.
stdext is specific to the Microsoft family of compilers, so writing code using stdext:: anything is not portable: http://ideone.com/x8GsKY
The standardized version of the feature you're wanting is std::unordered_map, you should use that. It's essentially the same thing.
There are unresolved bugs in stdext::hash_map.
If you really have to use stdext::hash_map, silence the warning by adding
#define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS
at the top of the stdafx.h I assume your project has, or in your header files before you #include <stdext/...>, or in the solution explorer:
Right click on your project's entry in solution explorer,
Select Properties,
Select Configuration: All Configurations,
Expand the C/C++ tree entry,
Select Preprocessor,
The "Preprocessor Definitions" will probably say <different options>
At the beginning of the "Preprocessor Definitions" entry add _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS=1; so it reads _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS=1;<different options>.
(or whatever was there originally should follow the ;)
You can put the define prior to your including of the header generating the warning:
#define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS
#include <hash_map>
You can also add the symbol in the Preprocessor Definitions of the project file.
The latter looks prettier, but given that you're doing something against the suggestion of the tool makers, I'd go with the first method, so you don't forget that you might get burned later.
It seems that you used old "std_lib_facilities.h" header (stroustrup.com/Programming/std_lib_facilities.h).
A new version of this header, which works flawlessly for the "hello, world" program in MSVS 2015, is available at
stroustrup.com/Programming/PPP2code/std_lib_facilities.h
I found this out when I had the same problem while studying PPP.
The nonstandard #pragma once feature is implemented on practically all C++ compilers, but the C++ standard excludes it.
The usual explanation of why #pragma once, or some language construct that does what #pragma once does, has been excluded from the C++ standard is that hard links and copied header files either break #pragma once or provoke the compiler to heuristics. Fair enough, heuristics are normally incompatible with the C++ philosophy anyway, but regarding plain breakage: there are many useful language features you can break, not only #pragma once. The normal C++ way to manage such breakage is to let the compiler issue an optional warning in doubtful instances. After all, C++ is purposely designed to let one program unsafely and/or unportably when one wishes to do so. Besides, the unsafety and/or unportability of #pragma once is pretty minimal. It just isn't that easy to abuse.
Why is #pragma once excluded from the standard when other abusable but useful language features are typically included? Is there something special about #pragma once?
Also, where can one read the recent deliberations of the standards committee in the matter? Has some committee member, or committee follower, published a recent summary of the debate?
There are a few simple reasons:
It is harder than generally assumed to implement and specify it. The argument that it is implemented doesn't hold much water as the implementations generally do not deal with approaches to subvert the feature.
Committee time is much more reasonably spent on working on modules which make most of the preprocessor unnecessary than trying to improve something we want to get rid of.
There is a simple workaround (include guards) for the absence of #pragma once, i.e., it isn't considered a problem.
It seems existing implementations actually do behave differently, which seems to be the root of one of the recent discussions. Of course, this means that standardization would be good, but then it immediately starts conflicting with 2., and the discussions won't be simple because different parties would want their respective behavior to be retained.
I didn't do a too-thorough search, but I didn't see a proposal either: if nobody writes a proposal [and lobbies it through the process], nothing will be standardized. That said, I'd fully expect the reasons given above to command a sufficient majority for a proposal to add #pragma once to be stopped quite quickly.
There was a recent discussion on the proposals mailing list (see isocpp.org for how to sign up; I can't get to this site at the moment, though). I didn't follow it too thoroughly, though. Quickly browsing over it I saw the four reasons given above (the fourth I added after browsing).
Here are some references from the recent mailing list discussion:
Is #pragma once part of the standard?
Why isn't C/C++s #pragma once standard?
modules proposal
From my understanding, #pragma once is an implementation specific instance of the standard #pragma directive as described in Section §16.6 of the Standard (draft):
16.6 Pragma directive [cpp.pragma]
A preprocessing directive of the form

# pragma pp-tokens opt new-line

causes the implementation to behave in an implementation-defined manner. The behavior might cause translation to fail or cause the translator or the resulting program to behave in a non-conforming manner. Any pragma that is not recognized by the implementation is ignored.
Having pragma once standardized would introduce quite a bit of complexity.
Also have a look here: https://stackoverflow.com/a/1696194/2741329