What is the difference between <cstdint> and <tr1/cstdint>? (apart from the fact that one puts things in namespace std:: and the other in std::tr1::)
Since this stuff isn't standard yet, I guess it's compiler specific, so I'm talking about gcc. To compile with the non-tr1 one I must compile with -std=c++0x, but there is no such restriction when using the tr1 one.
Is the answer perhaps that there is none, but you can't go around adding things to std:: unless they're, well, standard? So until C++0x is standardised, using <cstdint> must produce an error, but you don't need to worry when adding to the tr1:: namespace, which makes no claim that its contents are standard? Or is there more to this?
Thanks.
p.s - If you read "std" as standard, as I do, I do apologise for the overuse of the word in this Q.
At least as far as I know, there was no intent to change <cstdint> between TR1 and C++0x. There's no requirement for #include-ing <cstdint> to result in an error though -- officially, it's nothing more or less than undefined behavior. An implementation is allowed to specify exact behavior, and in this case it does.
I think you've got it. On my system, they're very similar, but with different macro logic. For instance, /usr/include/c++/4.4/tr1/cstdint has:
# define _GLIBCXX_BEGIN_NAMESPACE_TR1 namespace tr1 {
# define _GLIBCXX_END_NAMESPACE_TR1 }
# define _GLIBCXX_TR1 tr1::
but /usr/include/c++/4.4/cstdint has:
# define _GLIBCXX_BEGIN_NAMESPACE_TR1
# define _GLIBCXX_END_NAMESPACE_TR1
# define _GLIBCXX_TR1
So if it's being included as <cstdint> the TR1 namespace is simply defined into oblivion.
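A minimal sketch of the visible difference (behavior as with GCC 4.x; the commented-out lines are the -std=c++0x variant):

#include <tr1/cstdint>   // compiles as plain C++03
//#include <cstdint>     // would require g++ -std=c++0x

int main()
{
    std::tr1::uint32_t a = 42;   // name from <tr1/cstdint>
    //std::uint32_t b = 42;      // name from <cstdint>
    return a == 42 ? 0 : 1;
}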
<tr1/cstdint> is defined, as the name suggests, in TR1, while <cstdint> is defined in C++0x.
From the gcc manual, -std=c++0x is needed to enable experimental features that are likely to be included in C++0x. However, <tr1/cstdint> is defined in TR1, not C++0x, so -std=c++0x is not needed.
The following is the gcc manual's entry for -std=c++0x, for your reference.
The working draft of the upcoming ISO C++0x standard. This option enables experimental features that are likely to be included in C++0x. The working draft is constantly changing, and any feature that is enabled by this flag may be removed from future versions of GCC if it is not part of the C++0x standard.
Related
In what cases should we include cassert?
In short, don't use it; use <assert.h>.
C++11 removed any formal guarantee of a "c...." header not polluting the global namespace.
It was never an in-practice guarantee, and now it's not even a formal guarantee.
Hence, with C++11 there is no longer any conceivable advantage in using the "c...." header variants, while there is the distinct and clear disadvantage that code that works well with one compiler and version of that compiler, may fail to compile with another compiler or version, due to e.g. name collisions or different overload selection in the global namespace.
So, while cassert was pretty meaningless in C++03 (you can't put a macro in a namespace), it is totally meaningless -- even as a special case of a general scheme -- in C++11.
Addendum, Dec 22 2013:
The standard defines each C++ C header <X.h> in terms of the <cX> header, which in turn is defined in terms of the corresponding C library header.
C++11 §D.5/2:
“Every C header, each of which has a name of the form name.h, behaves as if each name placed in the standard library namespace by the corresponding cname header is placed within the global namespace scope.”
C++11 §D.5/3 (non-normative example):
“The header <cstdlib> assuredly provides its declarations and definitions within the namespace std. It may also provide these names within the global namespace. The header <stdlib.h> assuredly provides the same declarations and definitions within the global namespace, much as in the C Standard. It may also provide these names within the namespace std.”
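To make the quoted guarantees concrete, here is a small illustration (only the "assuredly" parts are portable; the "may also" parts depend on the implementation):

#include <cstdlib>    // guarantees std::abs; *may* also provide ::abs
#include <stdlib.h>   // guarantees ::abs; *may* also provide std::abs

int main()
{
    int i = std::abs(-1);   // portable thanks to <cstdlib>
    int j = abs(-2);        // portable thanks to <stdlib.h>
    return i + j - 3;       // returns 0
}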
Stack Overflow user C.R.’s comment made me aware that some versions of g++, such as MinGW g++ 4.7.2, are quite non-standard with respect to the <X.h> headers, lacking the overloads of e.g. sin that the C++ standard requires:
I already knew that MinGW g++ 4.7.2 also entirely lacks functions such as swprintf, and that it has ditto shortcomings in the pure C++ library such as lacking C++11 std::to_string. However, the information about it lacking the C function overloads was new to me.
In practice the lacking overloads with g++ mean

- ignoring the g++ issue, or
- avoiding using the missing g++ overloads, e.g. using only double sin( double ), or
- using the std namespace overloads (one then needs to include <cmath> to guarantee their presence with g++).
In order to use the g++ std namespace overloads unqualified, one practical approach is to define header wrappers for this compiler. I've used that approach to address g++ shortcomings with respect to the printf family. For as David Wheeler once remarked, “All problems in computer science can be solved by another level of indirection”…
Then things can be arranged so that standard code that uses g++'s missing overloads, also compiles with g++. This adjusts the compiler to the standard, with a fixed amount of code.
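For illustration, such a wrapper header might look like the following sketch (the file name and the selection of functions are mine, not part of any standard scheme):

// wrapped_math.h -- hypothetical wrapper adjusting g++ to the standard.
#pragma once
#include <cmath>   // guarantees the std namespace overloads with g++

// Hoist the overload sets into the global namespace so that code
// written against a conforming <math.h> compiles unchanged.
using std::sin;
using std::cos;
using std::pow;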
Just like any other header file, you #include <cassert> when you use something declared in that header file, such as assert().
See an easily accessible reference
#include <iostream>
// uncomment to disable assert()
// #define NDEBUG
#include <cassert>
int main()
{
    assert(2+2==4);
    std::cout << "Execution continues past the first assert\n";
    assert(2+2==5);
    std::cout << "Execution continues past the second assert\n";
}
assert.h defines one macro, assert, that can be used as a standard debugging tool.
Are there any predefined macros for C++ in order the code could identify the standard?
e.g. currently most compilers put "array" into the "tr1" folder, but in C++11 it would be part of the standard library. So currently
#include <tr1/array>
but in C++11
#include <array>
What are the predefined macros for the 03 standard and the 11 standard, so that I can use #ifdef to tell them apart?
Also, I suppose there are macros for C90 and C99?
Thanks.
From Stroustrup's C++11 FAQ
In C++11 the macro __cplusplus will be set to a value that differs from (is greater than) the current 199711L.
You can likely test its value to determine whether it's C++0x or not, then.
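For example, a dispatch on __cplusplus might look like the sketch below. (Caveat: some compilers of the era get this wrong; gcc before 4.7 defined __cplusplus as 1 regardless of mode, so treat this as a best-effort check.)

#if __cplusplus > 199711L   // 199711L is the C++03 value
    #include <array>
    typedef std::array<int, 3> triple;
#else
    #include <tr1/array>
    typedef std::tr1::array<int, 3> triple;
#endif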
Nitpick...
Your particular issue does not depend on your compiler, it depends on the Standard Library implementation.
Since you are free to pick a different Standard Library than the one provided by your compiler (for example, trying out libc++ or STLport), no amount of compiler specific information will help you here.
Your best bet is therefore to create a specific header file yourself, in which you will choose either one or the other (depending on a build option).
// array.hpp
#ifdef STD_HAS_TR1_ARRAY_HEADER
#include <tr1/array>
#else
#include <array>
#endif
You then document the compiler option:
Passing -DSTD_HAS_TR1_ARRAY_HEADER will mean that std::tr1::array is defined in <tr1/array> instead of the default <array>.
And you're done.
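If you also want client code to be agnostic about the namespace, the same wrapper can add a using-declaration (stdx is a made-up name, purely for illustration):

// array.hpp, continued
#ifdef STD_HAS_TR1_ARRAY_HEADER
namespace stdx { using std::tr1::array; }
#else
namespace stdx { using std::array; }
#endif

Client code can then write stdx::array<int, 3> throughout, whichever header was selected.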
From the draft N3242:
16.8 Predefined macro names [cpp.predefined]
...
The name __cplusplus is defined to the value 201103L when compiling a C++ translation unit. 155)
...
155) It is intended that future versions of this standard will replace the value of this macro with a greater value. Non-conforming compilers should use a value with at most five decimal digits.
Ultimately, you're going to have to use compiler-specific information. At least, until C++0x becomes more widely implemented. You basically need to pick compiler versions that implement something and test compiler-specific macros on them.
The Boost.Config library has a number of macros that can help you.
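For instance, something along these lines (the macro name is the one used by Boost.Config releases of that era; check it against your Boost version):

#include <boost/config.hpp>

// BOOST_NO_0X_HDR_ARRAY is defined when the C++0x <array> header is missing.
#ifdef BOOST_NO_0X_HDR_ARRAY
    #include <tr1/array>
#else
    #include <array>
#endif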
I tried to compile this:
enum class conditional_operator { plus, or, not };
But apparently GCC (4.6) thinks these are special, while I can't find a standard that says they are (neither C++0x n3290 nor C99 n2794). I'm compiling with g++ -pedantic -std=c++0x. Is this a compiler convenience? How do I turn it off? Shouldn't -std=c++0x turn this "feature" off?
PS: Hmmm, apparently, MarkDown code formatting thinks so too...
Look at 2.5. They are alternative tokens for || and !.
There are a bunch of other alternative tokens, BTW.
Edit: The rationale for their inclusion is the same as the one for trigraphs: allow the use of non-ASCII character sets. The committee has tried to get rid of them (at least of trigraphs, I don't remember for alternative tokens), and has met opposition from people (mostly IBM mainframe users) who are using them.
Edit for completeness: as others have remarked, plus isn't in that class and should not be a problem unless you are using namespace std.
These are actually defined as alternative tokens (and reserved) oddly enough, as alternative representations for operators. I believe this was originally to aid people who were using keyboards which made the relevant symbols hard to produce, although this seems a pretty poor reason to add extra keywords to the language :(
There may be a GCC compiler option to disable them, but I'm not sure.
(As mentioned in comments, plus should be okay unless you're using the std namespace.)
or and not are alternative representations of || and ! respectively. You can't turn them off and you can't use these tokens for anything else, they are part of the language (current C++, not even just C++0x). ( See ISO/IEC 14882:2003 2.5 [lex.digraph] and 2.11 [lex.key] / 2. )
You should be safe with plus unless you use using namespace std; or using std::plus;.
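A quick demonstration (plain standard C++, no special flags needed):

#include <iostream>

int main()
{
    bool a = true, b = false;
    // The next two expressions are token-for-token equivalent:
    std::cout << (a || !b) << '\n';     // prints 1
    std::cout << (a or not b) << '\n';  // prints 1
}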
The Standard lists keywords in 2.11. There's also a list of alternative representations separate from the keyword list that is reserved and can't be used otherwise, but aren't keywords. and and or are on that list. Section 17.4.3 describes restrictions on programs that use libraries, and 17.4.3.1.3 describes that names declared with external linkage in a header are reserved both in std:: and the global namespace.
In other words, you don't have to go to C++0x to have those problems. and and or are already reserved, and header <functional> contains plus as a templated struct type, and plus is therefore off-limits if <functional> is directly or indirectly #included.
I'm not sure dumping that much stuff into the global namespace was really wise, but that's what the standard says.
It is a 1995 amendment to the C90 standard, so a compiler may probably choose how to behave on this. GCC probably includes the header as part of the standard library; with Microsoft it doesn't, and you have to include <iso646.h> yourself.
Here is a link to the Wikipedia article regarding this.
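A portability-minded sketch is then simply (the include is harmless on conforming implementations, where the tokens are already built in):

#include <iso646.h>   // with MSVC, defines and, or, not, ... as macros

bool negate(bool x) { return not x; }   // now compiles with MSVC too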
Are all the functions in a conformant C++98/03/0x implementation completely C99 conformant?
I thought C++0x added some C99 (language) features, but never heard or read anything definitive about the C library functions.
Just to avoid any confusion, I'm talking about a C++ program using functions declared in the <c*> header set.
Thanks.
Most of the C99 standard library has been imported into C++0x, but not all of it. From memory, among what wasn't imported as-is:

- <ctgmath> simply includes <ccomplex> and <cmath>,
- <ccomplex> behaves as if it included <complex>,
- <cmath> has quite a few adjustments (providing overloads and template functions completing the C99-provided ones).
Some other headers (<cstdbool>, <iso646.h>, ...) have adjustments to take differences between the languages into account (bool is primitive in C++ but a macro provided by <stdbool.h> in C, for instance), but nothing of the scope of the math part.
The <xxx.h> headers whose <cxxx> form doesn't behave as the C99 version simply declare the content of <cxxx> in the global namespace; they aren't any nearer to the C99 <xxx.h> content.
A related thing: C++0x provides some headers in both <cxxx> and <xxx.h> forms which aren't defined in C99 (<cstdalign> and <cuchar>; the second one is defined in a C TR).
(I remembered that a bunch of mathematical functions from C99 had been put in TR1 but not kept in C++0x; I was mistaken, that bunch of mathematical functions weren't part of C99 in the first place.)
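The <cmath> adjustments mentioned above are mostly extra overloads on top of C's double-only functions; a small illustration:

#include <cmath>

int main()
{
    float       f = std::sin(1.0f);  // float overload, C++ only
    double      d = std::sin(1.0);   // the original C function
    long double l = std::sin(1.0L);  // long double overload, C++ only
    (void)f; (void)d; (void)l;       // silence unused-variable warnings
    return 0;
}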
No. C++03 is aligned with ANSI C89/ISO C90, not C99.
The upcoming C++0x standard is expected to be aligned to some degree with C99. See paragraph 17.6.1.2 in the current draft which lists ccomplex, cinttypes, cstdint etc. Note that, as AProgrammer mentions, some headers aren't exactly the same; further, that the header cuchar is aligned with the C Technical Report 19769 rather than C99.
I was just checking an answer and realized that CHAR_BIT isn't defined by headers as I'd expect, not even by #include <bitset>, on newer GCC.
Do I really have to #include <climits> just to get the "functionality" of CHAR_BIT?
As you may know, whether or not an implementation wants to include other headers is unspecified. It's allowed, but not mandated. (§17.4.4.1) So you either have to be explicit or know your guarantees.
The only time a C++ header must include another is if it requires a definition in another. For example, <bitset> is required to include <cstddef> for std::size_t, as this is explicitly stated in the standard. (§23.3.5, for this example)
For a counter-example, consider <limits>. It may include <climits> and define the values for numeric_limits in terms of the macros within, and it often does since that's easiest for an implementation. But all the standard says is things like: "Equivalent to CHAR_MIN, SHRT_MIN, FLT_MIN, DBL_MIN, etc." It doesn't say <limits> must be implemented in terms of those macros, which means <climits> doesn't have to be included.
So the only way you can be guaranteed that CHAR_BIT is defined is by including <climits> or some other header that is explicitly required to include it. And as far as I can tell, none are; an implementation is free to just hard-code the value everywhere it's needed, for example, or include <limits> and use std::numeric_limits<unsigned char>::digits (which is equivalent).
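In code, the two guaranteed routes look like this:

#include <climits>    // guarantees CHAR_BIT
#include <limits>     // guarantees std::numeric_limits
#include <iostream>

int main()
{
    std::cout << CHAR_BIT << '\n';
    std::cout << std::numeric_limits<unsigned char>::digits << '\n';  // same value
}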
Unless you're programming for DSPs, a better name for it is 8. POSIX and Windows both require CHAR_BIT==8, and that covers pretty much every interesting target conceivable except DSPs. (I would not consider 1970s mainframes a potentially interesting target anymore, but perhaps someone will beg to differ...)
<climits> is where CHAR_BIT is required to be by the C++ standard. Even if you happened to find it in <bitset>, it's not guaranteed to be there, so you're better off going straight to the source. It's not like there's something wrong with including <climits>.
Define "newer". A random Linux system gave me these results:
~> gcc --version
gcc (GCC) 4.1.2 (Gentoo 4.1.2)
[snip]
~> grep CHAR_BIT /usr/include/*.h
/usr/include/limits.h:# define CHAR_BIT 8
Doesn't that qualify? In C, I think it should always be enough to include limits.h to get CHAR_BIT.
Yes, you should #include <limits.h> to get the definition of CHAR_BIT. It's not a bottleneck at all for your program (see comment below), and later down the line you might find yourself porting to different platforms. Not every implementation uses the value 8.
I keep the following function in my .profile to check compiler-defined constants. On my system __CHAR_BIT__ is in there, which means no header is required if you can live with the double-underscore form, which may only work with gcc.
defines ()
{
    # create an empty file for cpp to preprocess
    touch /tmp/defines.h;
    # -dD dumps the macro definitions; "$@" forwards any extra flags to cpp;
    # awk drops the "# <linenum>" markers and counts the remaining lines
    cpp -dD "$@" /tmp/defines.h | awk '$1!="#"{COUNT+=1;print;}END{printf("count %d\n",COUNT);}' | sort;
    rm /tmp/defines.h
}
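Called as, say, defines | grep CHAR_BIT, it should show the __CHAR_BIT__ definition on a typical gcc installation; and since "$@" forwards arguments, extra cpp options can be passed through as well.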