How to enable __BEGIN_NAMESPACE_STD in stdlib.h - C++

I am trying to build a C++ library on Linux with CMake. If I do not enable the -std=c++0x option, I always get the compilation error error: 'div_t' was not declared in this scope for the following code:
int xPos;
div_t divResult;
divResult = div(xPos,8);
If I enable the -std=c++0x option with CMake, set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x"), then everything is fine. However, my library does not use any C++0x features, so I am reluctant to set the -std=c++0x option. So I searched for the header file that defines div_t and found that it is defined in stdlib.h between the following macros:
__BEGIN_NAMESPACE_STD
typedef struct
{
    int quot;
    int rem;
} div_t;
....
....
__END_NAMESPACE_STD
It seems to me that if I could enable these macros I could build the library without enabling the c++0x feature. So my question is: what can I do in this situation?
By the way, I could build the library without the c++0x feature when only g++ 4.4 was installed on the Linux machine. When I also installed g++ 4.6 and made it the default g++, the compilation error began to occur. Even after I changed the default g++ back to g++ 4.4, the compilation error persists if I do not enable the c++0x feature.

The macros expand to namespace std { and } respectively if the code is pulled in through a C++ standard library header. This leads me to believe that you're not #including stdlib.h directly (which is good!).
Earlier versions of libstdc++ pulled symbols from C legacy headers into the global namespace even if the C++ versions of these headers (e.g. <cstdlib> instead of <stdlib.h>) were used; newer ones place them only in namespace std.
The cleanest way to fix this is to
#include <cstdlib>
in all translation units where the problem occurs and to use std::div instead of div. If you're lazy, you can also
#include <stdlib.h>
in all translation units that use div, but mixing C and C++ headers is always a bit icky. Not terrible in this particular case, though.
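A minimal sketch of the first (cleaner) fix, with illustrative file and function names:
// fixed.cpp -- illustrative only
#include <cstdlib>   // declares std::div and std::div_t

int eighths(int xPos)
{
    std::div_t divResult = std::div(xPos, 8);   // quotient and remainder in one call
    return divResult.quot;
}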

Related

Using POSIX feature test macros with C++

When I'm building POSIX C programs, I want to be portable and use only POSIX or standard C library functions. So, for example, with gcc or clang, I build like this:
gcc -std=c99 -D_XOPEN_SOURCE=600
Setting the standard to C99 removes all extensions, then _XOPEN_SOURCE exposes POSIX interfaces. I no longer have the environment polluted with extensions from GNU, BSD, etc.
However, the waters seem murkier with C++. I want to do this:
g++ -std=c++14 -D_XOPEN_SOURCE=600
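For concreteness, the sort of translation unit this is meant to cover is plain standard C++ plus POSIX calls and nothing else, something like this (purely illustrative):
// worker.cpp -- purely illustrative
#include <unistd.h>   // POSIX: write(), getpid()
#include <string>

int main()
{
    std::string msg = "hello from pid " + std::to_string(getpid()) + "\n";
    // write() and getpid() are POSIX, not ISO C++; these are the sort of
    // interfaces _XOPEN_SOURCE is meant to expose.
    return write(STDOUT_FILENO, msg.data(), msg.size()) < 0 ? 1 : 0;
}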
This has worked fine for me on various operating systems: Linux/glibc, Haiku, MinGW, macOS, at least. But apparently, there are problems with POSIX feature test macros and C++. Oracle docs have this to say:
C++ bindings are not defined for POSIX or SUSv4, so specifying feature test macros such as _POSIX_SOURCE, _POSIX_C_SOURCE, and _XOPEN_SOURCE can result in compilation errors due to conflicting requirements of standard C++ and these specifications.
While I don't have a copy of Oracle Solaris, I am seeing issues with FreeBSD and OpenBSD.
On FreeBSD:
#include <iostream>
int main() { }
$ clang++ -std=c++14 -D_POSIX_C_SOURCE=200112L t.cpp
In file included from t.cpp:1:
In file included from /usr/include/c++/v1/iostream:37:
In file included from /usr/include/c++/v1/ios:215:
/usr/include/c++/v1/__locale:631:16: error: use of undeclared identifier 'isascii'
return isascii(__c) ? (__tab_[static_cast<int>(__c)] & __m) !=0 : false;
...
(This builds fine with _XOPEN_SOURCE=600.) C++ headers on FreeBSD use isascii, a non-standard function, so it's not exposed when _POSIX_C_SOURCE is set.
Or on OpenBSD:
#include <fstream>
int main() { }
$ clang++ -std=c++14 -D_XOPEN_SOURCE=600 t.cpp
In file included from t.cpp:1:
In file included from /usr/include/c++/v1/fstream:183:
In file included from /usr/include/c++/v1/ostream:138:
In file included from /usr/include/c++/v1/ios:215:
In file included from /usr/include/c++/v1/__locale:32:
In file included from /usr/include/c++/v1/support/newlib/xlocale.h:25:
/usr/include/c++/v1/support/xlocale/__strtonum_fallback.h:23:64: error: unknown type name 'locale_t'
char **endptr, locale_t) {
Presumably <locale.h> isn't getting included somewhere it “should” be.
The worrisome conclusion I'm drawing is that you can't portably have a POSIX C++ environment that is free of non-POSIX extensions. These examples work fine on OpenBSD and FreeBSD if the feature test macros are removed. That looks to be because the BSD headers expose BSD functions unless in standard C mode, but they do not care about standard C++ mode (they explicitly check whether macros corresponding to C89, C99, or C11 are set). glibc looks to be the same: it still exposes non-standard C functions in standard C++ mode, so perhaps it's only a matter of time before I run into a build error there.
So the actual question is this: can you write portable POSIX C++ code which does not expose platform-specific functionality? Or if I'm targeting POSIX with C++ should I just not set any feature test macros and hope for the best?
EDIT:
I got to thinking about the implications of this (as in, why do I care?), and the following program, I think, illustrates it. This is Linux/glibc:
#include <ctime>
int dysize;
$ g++ -c -std=c++14 t.cpp
t.cpp:2:5: error: ‘int dysize’ redeclared as different kind of entity
2 | int dysize;
| ^~~~~~
In file included from t.cpp:1:
/usr/include/time.h:262:12: note: previous declaration ‘int dysize(int)’
262 | extern int dysize (int __year) __THROW __attribute__ ((__const__));
This is the standard <ctime> header, which does not declare anything called dysize. That's an old SunOS function that glibc includes for compatibility. A C program built with -std=c99 won't expose it, but C++ always does. And there's no real way of knowing which non-reserved identifiers will be used by various implementations. If -std=c++14 caused non-standard identifiers to be hidden, that would avoid this problem, but it doesn't, and I can't see a way around that.
Which might imply that the feature test macro is a red herring: the source of the problem is that, on at least some real-world implementations, C++ compilers are exposing symbols they're not supposed to.
My suggestion is to build a toolchain and work from that, with its own libraries, includes, and the correct compiler (perhaps a stripped-down version that can only use POSIX libraries, includes, etc.).
To make it portable, generally you would build the application with static linking. Other linker options may be necessary to point specifically at, or include, your toolchain's paths.
And if you're using POSIX threads, you may need -pthread.
I see that you are using system-wide headers and libraries, when really you probably want a toolchain specific to your POSIX application, to avoid contamination.
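Concretely, a build line along those lines might look something like this (the sysroot path and file names are placeholders):
g++ -std=c++14 -D_XOPEN_SOURCE=600 --sysroot=/opt/posix-toolchain \
    -static -pthread -o app main.cpp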

Does clang provide an unlink implementation?

I am trying to compile a library using clang. The library makes calls to 'unlink', which is not defined by clang:
libmv/src/third_party/OpenExif/src/ExifImageFileWrite.cpp:162:17: error: use of undeclared identifier 'unlink'; did you mean 'inline'?
unlink( mTmpImageFile.c_str() ) ;
My question is, what is the clang equivalent of unlink? As I see it, the path forward would be to #define unlink somewhere with an equivalent routine.
There is no "Clang equivalent". Neither GCC nor Clang have ever been responsible for defining unlink, though they do probably distribute the POSIX headers which do (I don't recall specifically where POSIX headers come from).
Unfortunately, this appears to be a bug with the library you're using; the OpenExif developers failed to include the correct headers. Different C++ implementations may internally #include various headers for their own purposes, which has apparently masked this bug on your previous toolchain.
You can hack your copy and/or submit a patch to add:
#include <unistd.h>
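If the library also has to build on Windows, where the declaration lives in a different header, a portable include might look like this sketch (illustrative only, not OpenExif's actual code):
#if defined(_WIN32)
#include <io.h>        // MSVC and MinGW declare unlink()/_unlink() here
#else
#include <unistd.h>    // POSIX declares unlink() here
#endif
#include <cstdio>
#include <string>

// Mirrors the call quoted in the error message above.
void removeTmpFile(const std::string& path)
{
    if (unlink(path.c_str()) != 0)
        std::perror("unlink");
}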

-O1/2/3 with -std=c++1y/11/98 - If <cmath> is included I'm getting error: '_hypot' was not declared in this scope

I've just updated MinGW using mingw-get-setup and I'm unable to build anything that includes the <cmath> header if I use any optimization level higher than -O0 with -std=c++1y. (I also tried c++11 and c++98.) I'm getting errors like this one:
g++.exe -pedantic-errors -pedantic -Wextra -Wall -std=c++1y -O3 -c Z:\Projects\C++\L6\src\events.cpp -o obj\src\events.o
In file included from z:\lander\mingw\lib\gcc\mingw32\4.8.1\include\c++\cmath:44:0,
from Z:\Projects\C++\L6\src\utils.h:4,
from Z:\Projects\C++\L6\src\events.cpp:10:
z:\lander\mingw\include\math.h: In function 'float hypotf(float, float)':
z:\lander\mingw\include\math.h:635:30: error: '_hypot' was not declared in this scope
{ return (float)(_hypot (x, y)); }
Is something wrong on my side?
Or is the version in the MinGW repo bugged? And if so, is there any quick fix for this header?
To avoid any further speculation, and downright bad suggestions such as using #if 0, let me give an authoritative answer, from the perspective of a MinGW project contributor.
Yes, the MinGW.org implementation of include/math.h does have a bug in its inline implementation of hypotf (float, float). The bug is triggered when compiling C++ with the affected header included (as it is when <cmath> is included) and with any compiler option which causes __STRICT_ANSI__ to become defined (as is the case for those -std=c... options noted by the OP). The appropriate solution is not to occlude part of the math.h file, with #if 0 or otherwise, but to correct the broken inline implementation of hypotf (float, float); simply removing the spurious leading underscore from the inline reference to _hypot (float, float), where its return value is cast to the float return type, should suffice.
Alternatively, substituting an equivalent -std=gnu... for -std=c... in the compiler options should circumvent the bug, and may offer a suitable workaround.
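For example, the failing command line from the question then becomes (a workaround rather than a fix):
g++.exe -pedantic-errors -pedantic -Wextra -Wall -std=gnu++1y -O3 -c Z:\Projects\C++\L6\src\events.cpp -o obj\src\events.o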
FWIW, I'm not entirely happy with MinGW.org's current implementation of hypotl (long double, long double) either; correcting both issues is on my punch list for the next release of the MinGW runtime, but ATM, I have little time to devote to preparing this.
Update
This bug is no longer present in the current release of the MinGW.org runtime library (currently mingwrt-3.22.4, but fixed since release 3.22). If you are using anything older than this, (including any of the critically broken 4.x releases), you should upgrade.
As noted by Keith, this is a bug in the MinGW.org header.
As an alternative to editing the MinGW.org header, you can use MinGW-w64, which provides everything MinGW.org provides, and a whole lot more.
For a list of differences between the runtimes, see this wiki page.
MinGW uses gcc and the Microsoft runtime library. Microsoft's implementation supports C90, but its support for later versions of the C standard (C99 and C11) is very poor.
The hypot function (along with hypotf and hypotl) was added in C99.
If you're getting this error with a program that calls hypot, such as:
#include <cmath>
#include <iostream>
int main() {
    std::cout << std::hypot(3.0, 4.0) << '\n';
}
then it's just a limitation of the Microsoft runtime library, and therefore of MinGW. If it occurs with any program that has #include <cmath>, then it's a bug, perhaps a configuration error, in MinGW.

How to detect the libstdc++ version in Clang?

I would like to write a "portable" C++ library in Clang. "Portable" means that I detect (in C preprocessor) what C++ features are available in the compilation environment and use these features or provide my workarounds. This is similar to what Boost libraries are doing.
However, the presence of some features depends not on the language, but on the Standard Library implementation. In particular I am interested in:
type traits (which of them are available and with what spelling)
whether initializer_list is constexpr.
I find this problematic because Clang by default does not use its own Standard Library implementation: it uses libstdc++. While Clang has predefined preprocessor macros __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__, they are hardcoded to values 4, 2, 1 respectively, and they tell me little about the available libstdc++ features.
How can I check in Clang preprocessor what version of libstdc++ it is using?
Clang does come with its own standard library implementation; it's called libc++. You can use it by adding -stdlib=libc++ to your compile command.
That being said, there are various ways to check Clang/libstdc++ C++ support:
Clang has the __has_feature macro (and friends) that can be used to detect language features and language extensions.
Libstdc++ has its own version macros, see the documentation. You'll need to include a libstdc++ header to get these defined though.
GCC has its version macros which you already discovered, but those would need to be manually compared to the documentation.
And also, this took me 2 minutes of googling.
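In the spirit of the second point, here is a minimal sketch that reports which standard library is in play (the macro names are the documented ones; mapping their values onto releases still means reading the docs):
#include <cstddef>   // any standard header pulls in the library's config macros
#include <cstdio>

int main()
{
#if defined(__GLIBCXX__)
    // libstdc++ defines a date stamp, e.g. 20130531
    std::printf("libstdc++ date stamp: %ld\n", static_cast<long>(__GLIBCXX__));
#elif defined(_LIBCPP_VERSION)
    std::printf("libc++ version: %d\n", _LIBCPP_VERSION);
#else
    std::printf("unrecognized standard library\n");
#endif
    return 0;
}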
This is what I think would help. It prints the value of the _LIBCPP_VERSION macro:
#include <iostream>
#include <string>
using namespace std;
int main(int argc, const char * argv[])
{
    cout << "Value = " << _LIBCPP_VERSION << endl;
    return 0;
}
Compile it against the version of clang you want the info for (with -stdlib=libc++, since _LIBCPP_VERSION is only defined by libc++).
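For example, assuming the file is saved as version.cpp:
$ clang++ -std=c++11 -stdlib=libc++ version.cpp && ./a.out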

Clang >= 3.3 in c++1y mode cannot parse <cstdio> header

I have a project that correctly compiles and runs under g++ 4.8.1 and clang >= 3.3 in c++11 mode. However, when I switch to the experimental -std=c++1y mode, clang 3.3 (but not g++) chokes on the <cstdio> header that is indirectly included by way of Boost.Test (so I cannot easily change it myself):
// /usr/include/c++/4.8/cstdio
#include <stdio.h>
// Get rid of those macros defined in <stdio.h> in lieu of real functions.
// ...
#undef gets
// ...
namespace std
{
// ...
using ::gets; // <-- error with clang++ -std=c++1y
// ...
}
with the following error message:
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/cstdio:119:11:
error: no member named 'gets' in the global namespace
In this tutorial on how to set up a modern C++ environment, a similar lookup problem with max_align_t is encountered. The recommendation there is to use a sed script to surround the unknown symbols with #ifdef __clang__ guards, but that seems a fragile approach.
Setup: plain 64-bit Linux Mint 15 with
g++ (Ubuntu 4.8.1-2ubuntu1~13.04) 4.8.1
Ubuntu clang version 3.3-3~raring1 (branches/release_33) (based on
LLVM 3.3)
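A minimal reproduction, for reference (Boost.Test is not actually needed; including the header directly is enough on this combination):
// repro.cpp
#include <cstdio>
int main() { }

$ clang++ -std=c++1y repro.cpp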
Questions:
What is causing this error? There is no __clang__ macro anywhere near the code in question, and clang in c++11 mode has no trouble at all.
Is it a language problem (does C++14 say something different from C++11 about importing C-compatible symbols from the global namespace into the std namespace)?
Do I need to change something with my include paths? (I use CMake to automatically select the header paths, and switch modes inside CMakeLists.txt)
Does clang have a switch to resolve this?
This note in the gets manpage looks relevant:
ISO C11 removes the specification of gets() from the C language, and since version 2.16, glibc header files don't expose the function declaration if the _ISOC11_SOURCE feature test macro is defined.
Probably should be
#if !_ISOC11_SOURCE
using ::gets;
#endif