Why is opensslconf.h different for each architecture? - c++
I'm writing a cross-platform C++ library which relies upon OpenSSL, which I link statically & bundle with my library for easy consumption. I would like to have a single #include directory for my library, which would obviously contain an "openssl" subdirectory.
Unfortunately, contents of the OpenSSL #include directory are different per architecture, per platform. So, for example, there are (minimally) three different versions of OpenSSL header files for iOS. Add more for TV-OS support and simulator versions. The same problem exists to different degrees on Windows & Android.
Upon closer examination, the only file that's common but different across all platforms & architectures is "opensslconf.h", and it usually only differs by a few lines, or sometimes even a single line.
For example, the tvOS version of "opensslconf.h" contains:
#ifndef OPENSSL_NO_ASYNC
# define OPENSSL_NO_ASYNC
#endif
Whereas the iOS version does not.
A more frequent difference is in the definition of RC4_INT:
// 64-bit architectures?
#define RC4_INT unsigned int
// 32-bit architectures?
#define RC4_INT unsigned char
I would like to have only ONE set of OpenSSL #includes that applies to all architectures & platforms. I don't want to have duplicates of all these files for every arch/platform, especially since there are so many variations.
My first question is if it's possible to have just one OpenSSL #include directory as I'd like? If so, which version of "opensslconf.h" should I choose, and how do I know it will work?
My second question is why this is an issue AT ALL. Why can't these platform differences be encapsulated by OpenSSL? Isn't it already keeping track of many other variables and types that change as you build for different architectures?
As a workaround, you can generate several versions of opensslconf.h (one for each of archs you plan to support), call them opensslconf-win.h, opensslconf-tvos.h, etc.
Then write an opensslconf which will contain only includes of the generated files based on platform:
#ifdef _WIN32
#include "opensslconf-win.h"
#endif
// ...and so on for every platform/arch
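For a fuller picture, here is a sketch of what such a dispatching opensslconf.h could look like. The header names follow the naming suggested above, the macro tests are the usual predefined ones, and the exact set of branches depends on the platforms you actually build for:
/* Sketch of a dispatching opensslconf.h; the per-platform header names
   are illustrative and must match the files you actually generate. */
#if defined(_WIN32)
# include "opensslconf-win.h"
#elif defined(__APPLE__)
# include <TargetConditionals.h>   /* needs a reasonably recent SDK */
# if TARGET_OS_TV
#  include "opensslconf-tvos.h"
# else
#  include "opensslconf-ios.h"
# endif
#elif defined(__ANDROID__)
# include "opensslconf-android.h"
#else
# error "No generated opensslconf header for this platform"
#endif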
Why is opensslconf.h different for each architecture?
opensslconf.h holds platform specific configuration and installation information. As you noted, an example of platform configuration data is RC4_INT.
Other examples of platform configuration information include the define OPENSSL_NO_SSL2 when you run ./Configure no-ssl2, and OPENSSL_NO_SSL3 when you run ./Configure no-ssl3. An example of installation information is OPENSSLDIR, which holds the location of OpenSSL's configuration file openssl.cnf (among other location information).
The last examples, no-ssl2, no-ssl3 and OPENSSLDIR, are specified by the user. They are not fixed for a platform.
(A related question pertains to the usefulness of OPENSSLDIR in sandboxes and walled gardens, but I've never seen an answer to it. Also see CONF-less OpenSSL configuration? on the OpenSSL mailing list).
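For illustration only, a generated opensslconf.h records such choices roughly like this; the exact guards and the OPENSSLDIR path vary from build to build:
/* Excerpt in the style of a generated opensslconf.h (values vary per build) */
#ifndef OPENSSL_NO_SSL2
# define OPENSSL_NO_SSL2
#endif
#ifndef OPENSSL_NO_SSL3
# define OPENSSL_NO_SSL3
#endif
#define OPENSSLDIR "/usr/local/ssl"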
Unfortunately, contents of the OpenSSL #include directory are different per architecture, per platform... Upon closer examination, the only file that's common but different across all platforms & architectures is "opensslconf.h".
That's not exactly correct. bn.h is different, too.
I would like to have only ONE set of OpenSSL #includes that applies to all architectures & platforms... My first question is if it's possible to have just one OpenSSL #include directory as I'd like? If so, which version of "opensslconf.h" should I choose, and how do I know it will work?
Yes, it's possible to have only one opensslconf.h and only one bn.h. But you will have to build it by hand, and it will only work if you diligently guard the defines of interest and transcribe them without error. You cannot choose one as-is and expect it to work for all architectures and platforms.
I've used the following technique to combine them on OS X and iOS for fat libraries. The steps are detailed at Build Multiarch OpenSSL on OS X, but I'm guessing you know what's going on by looking at it.
$ cat $HOME/ssl/include/openssl/opensslconf.h
#ifndef OPENSSL_MULTIARCH_CONF_HEADER
#define OPENSSL_MULTIARCH_CONF_HEADER
#if __i386 || __i386__
# include "opensslconf-x86.h"
#elif __x86_64 || __x86_64__ || __amd64 || __amd64__
# include "opensslconf-x64.h"
#else
# error Unknown architecture
#endif
#endif /* OPENSSL_MULTIARCH_CONF_HEADER */
and:
$ cat $HOME/ssl/include/openssl/bn.h
#ifndef OPENSSL_MULTIARCH_BN_HEADER
#define OPENSSL_MULTIARCH_BN_HEADER
#if __i386 || __i386__
# include "bn-x86.h"
#elif __x86_64 || __x86_64__ || __amd64 || __amd64__
# include "bn-x64.h"
#else
# error Unknown architecture
#endif
#endif /* OPENSSL_MULTIARCH_BN_HEADER */
My second question is why this is an issue AT ALL. Why can't these platform differences be encapsulated by OpenSSL? Isn't it already keeping track of many other variables and types that change as you build for different architectures?
I've never seen a definitive answer on the subject. Maybe you should ask on one of the OpenSSL mailing lists, like openssl-dev.
My guess is that there are too many platforms and configuration options to stuff them all into one opensslconf.h (and one bn.h). Below is the list of built-in targets; wc -l tells us there are 144 of them.
The list does not include the various configuration options, like enable-ec_nistp_64_gcc_128 for certain processors (it has nothing to do with NIST or FIPS). Also see Compilation and Installation | Configure Options on the OpenSSL wiki.
$ ./Configure LIST
Configuring OpenSSL version 1.1.1-dev (0x10101000L)
BS2000-OSD
BSD-generic32
BSD-generic64
BSD-ia64
BSD-sparc64
BSD-sparcv8
BSD-x86
BSD-x86-elf
BSD-x86_64
Cygwin
Cygwin-i386
Cygwin-i486
Cygwin-i586
Cygwin-i686
Cygwin-x86
Cygwin-x86_64
DJGPP
MPE/iX-gcc
OS390-Unix
QNX6
QNX6-i386
UEFI
UWIN
VC-CE
VC-WIN32
VC-WIN64A
VC-WIN64A-masm
VC-WIN64I
aix-cc
aix-gcc
aix64-cc
aix64-gcc
android
android-armeabi
android-mips
android-x86
android64
android64-aarch64
android64-mips64
android64-x86_64
bsdi-elf-gcc
cc
darwin-i386-cc
darwin-ppc-cc
darwin64-debug-test-64-clang
darwin64-ppc-cc
darwin64-x86_64-cc
debug
debug-erbridge
debug-linux-ia32-aes
debug-linux-pentium
debug-linux-ppro
debug-test-64-clang
dist
gcc
haiku-x86
haiku-x86_64
hpux-ia64-cc
hpux-ia64-gcc
hpux-parisc-cc
hpux-parisc-gcc
hpux-parisc1_1-cc
hpux-parisc1_1-gcc
hpux64-ia64-cc
hpux64-ia64-gcc
hpux64-parisc2-cc
hpux64-parisc2-gcc
hurd-x86
ios-cross
ios64-cross
iphoneos-cross
irix-mips3-cc
irix-mips3-gcc
irix64-mips4-cc
irix64-mips4-gcc
linux-aarch64
linux-alpha-gcc
linux-aout
linux-arm64ilp32
linux-armv4
linux-c64xplus
linux-elf
linux-generic32
linux-generic64
linux-ia64
linux-mips32
linux-mips64
linux-ppc
linux-ppc64
linux-ppc64le
linux-sparcv8
linux-sparcv9
linux-x32
linux-x86
linux-x86-clang
linux-x86_64
linux-x86_64-clang
linux32-s390x
linux64-mips64
linux64-s390x
linux64-sparcv9
mingw
mingw64
nextstep
nextstep3.3
purify
qnx4
sco5-cc
sco5-gcc
solaris-sparcv7-cc
solaris-sparcv7-gcc
solaris-sparcv8-cc
solaris-sparcv8-gcc
solaris-sparcv9-cc
solaris-sparcv9-gcc
solaris-x86-gcc
solaris64-sparcv9-cc
solaris64-sparcv9-gcc
solaris64-x86_64-cc
solaris64-x86_64-gcc
tru64-alpha-cc
tru64-alpha-gcc
uClinux-dist
uClinux-dist64
unixware-2.0
unixware-2.1
unixware-7
unixware-7-gcc
vms-alpha
vms-alpha-p32
vms-alpha-p64
vms-ia64
vms-ia64-p32
vms-ia64-p64
vos-gcc
vxworks-mips
vxworks-ppc405
vxworks-ppc60x
vxworks-ppc750
vxworks-ppc750-debug
vxworks-ppc860
vxworks-ppcgen
vxworks-simlinux
The same problem exists to different degrees on Windows & Android...
I'm thinking "not really". You can't build fat libraries on those platforms, so the problem does not really exist there. You already have to specify a platform-specific path for the platform-specific library, so what's the problem with platform-specific headers?
There's some hand-waving here, since I recall seeing something about fat binaries on Linux (I can't find the reference at the moment), but Android does not have them.
On a related note, you can see a comprehensive list of platform and user configuration options with:
$ openssl version -a
OpenSSL 1.0.2g 1 Mar 2016
built on: reproducible build, date unspecified
platform: debian-amd64
options: bn(64,64) rc4(16x,int) des(idx,cisc,16,int) blowfish(idx)
compiler: cc -I. -I.. -I../include -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-Bsymbolic-functions -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM -DECP_NISTZ256_ASM
OPENSSLDIR: "/usr/lib/ssl"
Alien_AV gave the answer, but I'm posting this to follow it up with some code, which can only be displayed properly in an answer.
Below is the solution that worked for me. I still think it's something that should be an integrated part of the OpenSSL build process rather than something every single user may need to do.
It should be fairly easy to implement, since there's nothing platform-specific in this file -- one just has to know all the supported platforms.
#ifndef __OPENSSLCONF_H__
#define __OPENSSLCONF_H__
// ************************************************ ANDROID
#if defined(__ANDROID__)
#if defined (__i386__)
#include <openssl/opensslconf-android-x86.h>
#elif defined (__arm__)
#include <openssl/opensslconf-android-arm.h>
#elif defined (_MIPS_ARCH)
// Unsupported
#endif
// ************************************************ IOS
// Reference: http://nadeausoftware.com/articles/2012/01/c_c_tip_how_use_compiler_predefined_macros_detect_operating_system
#elif defined(__APPLE__) && defined(__MACH__)
#include <TargetConditionals.h>
#if TARGET_OS_WATCH // TargetConditionals.h defines the TARGET_OS_* macros to 0 or 1, so test the value
// Unsupported
#elif TARGET_OS_TV
#if TARGET_OS_SIMULATOR
#include <openssl/opensslconf-atv-sim64.h>
#else
#include <openssl/opensslconf-atv-arm64.h>
#endif
#elif TARGET_OS_IOS
#if TARGET_OS_SIMULATOR
#include <openssl/opensslconf-ios-sim32.h>
#include <openssl/opensslconf-ios-sim64.h>
#else
#include <openssl/opensslconf-ios-arm32.h>
#include <openssl/opensslconf-ios-arm64.h>
#endif
#endif
// ************************************************ WINDOWS
// Reference: https://msdn.microsoft.com/en-us/library/b0084kay(v=vs.120).aspx
#elif defined(_WIN32)
#if defined(_M_X64)
#include <openssl/opensslconf-win64.h>
#else
#include <openssl/opensslconf-win32.h>
#endif
#endif
#endif // __OPENSSLCONF_H__
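To verify that the dispatcher really picks up the right per-architecture header, a quick sanity check like the following can be compiled for each target. This is a sketch; it assumes the headers live under your single include directory and that RC4 was not disabled at configure time.
// Sanity check: print a value that differs between the per-arch conf headers
#include <openssl/opensslconf.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(RC4_INT) = %zu\n", sizeof(RC4_INT));
#ifdef OPENSSL_NO_ASYNC
    printf("OPENSSL_NO_ASYNC is defined\n");
#endif
    return 0;
}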
Related
multiple definition of `glwMDrawingAreaWidgetClass'
I'm porting an application to Linux, written on IRIX (and successfully ported to AIX years ago). One of the issues I found was that glwMDrawingAreaWidgetClass is not supported on Linux (use glwDrawingAreaWidgetClass, no 'M'), so I switched it. I built the app on Ubuntu 10.10. Now I'm trying to build on 14.04 (and also tried 15.10), but I get the following error: multiple definition of glwMDrawingAreaWidgetClass. I get this for a dozen (or so) files. The thing is, I am NOT using it.
So in good debugging style I asked: what has changed? The makefiles are the same, the files are the same. It must be the libraries or the compiler (g++). I have looked everywhere (Google search) to find this error. I have not found a solution (or even the problem). Has anyone noticed this? I suspect it's a library issue.
I am linking with the following libraries:
-lxvw -ldot -lmath -lXm -lXt -lXext -lX11 -lglut -lGLU -lGL -lGLw -lm -lpthread
The first three are mine. I tried removing glut, GLU, GL, GLw. They either make no difference, or I can't link. I suspect it is GLw. I am linking statically. Thanks
Cause
The variable glwMDrawingAreaWidgetClass is being defined in each object file that includes:
#include <Xm/Xm.h>
#include <GL/GLwMDrawA.h>
It is defined in /usr/include/GL/GLwDrawA.h:
GLAPI WidgetClass glwMDrawingAreaWidgetClass;
GLAPI was extern in RHEL 6, before this commit to Mesa 3D. As you can see, GLAPI is a macro that is defined as __attribute__((visibility("default"))) when __GNUC__ >= 4 in /usr/include/GL/gl.h.
Fix
I don't know if this change was proper for glwMDrawingAreaWidgetClass, but modifying /usr/include/GL/gl.h to comment out the macro definition of GLAPI as __attribute__((visibility("default"))) allows a statement later in the file to set it to extern. This allowed my code to compile.
#elif (defined(__GNUC__) && __GNUC__ >= 4) || (defined(__SUNPRO_C) && (__SUNPRO_C >= 0x590))
-# define GLAPI __attribute__((visibility("default")))
+// define GLAPI __attribute__((visibility("default")))
 # define GLAPIENTRY
#endif /* WIN32 && !CYGWIN */
...
#ifndef GLAPI
#define GLAPI extern
#endif
Extra
I've made a sample git repository to demonstrate the issue with minimal code, just two object files. I have emailed Dan Nicholson in hopes that he will shed more light on the issue than I can.
I had the same problem with some code ported from IRIX some time ago. It compiles and links just fine with RedHat 6, but not 7. The only relevant difference, as far as I can tell, is that RedHat 6 uses gcc 4.4, while RedHat 7 uses gcc 4.8.
Use emscripten from Clang compiled executable
Is it possible to execute emcc (from emscripten) on a clang-compiled executable? I tried, but the result is:
ERROR root: pdfium_test: Input file has an unknown suffix, don't know what to do with it!
I tried that because I'm not able to find a solution to compile the PDFium project with emcc, but with clang everything is fine. The reason is:
Emscripten is a cross-compiler, and therefore the OS-specific macros of the host system should all be undefined when building C/C++ code. If you look at tools/shared.py, Emscripten gives special attention to -U all host-specific flags that Clang may automatically try to add in.
But there is a lot of platform-specific code in PDFium, so I get:
#error Sorry, can not figure out target OS. Please specify _FX_OS_ macro.
This macro is defined if the __linux__ macro (for example) is defined. Here is the code snippet:
#ifndef _FX_OS_
#if defined(__ANDROID__)
#define _FX_OS_ _FX_ANDROID_
#define _FXM_PLATFORM_ _FXM_PLATFORM_ANDROID_
#elif defined(_WIN32)
#define _FX_OS_ _FX_WIN32_DESKTOP_
#define _FXM_PLATFORM_ _FXM_PLATFORM_WINDOWS_
#elif defined(_WIN64)
#define _FX_OS_ _FX_WIN64_DESKTOP_
#define _FXM_PLATFORM_ _FXM_PLATFORM_WINDOWS_
#elif defined(__linux__)
#define _FX_OS_ _FX_LINUX_DESKTOP_
#define _FXM_PLATFORM_ _FXM_PLATFORM_LINUX_
#elif defined(__APPLE__)
#define _FX_OS_ _FX_MACOSX_
#define _FXM_PLATFORM_ _FXM_PLATFORM_APPLE_
#endif
#endif // _FX_OS_
#if !defined(_FX_OS_) || _FX_OS_ == 0
#error Sorry, can not figure out target OS. Please specify _FX_OS_ macro.
#endif
So I tried to define the __linux__ macro manually with:
emmake make -j5 BUILDTYPE=Release __linux__=1
...but I get the same error. Maybe that's not the right way? Thank you in advance.
EDIT: The answer from JF Bastien helped me a lot, but now I have this build error and I have no idea what to do. If someone has an idea...
clang-3.7: warning: argument unused during compilation: '-msse2'
clang-3.7: warning: argument unused during compilation: '-mmmx'
error: unknown FP unit 'sse'
EDIT 2: Solution to the above problem: remove the "-msse2", "-mmmx" and "-mfpmath" flags from v8/build/toolchain.gypi.
Porting to Emscripten is the same as porting to any other platform: you have to use that platform's own platform-specific headers. Some will have nice equivalents, and some won't. In most cases you'll need to find these chains of platform-specific #if defined(...) and add an #elif defined(__EMSCRIPTEN__), and do the right thing there. That's more complicated than it sounds: you can't do inline assembly, you can't rely on (most) platform-specific headers, and so on. But in some cases it's easy. Emscripten has examples which do this, and has a porting guide.
For PDFium in particular, you'll have to avoid all the platform-specific font rendering, any threading-related things, and the sandboxing (security shouldn't be as big of an issue since JavaScript itself is a sandbox). You'll also have to figure out how to do file I/O, and probably want to disable all networking code. Or you could use other ports of PDFium to Emscripten.
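Applied to the _FX_OS_ chain from the question, that advice would look roughly like the sketch below. Mapping Emscripten onto the Linux constants is purely an assumption, since PDFium has no official Emscripten target:
// Sketch: add an Emscripten branch to PDFium's OS detection chain.
// The _FX_LINUX_DESKTOP_ mapping is an assumption, not upstream-supported.
#ifndef _FX_OS_
#if defined(__EMSCRIPTEN__)
#define _FX_OS_ _FX_LINUX_DESKTOP_
#define _FXM_PLATFORM_ _FXM_PLATFORM_LINUX_
#elif defined(__ANDROID__)
#define _FX_OS_ _FX_ANDROID_
#define _FXM_PLATFORM_ _FXM_PLATFORM_ANDROID_
#elif defined(__linux__)
#define _FX_OS_ _FX_LINUX_DESKTOP_
#define _FXM_PLATFORM_ _FXM_PLATFORM_LINUX_
// ... remaining platforms as in the original snippet ...
#endif
#endif // _FX_OS_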
Is it possible to determine or set compiler options from within the source code in gcc?
I have some code that requires a certain gcc compiler option (otherwise it won't compile). Of course, I can make sure in the makefile that for this particular source file the required option is set. However, it would be much more helpful if this option could be set for the respective compilation unit (or part of it) from within source_file.cpp. I know that warning messages can be switched on or off using #pragma GCC diagnostic, but what about the -fsomething type of options? I take it from this question that this is impossible. But perhaps there is at least a way to check from within the code whether a certain -f option is on or not? Note: I'm not interested in finding the compiler flags from the binary, as was asked previously, nor from the command line.
In my experience, no. This is not the way you go about this. Instead, you put compiler/platform/OS specific code in your source, and wrap it with the appropriate ifdef statements. These include:
#ifdef __GNUC__
/* code for GNU C compiler */
#elif _MSC_VER
/* usually has the version number in _MSC_VER */
/* code specific to MSVC compiler */
#elif __BORLANDC__
/* code specific to Borland compilers */
#elif __MINGW32__
/* code specific to MinGW compilers */
#endif
Within this, you can have version-specific requirements and code:
#ifdef __GNUC__
# include <features.h>
# if __GNUC_PREREQ(4,0)
//  If gcc_version >= 4.0
# elif __GNUC_PREREQ(3,2)
//  If gcc_version >= 3.2
# else
//  Else
# endif
#else
// If not gcc
#endif
From there, you have your makefile pass the appropriate compiler flags based on the compiler type, version, etc., and you're all set.
You can try using some #pragma. See GCC diagnostic pragmas & GCC function-specific pragmas. Otherwise, develop your own GCC plugin or MELT extension and have it provide a pragma which sets the appropriate variables or compiler state inside GCC (actually cc1plus).
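As a sketch of what those function-specific pragmas and attributes look like (GCC 4.4 or later; they cover optimization and target options, not every -f flag), along with the handful of -f options GCC exposes as predefined macros:
// Function-specific option pragmas (GCC 4.4+)
#pragma GCC push_options
#pragma GCC optimize ("unroll-loops")   // roughly -funroll-loops for the code below

void hot_loop(int *a, int n)
{
    for (int i = 0; i < n; ++i)
        a[i] *= 2;
}

#pragma GCC pop_options

// The same idea per function, via an attribute:
__attribute__((optimize("O3")))
int fast_sum(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Some -f options can at least be detected, because GCC predefines macros:
#ifdef __OPTIMIZE__      // any optimization level above -O0
#endif
#ifdef __FAST_MATH__     // -ffast-math
#endif
#ifdef __PIC__           // -fpic / -fPIC
#endif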
Where is __MWERKS__ in OS10.7?
In the Photoshop CS2 SDK file SPConfig.h, the following code produces an error. Apparently I need __MWERKS__ to be defined. The compiler is LLVM GCC 4.2 and the SDK is OS X 10.7.
#ifdef __MWERKS__
#if !defined(__INTEL__) /* mjf was- #if defined(__MC68K__) || defined(__POWERPC__) */
#ifndef MAC_ENV
#define MAC_ENV 1
#endif
#endif
#endif
#if !defined(WIN_ENV) && !defined(MAC_ENV)
#error
#endif
And in the file cdefs.h:
#if defined(__MWERKS__) && (__MWERKS__ > 0x2400)
I want to know where __MWERKS__ is defined. Or can I just define it to some number myself?
Those macros are defined by the compiler itself to indicate which compiler it is (so you can write compiler-specific things). __MWERKS__ was used by Metrowerks CodeWarrior, which was discontinued in 2005, so it is slightly obsolete by now. You should not define it yourself: unless you're compiling with CodeWarrior, those parts should be skipped, or the program will most likely break in unpredictable ways.
Your actual problem is that your compiler & SDK combination isn't recognized as a Macintosh environment. There must be some other place that defines MAC_ENV. (I find it hard to believe that the CS2 SDK wouldn't support Apple's own compiler.) You should go search for all occurrences of MAC_ENV.
Are you sure that the combination of SDKs and compiler you're using is supported? The CS2 SDK is so old it might not be, so you should also read the documentation carefully.
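If the goal is just to get the check in SPConfig.h to pass, one approach (a sketch, not official CS2 SDK practice) is to define MAC_ENV yourself for Apple toolchains rather than faking __MWERKS__:
/* Sketch: satisfy SPConfig.h's WIN_ENV/MAC_ENV check without pretending
   to be CodeWarrior. Equivalent to passing -DMAC_ENV=1 to the compiler. */
#if defined(__APPLE__) && !defined(MAC_ENV)
#define MAC_ENV 1
#endif
#include "SPConfig.h"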
Which Cross Platform Preprocessor Defines? (__WIN32__ or __WIN32 or WIN32 )?
I often see __WIN32, WIN32 or __WIN32__. I assume this depends on the preprocessor being used (the one from Visual Studio, or gcc, etc.). Do I now have to check first for the OS and then for the compiler being used? We are using g++ 4.4.x, Visual Studio 2008 and Xcode (which I assume is gcc again), and at the moment we are using just __WIN32__, __APPLE__ and __LINUX__.
This article answers your question: C/C++ tip: How to detect the operating system type using compiler predefined macros (plus archive.org link in case it vanishes). The article is quite long, and includes tables that are hard to reproduce, but here's the essence.
You can detect a Unix-style OS with:
#if !defined(_WIN32) && (defined(__unix__) || defined(__unix) || (defined(__APPLE__) && defined(__MACH__)))
/* UNIX-style OS. ------------------------------------------- */
#endif
Once you know it's Unix, you can find out if it's POSIX and the POSIX version with:
#include <unistd.h>
#if defined(_POSIX_VERSION)
/* POSIX compliant */
#endif
You can check for BSD-derived systems with:
#if defined(__unix__) || (defined(__APPLE__) && defined(__MACH__))
#include <sys/param.h>
#if defined(BSD)
/* BSD (DragonFly BSD, FreeBSD, OpenBSD, NetBSD). ----------- */
#endif
#endif
and Linux with:
#if defined(__linux__)
/* Linux */
#endif
and Apple's operating systems with:
#if defined(__APPLE__) && defined(__MACH__)
/* Apple OSX and iOS (Darwin) */
#include <TargetConditionals.h>
#if TARGET_IPHONE_SIMULATOR == 1
/* iOS in Xcode simulator */
#elif TARGET_OS_IPHONE == 1
/* iOS on iPhone, iPad, etc. */
#elif TARGET_OS_MAC == 1
/* OS X */
#endif
#endif
Windows with Cygwin:
#if defined(__CYGWIN__) && !defined(_WIN32)
/* Cygwin POSIX under Microsoft Windows. */
#endif
And non-POSIX Windows with:
#if defined(_WIN64)
/* Microsoft Windows (64-bit) */
#elif defined(_WIN32)
/* Microsoft Windows (32-bit) */
#endif
The full article lists the following symbols, and shows which systems define them and when:
_AIX, __APPLE__, __CYGWIN32__, __CYGWIN__, __DragonFly__, __FreeBSD__, __gnu_linux, hpux, __hpux, linux, __linux, __linux__, __MACH__, __MINGW32__, __MINGW64__, __NetBSD__, __OpenBSD__, _POSIX_IPV6, _POSIX_MAPPED_FILES, _POSIX_SEMAPHORES, _POSIX_THREADS, _POSIX_VERSION, sun, __sun, __SunOS, __sun__, __SVR4, __svr4__, TARGET_IPHONE_SIMULATOR, TARGET_OS_EMBEDDED, TARGET_OS_IPHONE, TARGET_OS_MAC, UNIX, unix, __unix, __unix__, WIN32, _WIN32, __WIN32, __WIN32__, WIN64, _WIN64, __WIN64, __WIN64__, WINNT, __WINNT, __WINNT__.
A related article (archive.org link) covers detecting compilers and compiler versions. It lists the following symbols for detecting compilers:
__clang__, __GNUC__, __GNUG__, __HP_aCC, __HP_cc, __IBMCPP__, __IBMC__, __ICC, __INTEL_COMPILER, _MSC_VER, __PGI, __SUNPRO_C, __SUNPRO_CC
and the following for detecting compiler versions:
__clang_major__, __clang_minor__, __clang_patchlevel__, __clang_version__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__, __GNUC__, __GNUG__, __HP_aCC, __HP_cc, __IBMCPP__, __IBMC__, __ICC, __INTEL_COMPILER, __INTEL_COMPILER_BUILD_DATE, _MSC_BUILD, _MSC_FULL_VER, _MSC_VER, __PGIC_MINOR__, __PGIC_PATCHLEVEL__, __PGIC__, __SUNPRO_C, __SUNPRO_CC, __VERSION__, __xlC_ver__, __xlC__, __xlc__.
It depends on what you are trying to do. You can check for the compiler if your program wants to make use of some compiler-specific functions (from the gcc toolchain, for example). You can check for the operating system (_WINDOWS, __unix__) if you want to use OS-specific functions regardless of compiler, for example CreateProcess on Windows and fork on Unix.
Macros for Visual C
Macros for gcc
You must check the documentation of each compiler in order to detect the differences when compiling. I remember that the GNU toolchain (gcc) has some functions in its C library (libc) that are not available in other toolchains (like Visual C, for example). If you want to use those functions for convenience, then you must detect that you are using GCC, so the code would look like the following:
#ifdef __GNUC__
// do my gcc specific stuff
#else
// ... handle this for other compilers
#endif
I don't see why you have to. You might have to remember to specify the definition manually on your compiler's command line, but that's all. For the record, Visual Studio's definition is _WIN32 (with one underscore) rather than __WIN32. If it's not defined then it's not defined, and it won't matter.
I've rebuilt my answer... Damn, editing berserk :P
You don't need to use any particular one, and for Mac OS X, Linux and other Unix-likes you probably don't need to use any at all. The most popular one (as far as Google tells the truth) is _WIN32. You never define it "by hand" in your source code. It is defined in one of these ways: as a command-line preprocessor/compiler flag (like g++ -D _WIN32), or it is predefined by the compiler itself (most Windows compilers predefine _WIN32, and sometimes others like WIN32 or _WIN32_ too). In the latter case you don't need to worry about defining it at all; the compiler does the whole work.
And my old answer:
You don't 'have to' do anything. It's just for multi-platform compatibility. Often the version of code for all Unix-likes (including Linux, Mac OS X, BSD, Solaris...) and other POSIX platforms will be completely the same, and only Windows needs some changes. So people write their code generally for Unix-likes and put the Windows-only parts (e.g. DirectX instructions, Windows-style file paths...) between #ifdef _WIN32 and #endif. If you have some parts that are, say, X-Window-system-only or MacOS-only, you do something similar with #ifdef X_WINDOW or #ifdef MACOS. Then you need to set the proper preprocessor definition while compiling (with gcc using the -D flag, e.g. gcc -D _WIN32).
If you don't write any platform-dependent code, then you don't need to care about such #ifdef, #else, #endif blocks at all. And most Windows compilers/preprocessors AFAIK have predefined symbols like _WIN32 (the most popular, as far as Google tells the truth), WIN32, _WIN32_, etc. So when compiling on Windows you most probably don't need to do anything other than just compile.
Sigh - don't rely on the compiler for anything - specify which platform you are building for in your Makefile. Simply put, anything beginning with _ is implementation dependent and not portable. I tried your method once upon a time, on a very large project, and while bouncing around between Sun C++ and GCC we just decided to go with Makefile control rather than trying to deduce what the compilers were going to do.