InetNtop: can't find which header it is in - C++

Does anyone know which header I must include to use the InetNtop function?
I tried winsock2.h and ws2tcpip.h, and I've linked the Ws2_32 library. I am using Windows 7.
This is the error I get at compile time: InetNtop: function could not be resolved
edit:
char temp[10];
int bytes_recv = Recv(temp, sizeof(temp));
char result[INET_ADDRSTRLEN];
InetNtop(AF_INET, (void*)(&temp[4]), result, sizeof(result));
I am trying to print an IP address that is contained in temp.

Not 100% relevant, but for the Linux/Unix version of this call (inet_ntop(...)), you need to #include <arpa/inet.h>.
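For comparison, a minimal, self-contained sketch of the POSIX call (the address value is just an example):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
    struct in_addr addr;
    addr.s_addr = htonl(0xC0A80001);              /* 192.168.0.1, example value */

    char buf[INET_ADDRSTRLEN];
    if (inet_ntop(AF_INET, &addr, buf, sizeof(buf)) != NULL)
        printf("%s\n", buf);                      /* prints 192.168.0.1 */
    return 0;
}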

According to its documentation, Ws2tcpip.h is the correct header file.
EDIT:
According to the documentation, using this function requires your code to be compiled for Windows Vista or later. Since you are including the necessary header and yet the function is not visible, I surmise that you have not set the proper defines or compiler options to compile your code for a suitable version.
The actual Windows version that you are using is not important - what you are compiling for (i.e. the target version) is.
EDIT 2:
You should add the proper #define directive as described here to indicate which Windows version you are compiling for. E.g.
#include <SdkDdkver.h>
#define NTDDI_VERSION NTDDI_VISTA
#define WINVER _WIN32_WINNT_VISTA
#define _WIN32_WINNT _WIN32_WINNT_VISTA
Some of these defines overlap and may not be needed, but on the rare times that I code for Windows I just use them all to make sure :-)
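Putting it together, a minimal sketch assuming a Vista-or-later target (the address is just an example; I use the explicit ...A variants here so the same code builds in both ANSI and UNICODE projects):
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0600      // target Windows Vista or later
#endif
#ifndef NTDDI_VERSION
#define NTDDI_VERSION 0x06000000
#endif

#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>

#pragma comment(lib, "Ws2_32.lib")   // MSVC; with MinGW link -lws2_32 instead

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    IN_ADDR addr;
    InetPtonA(AF_INET, "192.168.0.1", &addr);            // example address

    char buf[INET_ADDRSTRLEN];
    if (InetNtopA(AF_INET, &addr, buf, INET_ADDRSTRLEN) != NULL)
        printf("%s\n", buf);                             // prints 192.168.0.1

    WSACleanup();
    return 0;
}
If this still fails to compile, the defines are probably being seen after some other header has already pulled in ws2tcpip.h (for example via a precompiled header), so double-check the include order.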
EDIT 3:
Things are a bit different for MinGW/GCC:
#include <w32api.h>
#define WINVER WindowsVista
#define _WIN32_WINDOWS WindowsVista
#define _WIN32_WINNT WindowsVista
Note: These defines should be placed before including Windows.h or any other header but w32api.h.
EDIT 4:
From the WS2tcpip.h in Visual Studio 2010:
#if (NTDDI_VERSION >= NTDDI_VISTA)
...
PCSTR
WSAAPI
inet_ntop(
__in INT Family,
__in PVOID pAddr,
__out_ecount(StringBufSize) PSTR pStringBuf,
__in size_t StringBufSize
);
PCWSTR
WSAAPI
InetNtopW(
__in INT Family,
__in PVOID pAddr,
__out_ecount(StringBufSize) PWSTR pStringBuf,
__in size_t StringBufSize
);
#define InetPtonA inet_pton
#define InetNtopA inet_ntop
#ifdef UNICODE
#define InetPton InetPtonW
#define InetNtop InetNtopW
#else
#define InetPton InetPtonA
#define InetNtop InetNtopA
#endif
...
#endif // (NTDDI_VERSION >= NTDDI_VISTA)
Therefore the critical define in this case is NTDDI_VERSION, as expected for a new API addition.
I cannot find the InetNtop definition in MinGW32/GCC-4.4.2, so it's quite possible that it is not supported in your version either.

Although it is declared in Ws2tcpip.h, the default target for most projects is Windows XP, and the function you are trying to use was introduced in Vista, so you need to configure your project to target Vista instead. There are at least three things you can do here:
In your stdafx.h (if you have one), find the definitions of WINVER and _WIN32_WINNT (by default these are both 0x0501) and change them both to 0x0600. Windows 7 is 0x0601. Also, define NTDDI_VERSION as 0x06000000.
If you don't have a stdafx.h header, add these definitions to your project's C/C++ preprocessor settings (WINVER=0x0600,_WIN32_WINNT=0x0600,NTDDI_VERSION=0x06000000).
As a last resort, define these manually before including any headers for that particular source file:
#ifndef WINVER
#define WINVER 0x0600
#endif
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0600
#endif
#ifndef NTDDI_VERSION
#define NTDDI_VERSION 0x06000000
#endif
#include <windows.h>
#include <Ws2tcpip.h>
Also, make sure you have an up-to-date copy of ws2tcpip.h. For example, the copy that comes with Visual Studio 2005 does not have a declaration for InetNtop.

Alternatively, define this function explicitly in your own code:
const char *inet_ntop(int af, const void *src, char *dst, socklen_t size)
{
    struct sockaddr_storage ss;
    unsigned long s = size;

    ZeroMemory(&ss, sizeof(ss));
    ss.ss_family = af;

    switch (af) {
    case AF_INET:
        ((struct sockaddr_in *)&ss)->sin_addr = *(struct in_addr *)src;
        break;
    case AF_INET6:
        ((struct sockaddr_in6 *)&ss)->sin6_addr = *(struct in6_addr *)src;
        break;
    default:
        return NULL;
    }
    /* cannot directly pass &size because of strict aliasing rules;
       use the ANSI version explicitly so this also compiles in UNICODE builds */
    return (WSAAddressToStringA((struct sockaddr *)&ss, sizeof(ss), NULL, dst, &s) == 0) ?
           dst : NULL;
}
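A usage sketch for this fallback (the dotted-quad address is just an example):
/* assumes <winsock2.h>, <ws2tcpip.h> and <stdio.h> are included, Ws2_32 is linked,
   and WSAStartup() has already been called -- the fallback relies on
   WSAAddressToStringA(), which needs an initialized Winsock */
struct in_addr addr;
addr.s_addr = inet_addr("10.0.0.1");        /* example address */

char buf[INET_ADDRSTRLEN];
if (inet_ntop(AF_INET, &addr, buf, sizeof(buf)) != NULL)
    printf("%s\n", buf);                    /* prints 10.0.0.1 */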

Looks like Ws2tcpip.h; see MSDN.

Related

What is meant by an "inactive" preprocessor block in C?

I found some lines of code that are dimmed inside a preprocessor block in my C source code. My compiler, MS Visual Studio, calls this an "inactive preprocessor block". What does this mean? Will the compiler ignore these lines of code,
and how do I make the block active?
An inactive preprocessor block is a block of code that is deactivated because of a preprocessor directive. The simplest example is:
#if 0
//everything here is inactive and will be ignored during compilation
#endif
A more common example would be
#ifdef SOME_VAR
// code
#else
// other code
#endif
In this case either the first or the second block of code will be inactive depending on whether SOME_VAR is defined.
Please check this hypothetical example, created to illustrate your question.
#include <iostream>
#include <windows.h>
#include <tchar.h>

#define _WIN32   // normally predefined by the compiler when targeting Windows

int add(int n1, int n2) { return n1 + n2; }
LONGLONG add(LONGLONG n1, LONGLONG n2) { return n1 + n2; }

int _tmain(int argc, _TCHAR* argv[])
{
#ifdef _WIN32
    int val = add(10, 12);
#else
    LONGLONG val = add(100LL, 120LL); // Inactive code
#endif // _WIN32
    return 0;
}
You can see that, because _WIN32 is defined, the code in the #else branch is disabled and will not be compiled. You can undefine _WIN32 to see the reverse in action; in MS Visual Studio the disabled code is shown dimmed.
Hope this helps.
The preprocessor runs in one of the earliest stages of a program's translation. It can modify the source of the program before the compilation stage begins. That way you can configure the source to build differently, depending on various constraints.
Uses of preprocessor condition blocks include:
Completely commenting out code:
#if 0
// The code here is never compiled. It's "commented" away
#endif
Provide different implementations based on various constraints, like platform:
#if defined(WIN32)
//Implement widget with Win32Api
#elif defined(MOTIF)
// Implement widget with Motif framework
#else
#error "Unknown platform"
#endif
Have a macro like assert behave in different ways (see the sketch after this list).
Make sure a useful abstraction is defined appropriately:
#if PLATFORM_A
typedef long int32_t;
#elif PLATFORM_B
typedef int int32_t;
#endif
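For the assert-style use mentioned above, a minimal sketch (MY_ASSERT is a made-up name; the real assert in <assert.h> works the same way, keyed on NDEBUG):
#include <stdio.h>
#include <stdlib.h>

#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)            /* release build: the check compiles away */
#else
#define MY_ASSERT(cond) \
    ((cond) ? (void)0 \
            : (fprintf(stderr, "assertion failed: %s (%s:%d)\n", #cond, __FILE__, __LINE__), \
               abort()))
#endif

int main(void)
{
    MY_ASSERT(1 + 1 == 2);   /* active only when NDEBUG is not defined */
    return 0;
}
Building with -DNDEBUG makes the second definition inactive, exactly like the dimmed blocks discussed above.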

DirectX Definitions Missing?

I've been trying to get DirectX 11 to compile on MinGW. So far the only problem I'm having is that the headers are giving me errors saying that certain DirectX related stuff isn't defined.
So far I linked the libraries with -mwindows, -ld3d11, -d3dx11, and -ld3dx10. All the headers and libraries are in the default folders for the compiler.
I also did this before including the DirectX headers (this is needed for MinGW):
#define __in
#define __out
#define __inout
#define __in_bcount(x)
#define __out_bcount(x)
#define __in_ecount(x)
#define __out_ecount(x)
#define __in_ecount_opt(x)
#define __out_ecount_opt(x)
#define __in_bcount_opt(x)
#define __out_bcount_opt(x)
#define __in_opt
#define __inout_opt
#define __out_opt
#define __out_ecount_part_opt(x,y)
#define __deref_out
#define __deref_out_opt
#define __RPC__deref_out
#include "stdint.h"
typedef uint8_t UINT8;
I'm going to assume I did everything correctly, but I get errors such as 'ID3D11DeviceContext' was not declared in this scope and 'pContext' was not declared in this scope. I don't know why this happens. Did I miss a step?

how to tell if 32 or 64 bit? [duplicate]

I'm looking for a way to reliably determine whether C++ code is being compiled as 32-bit or 64-bit. We've come up with what we think is a reasonable solution using macros, but I was curious to know whether people can think of cases where this might fail, or whether there is a better way to do this. Please note we are trying to do this in a cross-platform, multiple-compiler environment.
#if ((ULONG_MAX) == (UINT_MAX))
# define IS32BIT
#else
# define IS64BIT
#endif
#ifdef IS64BIT
DoMy64BitOperation()
#else
DoMy32BitOperation()
#endif
Thanks.
Unfortunately there is no cross platform macro which defines 32 / 64 bit across the major compilers. I've found the most effective way to do this is the following.
First I pick my own representation. I prefer ENVIRONMENT64 / ENVIRONMENT32. Then I find out what all of the major compilers use for determining if it's a 64 bit environment or not and use that to set my variables.
// Check windows
#if _WIN32 || _WIN64
#if _WIN64
#define ENVIRONMENT64
#else
#define ENVIRONMENT32
#endif
#endif
// Check GCC
#if __GNUC__
#if __x86_64__ || __ppc64__
#define ENVIRONMENT64
#else
#define ENVIRONMENT32
#endif
#endif
Another easier route is to simply set these variables from the compiler command line.
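For illustration, here is a rough sketch of that command-line route, reusing the ENVIRONMENT64 / ENVIRONMENT32 names from above (the exact flag spelling depends on your compiler: -D for GCC/Clang, /D for MSVC):
// build with e.g.:  g++ -DENVIRONMENT64 main.cpp     (or: cl /DENVIRONMENT32 main.cpp)
#include <iostream>

int main()
{
#if defined(ENVIRONMENT64)
    std::cout << "built as 64-bit\n";
#elif defined(ENVIRONMENT32)
    std::cout << "built as 32-bit\n";
#else
#error "Pass -DENVIRONMENT64 or -DENVIRONMENT32 on the compiler command line"
#endif
    return 0;
}
The template code that follows takes a different route again: it dispatches on sizeof(size_t) at compile time, so no macros are needed at all.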
template<int> void DoMyOperationHelper();
template<> void DoMyOperationHelper<4>()
{
// do 32-bits operations
}
template<> void DoMyOperationHelper<8>()
{
// do 64-bits operations
}
// helper function just to hide clumsy syntax
inline void DoMyOperation() { DoMyOperationHelper<sizeof(size_t)>(); }
int main()
{
// appropriate function will be selected at compile time
DoMyOperation();
return 0;
}
Unfortunately, in a cross platform, cross compiler environment, there is no single reliable method to do this purely at compile time.
_WIN32 and _WIN64 can sometimes both be undefined, if the project settings are flawed or corrupted (particularly on Visual Studio 2008 SP1).
A project labelled "Win32" could be set to 64-bit, due to a project configuration error.
On Visual Studio 2008 SP1, IntelliSense sometimes does not grey out the correct parts of the code according to the current #define. This makes it difficult to see exactly which #define is being used at compile time.
Therefore, the only reliable method is to combine 3 simple checks:
1) Compile-time setting;
2) Runtime check; and
3) Robust compile-time checking.
Simple check 1/3: Compile time setting
Choose any method to set the required #define variable. I suggest the method from @JaredPar:
// Check windows
#if _WIN32 || _WIN64
#if _WIN64
#define ENV64BIT
#else
#define ENV32BIT
#endif
#endif
// Check GCC
#if __GNUC__
#if __x86_64__ || __ppc64__
#define ENV64BIT
#else
#define ENV32BIT
#endif
#endif
Simple check 2/3: Runtime check
In main(), double check to see if sizeof() makes sense:
#if defined(ENV64BIT)
if (sizeof(void*) != 8)
{
wprintf(L"ENV64BIT: Error: pointer should be 8 bytes. Exiting.");
exit(0);
}
wprintf(L"Diagnostics: we are running in 64-bit mode.\n");
#elif defined (ENV32BIT)
if (sizeof(void*) != 4)
{
wprintf(L"ENV32BIT: Error: pointer should be 4 bytes. Exiting.");
exit(0);
}
wprintf(L"Diagnostics: we are running in 32-bit mode.\n");
#else
#error "Must define either ENV32BIT or ENV64BIT".
#endif
Simple check 3/3: Robust compile time checking
The general rule is "every #define must end in a #else which generates an error".
#if defined(ENV64BIT)
// 64-bit code here.
#elif defined (ENV32BIT)
// 32-bit code here.
#else
// INCREASE ROBUSTNESS. ALWAYS THROW AN ERROR ON THE ELSE.
// - What if I made a typo and checked for ENV6BIT instead of ENV64BIT?
// - What if both ENV64BIT and ENV32BIT are not defined?
// - What if project is corrupted, and _WIN64 and _WIN32 are not defined?
// - What if I didn't include the required header file?
// - What if I checked for _WIN32 first instead of second?
// (in Windows, both are defined in 64-bit, so this will break codebase)
// - What if the code has just been ported to a different OS?
// - What if there is an unknown unknown, not mentioned in this list so far?
// I'm only human, and the mistakes above would break the *entire* codebase.
#error "Must define either ENV32BIT or ENV64BIT"
#endif
Update 2017-01-17
Comment from @AI.G:
4 years later (don't know if it was possible before) you can convert the run-time check to a compile-time one using a static assert: static_assert(sizeof(void*) == 4);. Now it's all done at compile time :)
Appendix A
Incidentally, the rules above can be adapted to make your entire codebase more reliable:
Every if() statement ends in an "else" which generates a warning or error.
Every switch() statement ends in a "default:" which generates a warning or error (see the sketch after this appendix).
The reason why this works well is that it forces you to think of every single case in advance, and not rely on (sometimes flawed) logic in the "else" part to execute the correct code.
I used this technique (among many others) to write a 30,000 line project that worked flawlessly from the day it was first deployed into production (that was 12 months ago).
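As a small illustration of the switch rule from Appendix A (Mode and handleMode are made-up names for this sketch):
#include <cassert>
#include <iostream>

enum class Mode { Read, Write };

void handleMode(Mode m)
{
    switch (m)
    {
    case Mode::Read:
        std::cout << "reading\n";
        break;
    case Mode::Write:
        std::cout << "writing\n";
        break;
    default:
        // Unknown value: fail loudly instead of silently doing nothing.
        assert(false && "handleMode: unhandled Mode value");
        break;
    }
}
The default branch never fires in correct code, but it turns an unnoticed new enum value into a loud failure instead of a silent no-op.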
You should be able to use the macros defined in stdint.h. In particular INTPTR_MAX is exactly the value you need.
#include <cstdint>
#if INTPTR_MAX == INT32_MAX
#define THIS_IS_32_BIT_ENVIRONMENT
#elif INTPTR_MAX == INT64_MAX
#define THIS_IS_64_BIT_ENVIRONMENT
#else
#error "Environment not 32 or 64-bit."
#endif
Some (all?) versions of Microsoft's compiler don't come with stdint.h. Not sure why, since it's a standard file. Here's a version you can use: http://msinttypes.googlecode.com/svn/trunk/stdint.h
That won't work on Windows for a start. long and int are both 32 bits whether you're compiling for 32-bit or 64-bit Windows. I would think checking whether the size of a pointer is 8 bytes is probably a more reliable route.
You could do this:
#if __WORDSIZE == 64
char *size = "64bits";
#else
char *size = "32bits";
#endif
Try this:
#ifdef _WIN64
// 64 bit code
#elif _WIN32
// 32 bit code
#else
if(sizeof(void*)==4)
// 32 bit code
else
// 64 bit code
#endif
The code below works fine for most current environments:
#if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
#define IS64BIT 1
#else
#define IS32BIT 1
#endif
"Compiled in 64 bit" is not well defined in C++.
C++ sets only lower limits for sizes such as int, long and void *. There is no guarantee that int is 64 bit even when compiled for a 64 bit platform. The model allows for e.g. 23 bit ints and sizeof(int *) != sizeof(char *)
There are different programming models for 64 bit platforms.
Your best bet is a platform specific test. Your second best, portable decision must be more specific in what is 64 bit.
Your approach was not too far off, but you are only checking whether long and int are of the same size. Theoretically, they could both be 64 bits, in which case your check would fail, wrongly assuming both to be 32 bits. Here is a check that actually tests the size of the types themselves, not their relative size:
#if ((UINT_MAX) == 0xffffffffu)
#define INT_IS32BIT
#else
#define INT_IS64BIT
#endif
#if ((ULONG_MAX) == 0xfffffffful)
#define LONG_IS32BIT
#else
#define LONG_IS64BIT
#endif
In principle, you can do this for any type for which you have a system-defined macro with the maximal value.
Note that the standard requires long long to be at least 64 bits even on 32-bit systems.
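In the same spirit, a sketch of the corresponding check for long long (the macro names here are my own; 0xffffffffffffffff is 2^64 - 1, the smallest value ULLONG_MAX may have):
#include <climits>

#if ((ULLONG_MAX) == 0xffffffffffffffffull)
#define LLONG_IS64BIT
#else
#define LLONG_IS_MORE_THAN_64BIT
#endif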
People have already suggested methods that try to determine whether the program is being compiled as 32-bit or 64-bit.
I want to add that you can use the C++11 feature static_assert to make sure that the architecture is what you think it is (so you can relax).
So in the place where you define the macros:
#if ...
# define IS32BIT
static_assert(sizeof(void *) == 4, "Error: The Arch is not what I think it is");
#elif ...
# define IS64BIT
static_assert(sizeof(void *) == 8, "Error: The Arch is not what I think it is");
#else
# error "Cannot determine the Arch"
#endif
Borrowing from Contango's excellent answer above and combining it with "Better Macros, Better Flags" from Fluent C++, you can do:
// Macro for checking bitness (safer macros borrowed from
// https://www.fluentcpp.com/2019/05/28/better-macros-better-flags/)
#define MYPROJ_IS_BITNESS( X ) MYPROJ_IS_BITNESS_PRIVATE_DEFINITION_##X()
// Bitness checks borrowed from https://stackoverflow.com/a/12338526/201787
#if _WIN64 || ( __GNUC__ && __x86_64__ )
# define MYPROJ_IS_BITNESS_PRIVATE_DEFINITION_64() 1
# define MYPROJ_IS_BITNESS_PRIVATE_DEFINITION_32() 0
# define MYPROJ_IF_64_BIT_ELSE( x64, x86 ) (x64)
static_assert( sizeof( void* ) == 8, "Pointer size is unexpected for this bitness" );
#elif _WIN32 || __GNUC__
# define MYPROJ_IS_BITNESS_PRIVATE_DEFINITION_64() 0
# define MYPROJ_IS_BITNESS_PRIVATE_DEFINITION_32() 1
# define MYPROJ_IF_64_BIT_ELSE( x64, x86 ) (x86)
static_assert( sizeof( void* ) == 4, "Pointer size is unexpected for this bitness" );
#else
# error "Unknown bitness!"
#endif
Then you can use it like:
#if MYPROJ_IS_BITNESS( 64 )
DoMy64BitOperation()
#else
DoMy32BitOperation()
#endif
Or using the extra macro I added:
MYPROJ_IF_64_BIT_ELSE( DoMy64BitOperation(), DoMy32BitOperation() );
Here are a few more ways to do what you want in modern C++.
You can create a variable that defines the number of system bits:
#include <climits> // for CHAR_BIT
static constexpr size_t sysbits = (CHAR_BIT * sizeof(void*));
And then in C++17 you can do something like:
void DoMy64BitOperation() {
std::cout << "64-bit!\n";
}
void DoMy32BitOperation() {
std::cout << "32-bit!\n";
}
inline void DoMySysBitOperation()
{
if constexpr(sysbits == 32)
DoMy32BitOperation();
else if constexpr(sysbits == 64)
DoMy64BitOperation();
/*else - other systems. */
}
Or in C++20:
template<void* = nullptr>
// template<int = 32> // May be clearer, pick whatever you like.
void DoMySysBitOperation()
requires(sysbits == 32)
{
std::cout << "32-bit!\n";
}
template<void* = nullptr>
// template<int = 64>
void DoMySysBitOperation()
requires(sysbits == 64)
{
std::cout << "64-bit!\n";
}
template<void* = nullptr>
void DoMySysBitOperation()
/* requires(sysbits == OtherSystem) */
{
std::cout << "Unknown System!\n";
}
The template<...> is usually not needed, but since those functions would otherwise have the same mangled name, we must force the compiler to pick the correct one. Also, template<void* = nullptr> may be confusing (the other template may be clearer and more logically correct); I only used it as a workaround to satisfy the compiler's name mangling.
If you can use project configurations in all your environments, that would make defining a 64- and 32-bit symbol easy. So you'd have project configurations like this:
32-bit Debug
32-bit Release
64-bit Debug
64-bit Release
EDIT: These are generic configurations, not targeted configurations. Call them whatever you want.
If you can't do that, I like Jared's idea.
I'd place 32-bit and 64-bit sources in different files and then select appropriate source files using the build system.
I'm adding this answer as a use case and a complete example of the runtime check described in another answer.
This is the approach I've been taking for conveying to the end-user whether the program was compiled as 64-bit or 32-bit (or other, for that matter):
version.h
#ifndef MY_VERSION
#define MY_VERSION
#include <string>
const std::string version = "0.09";
const std::string arch = (std::to_string(sizeof(void*) * 8) + "-bit");
#endif
test.cc
#include <iostream>
#include "version.h"
int main()
{
std::cerr << "My App v" << version << " [" << arch << "]" << std::endl;
}
Compile and Test
g++ -g test.cc
./a.out
My App v0.09 [64-bit]

What are __in and WSAAPI?

I saw the following in MSDN's definition of socket:
SOCKET WSAAPI socket(
__in int af,
__in int type,
__in int protocol
);
What does the prefix "__in" mean?
And what is WSAAPI?
__in (and friends) specify the intended use of each parameter, so that calls to that function may be mechanically checked.
See http://msdn.microsoft.com/en-us/library/aa383701(v=vs.85).aspx on how to activate the checking.
http://msdn.microsoft.com/en-us/library/ms235402.aspx describes the modern alternative.
WSAAPI expands to the calling convention used for the socket library functions. This ensures that the code for calls to the functions are generated correctly, even if the calling code is set to build with a different calling convention.
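To give a feel for what these annotations express, a small sketch using the newer SAL names from <sal.h> (the function and parameter names are made up):
#include <sal.h>
#include <cstddef>

// 'src' is a readable buffer of srcLen elements; 'dst' is a writable buffer of
// dstLen elements. The code analyzer checks callers against these contracts.
void CopyName(_In_reads_(srcLen) const char* src, std::size_t srcLen,
              _Out_writes_(dstLen) char* dst, std::size_t dstLen)
{
    std::size_t n = (srcLen < dstLen) ? srcLen : dstLen;
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i];
}
When code analysis is not enabled, these annotations expand to nothing, so they have no effect on the generated code.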
It is a preprocessor macro that is defined as nothing. The purpose is to declare the interface so that the user of the interface knows the purpose of function arguments (in terms of input/output parameters).
WSAAPI refers to the Windows Sockets API (Winsock), Microsoft's socket API. It is based on Berkeley sockets.
For those looking to find the calling convention so they can call WSAAPI functions from another language, WSAAPI is defined in Winsock2.h as:
#define WSAAPI FAR PASCAL
Then in minwindef.h:
#define FAR far
#define far
#if (!defined(_MAC)) && ((_MSC_VER >= 800) || defined(_STDCALL_SUPPORTED))
#define pascal __stdcall
#else
#define pascal
#endif
#ifdef _MAC
#ifdef _68K_
#define PASCAL __pascal
#else
#define PASCAL
#endif
#elif (_MSC_VER >= 800) || defined(_STDCALL_SUPPORTED)
#define PASCAL __stdcall
#else
#define PASCAL pascal
#endif
An _MSC_VER of 800 is Visual C++ 1.0, which is ancient.
So it looks like if you're writing Mac code and _68K_ is defined, you get the __pascal calling convention. If you're using Visual C++ >= 1.0 and developing for Windows, it's the __stdcall calling convention. Otherwise, it's either __stdcall or nothing, depending on whether _STDCALL_SUPPORTED is defined.
So basically WSAAPI probably evaluates to __stdcall on your machine.
