I'm using the same ODBC code on both Windows and Linux (unixODBC).
On Windows, moving between SQLWCHAR and std::wstring is natural, but on Linux I must convert the encoding from std::wstring (UTF-32) to SQLWCHAR (UTF-16) and vice versa, and store the results in std::u16string.
Whenever I pass a SQLWCHAR* where a char16_t* is expected, or vice versa, I get a compilation error, which I can easily resolve with a cast, using macros like these:
#ifdef WIN32
#define SQL(X) (X)
#define CHAR16(X) (X)
#else
#define SQL(X) ((const SQLWCHAR*)(X))
#define CHAR16(X) ((const char16_t*)(X))
#endif
Both types appear to have the same size, but see this on cppreference:
int_least8_t, int_least16_t, int_least32_t, int_least64_t: the smallest signed integer type with a width of at least 8, 16, 32 and 64 bits respectively (typedef)
the "at least" made me start to be worried...
So I would like to ask, is this casting safe?
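A compile-time check along these lines would at least confirm that the two types have the same size on each platform (just a sketch; sqltypes.h is where the ODBC headers typically declare SQLWCHAR):
#include <sqltypes.h>   // SQLWCHAR

// Fail the build if the two character types ever differ in size on this platform.
static_assert(sizeof(SQLWCHAR) == sizeof(char16_t),
              "SQLWCHAR and char16_t have different sizes; the pointer casts are not safe here");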
Related
I have some C++ code that I need to integrate into an iOS app. The Windows C++ code handles Unicode using tchar.h. I have made the following defines for iOS:
#include <wchar.h>
#define _T(x) x
#define TCHAR char
#define _tremove unlink
#define _stprintf sprintf
#define _sntprintf vsnprintf
#define _tcsncpy wcsncpy
#define _tcscpy wcscpy
#define _tcscmp wcscmp
#define _tcsrchr wcsrchr
#define _tfopen fopen
When trying to build the app, many of these are either missing (e.g. wcscpy) or have the wrong arguments. The coder responsible for the C++ code said I should use char instead of wchar_t, so I defined TCHAR as char. Does anyone have a clue as to how this should be done?
The purpose of TCHAR (and FYI, it is _TCHAR in the C runtime; TCHAR is for the Win32 API) is to allow code to switch between the char and wchar_t APIs at compile time, but your defines are mixing the two. The wcs functions are for wchar_t, so you need to change those defines to their char counterparts, to match your other char-based defines:
#define _tcsncpy strncpy
#define _tcscpy strcpy
#define _tcscmp strcmp
#define _tcsrchr strrchr
Also, you are mapping _sntprintf to the wrong C function. It needs to be mapped to snprintf() instead of vsnprintf():
#define _sntprintf snprintf
snprintf() and vsnprintf() are declared very differently:
int snprintf ( char * s, size_t n, const char * format, ... );
int vsnprintf (char * s, size_t n, const char * format, va_list arg );
Which is likely why you are getting "wrong arguments" errors.
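Pulling those fixes together, the char-based mapping would look roughly like this (a sketch; keep whichever of your original defines already work):
#include <stdio.h>    // sprintf, snprintf, fopen
#include <string.h>   // strncpy, strcpy, strcmp, strrchr
#include <unistd.h>   // unlink

#define _T(x)       x
#define TCHAR       char
#define _tremove    unlink
#define _stprintf   sprintf
#define _sntprintf  snprintf   // was vsnprintf
#define _tcsncpy    strncpy    // was wcsncpy
#define _tcscpy     strcpy     // was wcscpy
#define _tcscmp     strcmp     // was wcscmp
#define _tcsrchr    strrchr    // was wcsrchr
#define _tfopen     fopen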
I have already used boost::format in many cases, but I found one where the Windows implementation doesn't behave as I expected, because it throws an exception:
boost::bad_format_string: format-string is ill-formed
I use macros to define the hex output format for a 64-bit number on different platforms:
#if (defined(WIN32) || defined(WIN64))
#define FORMATUI64X_09 "%09I64X"
#define FORMATUI64X_016 "%016I64X"
#else
#if defined __x86_64__
#define FORMATUI64X_09 "%09lX"
#define FORMATUI64X_016 "%016lX"
#else
#define FORMATUI64X_09 "%09llX"
#define FORMATUI64X_016 "%016llX"
#endif
#endif
and call format as below:
string msg = (boost::format("0x"FORMATUI64X_016"(hex) \t %i \t %d \t %s \t %i\t ") % an uint64_t % an int % an uint % a char* % an uint).str();
Note that I use a syntax that works perfectly with fprintf.
I suppose the problem comes from formatting the uint64_t as hex, but do you know how to write the same line in a way that works on all platforms?
I64X is not a valid format specification for boost::format (it's Microsoft-specific). The format specification types are not platform-specific. Boost does not use the [sf]printf routines provided by your implementation's runtime, which is why the fact that it works with Visual Studio's fprintf doesn't really affect what boost::format supports. You should be using either %lX or %llX, as your non-Windows clause is doing.
I would probably just use %llX everywhere and cast output variables to unsigned long long, e.g.:
static_assert(sizeof(unsigned long long) >= sizeof(uint64_t), "long long must be >= 64 bits");
auto s = (boost::format("0x%016llx") % static_cast<unsigned long long>(u64)).str();
This should work anywhere that unsigned long long is sufficient to represent uint64_t, and you can add a static assertion (as shown) to ensure that.
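A self-contained version of that approach, for illustration (the variable names and sample value are made up):
#include <boost/format.hpp>
#include <cstdint>
#include <iostream>
#include <string>

int main()
{
    std::uint64_t value = 0x123456789ABCDEFULL;   // sample value

    // Guard against the (unlikely) case that unsigned long long is narrower than uint64_t.
    static_assert(sizeof(unsigned long long) >= sizeof(std::uint64_t),
                  "unsigned long long must be at least 64 bits");

    std::string msg =
        (boost::format("0x%016llX") % static_cast<unsigned long long>(value)).str();
    std::cout << msg << '\n';   // prints 0x0123456789ABCDEF
}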
I'm trying to compile SkyFireEMU (https://github.com/ProjectSkyfire/SkyFireEMU) with Visual Studio 2010 (32-bit), but I get an error (on almost all files of "worldserver"):
fatal error C1189: #error : sizeof(void *) is neither sizeof(int) nor sizeof(long) nor sizeof(long long)
This redirects me to this piece of code:
#if SIZEOF_CHARP == SIZEOF_INT
typedef int intptr;
#elif SIZEOF_CHARP == SIZEOF_LONG
typedef long intptr;
#elif SIZEOF_CHARP == SIZEOF_LONG_LONG
typedef long long intptr;
#else
#error sizeof(void *) is neither sizeof(int) nor sizeof(long) nor sizeof(long long)
#endif
Can someone help me fix this problem? What does the error mean? I really don't know what is going wrong.
The code is old. Today you can use typedef intptr_t intptr (aka std::intptr_t in <cstdint>).
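For example, a minimal sketch assuming a C++11 compiler (intptr being the project's own alias name):
#include <cstdint>

// Replaces the whole #if SIZEOF_CHARP ... #endif block.
typedef std::intptr_t intptr;

// intptr_t is guaranteed to be able to hold a converted void*, so no size juggling is needed.
static_assert(sizeof(intptr) >= sizeof(void*), "intptr must be able to hold a pointer");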
SIZEOF_CHARP is not being set appropriately (according to whoever wrote that code), hence the error message. Your best bet is to consult the SkyFireEMU documentation; you may need to define this macro before compiling, or something along those lines.
Having said this, I did a quick Google search and found this, which describes an identical error message. It suggests writing the following right before the block you provided:
#ifndef SIZEOF_CHARP
#define SIZEOF_CHARP SIZEOF_LONG
#endif
There may still be an underlying problem though, since this really only suppresses the error.
Right now I'm working on a project that extensively uses 64-bit unsigned integers in many parts of the code. So far we have only been compiling with gcc 4.6, but we are now porting some code to Windows. It's crucial that these unsigned ints are 64 bits wide. It has been suggested that we could use long long, but that's no good if long long happens to be bigger than 64 bits; we actually want a guarantee that it will be 64 bits, and writing something like static_assert(sizeof(long long) == 8) seems to be a bit of a code smell.
What is the best way to define something like uint64 that will compile across both gcc and msvc without needing to have different code syntax used everywhere?
What about including cstdint and using std::uint64_t?
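For example (a sketch; uint64 is just the alias name the question asks for, and both gcc 4.6 and recent MSVC provide <cstdint>):
#include <cstdint>

// std::uint64_t is required to be exactly 64 bits wide wherever it is provided,
// so no per-compiler #ifdef is needed.
typedef std::uint64_t uint64;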
You can use boost:
The typedef int#_t, with # replaced by the width, designates a signed
integer type of exactly # bits; for example int8_t denotes an 8-bit
signed integer type. Similarly, the typedef uint#_t designates an
unsigned integer type of exactly # bits.
See: http://www.boost.org/doc/libs/1_48_0/libs/integer/doc/html/boost_integer/cstdint.html
Especially this header:
http://www.boost.org/doc/libs/1_48_0/boost/cstdint.hpp
This is what I do:
#ifndef u64
#ifdef WIN32
typedef unsigned __int64 u64;
#else // !WIN32
typedef unsigned long long u64;
#endif
#endif
On Windows you can use __int64, unsigned __int64, or typedefs: UINT64, INT64 etc.
Look at this
But yes, if code portability is a concern, use the standard typedefs, as suggested by others.
I have a Unicode application wherein we use _T(x), which is defined as follows.
#if defined(_UNICODE)
#define _T(x) L ##x
#else
#define _T(x) x
#endif
I understand that the L prefix produces a wchar_t string, which will be 4 bytes on any platform. Please correct me if I am wrong. My requirement is that L produce 2-byte characters. So, as a compiler hack, I started using the -fshort-wchar gcc flag. But now I need to move my application to zSeries, where I don't see any effect of the -fshort-wchar flag on that platform.
In order to port my application to zSeries, I need to modify the _T() macro in such a way that, even when using L ##x and without the -fshort-wchar flag, I get 2-byte wide character data. Can someone tell me how I can change the definition of L so that it is always 2 bytes in my application?
You can't - not without C++0x support. C++0x defines the following ways of declaring string literals:
"string of char characters in some implementation defined encoding" - char
u8"String of utf8 chars" - char
u"string of utf16 chars" - char16_t
U"string of utf32 chars" - char32_t
L"string of wchar_t in some implementation defined encoding" - wchar_t
Until C++0x is widely supported, the only way to encode a UTF-16 string in a cross-platform way is to break it up into bits:
// make a char16_t type to stand in until msvc/gcc/etc supports
// c++0x utf string literals
#ifndef CHAR16_T_DEFINED
#define CHAR16_T_DEFINED
typedef unsigned short char16_t;
#endif
const char16_t strABC[] = { 'a', 'b', 'c', '\0' };
// the same declaration would work for a type that changes from 8 to 16 bits:
#ifdef _UNICODE
typedef char16_t TCHAR;
#else
typedef char TCHAR;
#endif
const TCHAR strABC2[] = { 'a', 'b', 'c', '\0' };
The _T macro can only deliver the goods on platforms where wchar_t is 16 bits wide. And the alternative is still not truly cross-platform: the encoding of char and wchar_t is implementation-defined, so 'a' does not necessarily encode the Unicode code point for 'a' (0x61). Thus, to be strictly accurate, this is the only way of writing the string:
const TCHAR strABC[] = { '\x61', '\x62', '\x63', '\0' };
Which is just horrible.
Ah! The wonders of portability :-)
If you have a C99 compiler for all your platforms, use int_least16_t, uint_least16_t, ... from <stdint.h>. Most platforms also define int16_t but it's not required to exist (if the platform is capable of using exactly 16 bits at a time, the typedef int16_t must be defined).
Now wrap all the strings in arrays of uint_least16_t and make sure your code does not expect values of uint_least16_t to wrap at 65535 ...
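A sketch of that wrapping idea (str16len is just an illustrative helper name):
#include <stddef.h>
#include <stdint.h>

// "abc" as an array of uint_least16_t code units; the code points are written out
// explicitly because 'a' is not guaranteed to encode U+0061 on every platform.
static const uint_least16_t strABC[] = { 0x0061, 0x0062, 0x0063, 0x0000 };

// Length of a zero-terminated uint_least16_t string; note that no wrapping at
// 65535 is assumed, since uint_least16_t may be wider than 16 bits.
static size_t str16len(const uint_least16_t* s)
{
    size_t n = 0;
    while (s[n] != 0)
        ++n;
    return n;
}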