C++ Binary Writing/Reading on 32-bit to/from 64-bit

If you have a binary output stream and write integers to a file on a 32-bit Windows computer, would you then be able to read the same integers from that same file on a 64-bit Windows computer?
My guess would be no, since an integer on a 32-bit computer is 4 bytes, whereas an integer on a 64-bit computer is 8 bytes.
So does the following code work if the files have to be readable and writable by both 64-bit and 32-bit computers, regardless of the OS, computer architecture and data type? If not, how would one achieve that, given that the files have to be in binary form?
Writing
std::ofstream ofs("example.bin", std::ios::binary);
int i = 128;
ofs.write((char*) (&i), sizeof(i));
ofs.close();
Reading
std::ifstream ifs("example.bin", std::ios::binary);
int i = 0;
ifs.read((char*) (&i), sizeof(i));
ifs.close();

While int is 4 bytes on almost all modern platforms (32-bit and 64-bit), there is no guarantee of its size. So for serializing data into a file or other binary stream, you should prefer the fixed-width integer types from the header <cstdint>, which were introduced in C++11 (some compilers also provide it in C++03 mode):
#include <cstdint>
...
int32_t i = 128;
ofs.write((char*)(&i), sizeof(i));
...
Another option is to enforce that a certain type has a certain size, e.g. that int has size 4. To make sure your program won't compile if this is not true, use static_assert:
...
int i = 128;
static_assert(sizeof(i) == 4, "Field i has to have size 4.");
ofs.write((char*)(&i), sizeof(i));
...
While this may sound pointless given that we have fixed-width integers as above, it can be useful if you want to store a whole struct about which you made assumptions based on a certain version of some library. Example: vec4 from glm is documented to contain four floats, so when serializing this struct, it's good to check this statically in order to catch future library changes (unlikely but possible).
Another very important thing to consider, however, is the endianness of integral types, which varies among platforms. Modern x86 desktop platforms are little-endian, so I'd prefer that for your binary file format; but if the platform uses big endian, you need to convert (reverse the byte order).
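For illustration, here is a minimal sketch of writing a 32-bit value with a fixed little-endian byte order regardless of the host's endianness; the helper name write_le32 is just a placeholder for this example:
#include <cstdint>
#include <fstream>

// Minimal sketch: write a 32-bit value with the least significant byte first
// (little-endian file layout), independent of the host's native byte order.
void write_le32(std::ofstream& ofs, std::uint32_t value)
{
    char bytes[4];
    bytes[0] = static_cast<char>(value & 0xFF);
    bytes[1] = static_cast<char>((value >> 8) & 0xFF);
    bytes[2] = static_cast<char>((value >> 16) & 0xFF);
    bytes[3] = static_cast<char>((value >> 24) & 0xFF);
    ofs.write(bytes, sizeof(bytes));
}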

There's no guarantee for the size of an int in C++. All you know is that it will be at least as big as a short int and no larger than a long int. The compiler is free to choose an appropriate size within these constraints. While most will choose 32-bits as the size of an int, some won't.
If you need a type that is always 32 bits, you can use the int32_t type. Use
#include <stdint.h>
to get this type.

Related

Converting a uint8_t to its binary representation

I have a variable of type uint8_t which I'd like to serialize and write to a file (which should be quite portable, at least for Windows, which is what I'm aiming at).
Trying to write it to a file in its binary form, I came across this working snippet:
uint8_t m_num = 3;
unsigned int s = (unsigned int)(m_num & 0xFF);
file.write((wchar_t*)&s, 1); // file = std::wofstream
First, let me make sure I understand what this snippet does - it takes my var (which is basically an unsigned char, 1 byte long), converts it into an unsigned int (which is 4 bytes long, and not so portable), and using & 0xFF "extracts" only the least significant byte.
Now, there are two things I don't understand:
Why convert it into unsigned int in the first place, why can't I simply do something like
file.write((wchar_t*)&m_num, 1); or reinterpret_cast<wchar_t *>(&m_num)?
How would I serialize a longer type, say a uint64_t (which is 8 bytes long)? unsigned int may or may not be enough here.
uint8_t is 1 byte, same as char
wchar_t is 2 bytes on Windows and 4 bytes on Linux. Its representation also depends on endianness. You should avoid wchar_t if portability is a concern.
You can just use std::ofstream. Windows provides an additional std::ofstream constructor which accepts a UTF-16 file name. This way your code is compatible with Windows UTF-16 filenames and you can still use std::fstream. For example:
int i = 123;
std::ofstream file(L"filename_in_unicode.bin", std::ios::binary);
file.write((char*)&i, sizeof(i)); //sizeof(int) is 4
file.close();
...
std::ifstream fin(L"filename_in_unicode.bin", std::ios::binary);
fin.read((char*)&i, 4); // output: i = 123
This is relatively simple because it's only storing integers. This will work on different Windows systems, because Windows is always little-endian, and int size is always 4.
But some systems are big-endian; you would have to deal with that separately.
If you use formatted I/O, for example fout << 123456, then the integer will be stored as the text "123456". Formatted I/O is portable, but it takes a little more disk space and can be a little slower.
It's compatibility versus performance. If you have large amounts of data (several megabytes or more) and you can deal with compatibility issues in the future, then go ahead and write raw bytes. Otherwise it's easier to use formatted I/O. The performance difference is usually not measurable.
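To make the trade-off concrete, here is a minimal sketch contrasting the two approaches (the file names are placeholders):
#include <cstdint>
#include <fstream>

int main()
{
    std::int32_t value = 123456;

    // Formatted (text) I/O: portable and human-readable, but larger on disk.
    std::ofstream text_out("value.txt");
    text_out << value;

    // Raw binary I/O: compact, but the reader must know the size and byte order.
    std::ofstream bin_out("value.bin", std::ios::binary);
    bin_out.write(reinterpret_cast<const char*>(&value), sizeof(value));
}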
It is impossible to write uint8_t values to a wofstream because a wofstream only writes wide characters and doesn't handle binary values at all.
If what you want to do is to write a wide character representing a code point between 0 and 255, then your code is correct.
If you want to write binary data to a file then your nearest equivalent is ofstream, which will allow you to write bytes.
To answer your questions:
wofstream::write writes wide characters, not bytes. If you reinterpret the address of m_num as the address of a wide character, you will be writing a 16-bit or 32-bit (depending on platform) wide character of which the first byte (that is, the least significant or most significant, depending on platform) is the value of m_num and the remaining bytes are whatever happens to occur in memory after m_num. Depending on the character encoding of the wide characters, this may not even be a valid character. Even if valid, it is largely nonsense. (There are other possible problems if wofstream::write expects a wide-character-aligned rather than a byte-aligned input, or if m_num is immediately followed by unreadable memory).
If you use wofstream then this is a mess, and I shan't address it. If you switch to a byte-oriented ofstream then you have two choices.
1. If you will only ever be reading the file on the same system, file.write(reinterpret_cast<const char*>(&myint64value), sizeof(myint64value)) will work. The sequence in which the bytes of the 64-bit value are written will depend on the platform, but the same sequence will be used when you read them back, so this doesn't matter. Don't try to do something analogous with wofstream, because it's dangerous!
2. Extract each of the 8 bytes of myint64value separately (shift right by a multiple of 8 bits and then take the bottom 8 bits) and write each one. This is fully portable because you control the order in which the bytes are written.
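A minimal sketch of option 2, assuming a byte-oriented std::ofstream (the helper name write_u64_le and the little-endian byte order are choices made for this illustration):
#include <cstdint>
#include <fstream>

// Minimal sketch of option 2: write the 8 bytes of a 64-bit value in a
// fixed, explicit order (least significant byte first).
void write_u64_le(std::ofstream& file, std::uint64_t value)
{
    for (int i = 0; i < 8; ++i)
    {
        char byte = static_cast<char>((value >> (8 * i)) & 0xFF);
        file.write(&byte, 1);
    }
}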

Portability for Binary File in C++?

I have a question regarding binary I/O and the portability of the binary file.
Let's say the PC running my software uses 8 bytes for storing a double variable.
The binary file generated will have 8 bytes for a double variable.
Now say the file is being opened on a PC which uses 6 bytes for a double variable (just assuming).
Then the application will read only 6 bytes from the file and store them in the double variable in memory.
Not only does this result in corrupted data, but the data read after the double will also be incorrect because of the 2-byte offset created by reading too few bytes.
I want to support my application for not only 32/64 bit, but also Windows, Ubuntu PC's.
So how do you make sure that the data read from the same file in any PC would be the same?
In general, you should wrap data to be stored in binary files in your own data structures and implement platform-independent read/write operations for those data structures - basically, the size of a binary data structure written to disk should be the same for all platforms (the maximum possible size of the elementary data over all supported platforms).
When writing data on a platform with a smaller data size, the data should be padded with extra 0 bytes to ensure the size of the recorded data stays the same.
When reading, the whole record can be read in fixed blocks of known size, and conversion should be performed depending on the platform it was written on and the platform it is being read on. This should take care of endianness too. You may want to include a header indicating the data sizes, to distinguish between files recorded on different platforms when reading them.
This gives truly platform-independent serialization for a binary file.
Example for doubles:
#include <fstream>

class CustomDouble
{
public:
    double val;
    static const int DISK_SIZE;

    void toFile(std::ofstream &file)
    {
        int bytesWritten(0);
        file.write(reinterpret_cast<const char*>(&val), sizeof(val));
        bytesWritten += sizeof(val);
        while (bytesWritten < CustomDouble::DISK_SIZE)
        {
            char byte(0);
            file.write(&byte, sizeof(byte));
            bytesWritten += sizeof(byte);
        }
    }
};
const int CustomDouble::DISK_SIZE = 8;
This ensures you always write 8 bytes regardless of the size of double on your platform. When you read the file, you always read those 8 bytes, still as binary, and do conversions if necessary depending on which platform it was written on and which it is being read on (you will probably add a small header to the file to identify the platform it was recorded on).
While custom conversion does add some overhead, it is far less than that of storing values as text, and normally you will only perform conversions for incompatible platforms, while for the same platform there is no overhead.
cstdint includes type definitions that are a fixed size, so int32_t will always be 4 bytes long. You can use these in place of regular types when the size of the type is important to you.
Use Google Protocol Buffers or any other cross-platform serialization library. You can also roll your own solution, based on the fact that char is guaranteed to be 1 byte (i.e. serialize everything into char arrays).
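As a sketch of the roll-your-own approach, here is one way the reading side could reassemble a 32-bit value from individual bytes, assuming the file stores the least significant byte first (the helper name read_le32 is made up for this example):
#include <cstdint>
#include <fstream>

// Minimal sketch: rebuild a 32-bit value from 4 bytes read from a file that
// stores the least significant byte first.
std::uint32_t read_le32(std::ifstream& ifs)
{
    unsigned char bytes[4] = {0, 0, 0, 0};
    ifs.read(reinterpret_cast<char*>(bytes), sizeof(bytes));
    return static_cast<std::uint32_t>(bytes[0])
         | (static_cast<std::uint32_t>(bytes[1]) << 8)
         | (static_cast<std::uint32_t>(bytes[2]) << 16)
         | (static_cast<std::uint32_t>(bytes[3]) << 24);
}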

fixed length data types in C/C++

I've heard that the size of data types such as int may vary across platforms.
My first question is: can someone give an example of what goes wrong when a program assumes an int is 4 bytes, but on a different platform it is, say, 2 bytes?
Another question I had is related. I know people solve this issue with typedefs, so you have types like u8, u16, u32 - which are guaranteed to be 8 bits, 16 bits and 32 bits, regardless of the platform. My question is: how is this usually achieved? (I am not referring to the types from the stdint library - I am curious how one can manually enforce that some type is always, say, 32 bits regardless of the platform.)
I know people solve this issue with typedefs, so you have types like u8, u16, u32 - which are guaranteed to be 8 bits, 16 bits and 32 bits, regardless of the platform
There are some platforms which have no types of certain sizes (for example TI's 28xxx, where the size of char is 16 bits). In such cases, it is not possible to have an 8-bit type (unless you really want one, but that may introduce a performance hit).
how is this achieved usually?
Usually with typedefs. C99 (and C++11) provide these typedefs in a header. So just use them.
can someone give an example of what goes wrong when a program assumes an int is 4 bytes, but on a different platform it is, say, 2 bytes?
The best example is communication between systems with different type sizes. When sending an array of ints from one platform to another where sizeof(int) differs, one has to take extreme care.
Also, saving an array of ints in a binary file on a 32-bit platform and reinterpreting it on a 64-bit platform.
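A minimal sketch of the non-portable pattern being warned about (the file name is a placeholder); a reader compiled with a different sizeof(int) would misinterpret this file:
#include <cstdio>

int main()
{
    int values[4] = {1, 2, 3, 4};

    std::FILE* fp = std::fopen("values.bin", "wb");
    if (fp)
    {
        // Non-portable: the number of bytes per element is sizeof(int), which
        // may be 2, 4 or 8 depending on the platform. A reader built with a
        // different sizeof(int) will misinterpret the file. Using int32_t
        // instead would pin the on-disk element size.
        std::fwrite(values, sizeof(int), 4, fp);
        std::fclose(fp);
    }
}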
In earlier iterations of the C standard, you generally made your own typedef statements to ensure you got a (for example) 16-bit type, based on #define strings passed into the compiler, for example:
gcc -DINT16_IS_LONG ...
Nowadays (C99 and above), there are specific types such as uint16_t, the exactly 16-bit wide unsigned integer.
Provided you include stdint.h, you get exact-width types, at-least-that-width types, fastest types with a given minimum width, and so on, as documented in C99 7.18 Integer types <stdint.h>. If an implementation has compatible types, it is required to provide them.
Also very useful is inttypes.h, which adds some other neat features for format conversion of these new types (printf and scanf format strings).
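For example, a small sketch using the PRId32 macro so the printf format always matches int32_t:
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    std::int32_t x = 123456;
    // PRId32 expands to the printf conversion specifier that matches int32_t
    // on the current platform, so the format string stays portable.
    std::printf("x = %" PRId32 "\n", x);
}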
For the first question: Integer Overflow.
For the second question: for example, to typedef an unsigned 32-bit integer on a platform where int is 4 bytes, use:
typedef unsigned int u32;
On a platform where int is 2 bytes while long is 4 bytes:
typedef unsigned long u32;
In this way, you only need to modify one header file to make the types cross-platform.
If there are platform-specific macros, this can be achieved without manual modification:
#if defined(PLAT1)
typedef unsigned int u32;
#elif defined(PLAT2)
typedef unsigned long u32;
#endif
If C99 stdint.h is supported, it's preferred.
First of all: Never write programs that rely on the width of types like short, int, unsigned int,....
Basically: "never rely on the width, if it isn't guaranteed by the standard".
If you want to be truly platform independent and store e.g. the value 33000 as a signed integer, you can't just assume that an int will hold it. An int is only guaranteed a range of at least -32767 to 32767 or -32768 to 32767 (depending on ones' or two's complement). That's just not enough, even though int usually is 32 bits and therefore capable of storing 33000. For this value you definitely need a type wider than 16 bits, so you simply choose int32_t or int64_t. If that type doesn't exist, the compiler will report an error, but it won't be a silent mistake.
Second: C++11 provides a standard header for fixed-width integer types. None of these are guaranteed to exist on your platform, but when they exist, they are guaranteed to have the exact width. See the cppreference.com documentation for a reference. The types are named in the format int[n]_t and uint[n]_t where n is 8, 16, 32 or 64. You'll need to include the header <cstdint>. The C header is of course <stdint.h>.
Usually, the issue happens when you max out the number or when you're serializing. A less common scenario happens when someone makes an explicit size assumption.
In the first scenario:
int x = 32000;
int y = 32000;
int z = x+y; // can cause overflow for 2 bytes, but not 4
In the second scenario,
struct header {
    int magic;
    int w;
    int h;
};
then one goes to fwrite:
header h;
// fill in h
fwrite(&h, sizeof(h), 1, fp);
// this is all fine and good until one freads from an architecture with a different int size
In the third scenario:
int* x = new int[100];
char* buff = (char*)x;
// now try to change the 3rd element of x via buff assuming int size of 2
*((int*)(buff+2*2)) = 100;
// (of course, it's easy to fix this with sizeof(int))
If you're using a relatively new compiler, I would use uint8_t, int8_t, etc. in order to be sure of the type size.
With older compilers, typedefs are usually defined on a per-platform basis. For example, one may do:
#ifdef _WIN32
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
// and so on...
#endif
In this way, there would be a header per platform that defines specifics of that platform.
I am curious manually, how can one enforce that some type is always say 32 bits regardless of the platform??
If you want your (modern) C++ program's compilation to fail if a given type is not the width you expect, add a static_assert somewhere. I'd add this around where the assumptions about the type's width are being made.
static_assert(sizeof(int) == 4, "Expected int to be four chars wide but it was not.");
char is 8 bits on most commonly used platforms, but not all platforms work this way.
Well, first example - something like this:
int a = 45000; // both a and b
int b = 40000; // does not fit in 2 bytes.
int c = a + b; // overflows on 16bits, but not on 32bits
If you look into the cstdint header, you will find how all the fixed-size types (int8_t, uint8_t, etc.) are defined - the only thing that differs between architectures is this header file. So, on one architecture int16_t could be:
typedef int int16_t;
and on another:
typedef short int16_t;
Also, there are other types which may be useful, like int_least16_t.
If a type is smaller than you think then it may not be able to store a value you need to store in it.
To create fixed-size types, you read the documentation for the platforms to be supported and then define typedefs based on #ifdef for the specific platforms.
can someone bring some example, what goes wrong, when program assumes an int is 4 bytes, but on a different platform it is say 2 bytes?
Say you've designed your program to read 100,000 inputs and you're counting them using an unsigned int, assuming a size of 32 bits (a 32-bit unsigned int can count up to 4,294,967,295). If you compile the code on a platform (or with a compiler) that has 16-bit integers (a 16-bit unsigned int can count only up to 65,535), the value will wrap around past 65,535 and give a wrong count.
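A contrived sketch of that wrap-around, using a deliberately 16-bit counter so it reproduces on any platform:
#include <cstdint>
#include <cstdio>

int main()
{
    // Deliberately 16-bit to reproduce the wrap-around described above.
    std::uint16_t count = 0;
    for (long i = 0; i < 100000; ++i)
        ++count;   // silently wraps past 65535

    // Prints 34464 (100000 modulo 65536), not 100000.
    std::printf("count = %u\n", static_cast<unsigned>(count));
}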
Compilers are responsible for obeying the standard. When you include <cstdint> or <stdint.h>, they shall provide types of the sizes the standard specifies.
Compilers know which platform they're compiling the code for, so they can define internal macros or use other magic to build the suitable types. For example, a compiler for a 32-bit machine might define a __32BIT__ macro and have lines like these in its stdint header file:
#ifdef __32BIT__
typedef __int32_internal__ int32_t;
typedef __int64_internal__ int64_t;
...
#endif
and you can use it.
Bit flags are the trivial example. 0x10000 will cause you problems: you can't mask with it or check whether a bit is set in that 17th position if everything is being truncated or smashed to fit into 16 bits.
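A minimal sketch of that failure mode: the flag in bit position 16 simply does not survive truncation to a 16-bit type:
#include <cstdint>
#include <cstdio>

int main()
{
    const std::uint32_t FLAG = 0x10000;   // bit 16: needs more than 16 bits

    std::uint16_t flags16 = static_cast<std::uint16_t>(FLAG);   // truncated to 0
    std::uint32_t flags32 = FLAG;

    std::printf("16-bit type: %s\n", (flags16 & FLAG) ? "bit set" : "bit lost");  // bit lost
    std::printf("32-bit type: %s\n", (flags32 & FLAG) ? "bit set" : "bit lost");  // bit set
}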

c++: working with bytes

My problem is that I need to load a binary file and work with single bits from the file. After that I need to save it out as bytes, of course.
My main question is which data type to work with - char or long int? Can I somehow work with chars?
Unless performance is mission-critical here, use whatever makes your code easiest to understand and maintain.
Before beginning to code anything, make sure you understand endianness, C++ type sizes, and how strange they might be.
unsigned char is the only type with a fixed size (the natural byte of the machine, normally 8 bits). So if you design for portability, that is a safe bet. But it isn't hard to just use unsigned int or even a long long to speed up the process and use sizeof to find out how many bits you are getting in each read, although the code gets more complex that way.
You should know that for true portability none of the built-in C++ types has a fixed size. An unsigned char might have 9 bits, and an unsigned int might only cover the range 0 to 65535, as noted in other answers.
Another alternative, as user1200129 suggests, is to use the Boost integer library to reduce all these uncertainties. That is, if you have Boost available on your platform. Although if you are going for external libraries, there are many serialization libraries to choose from.
But first and foremost, before you even start optimizing, make something simple that works. Then you can start profiling when you experience timing issues.
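A minimal sketch along those lines, reading a file into unsigned char bytes, touching individual bits, and writing the bytes back out (the file names and bit positions are placeholders):
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    // Read the whole file as raw bytes.
    std::ifstream in("input.bin", std::ios::binary);
    std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                     std::istreambuf_iterator<char>());

    if (!bytes.empty())
    {
        bool bit3 = (bytes[0] >> 3) & 1u;   // test bit 3 of the first byte
        bytes[0] |= 1u << 5;                // set bit 5 of the first byte
        (void)bit3;
    }

    // Write the (possibly modified) bytes back out.
    std::ofstream out("output.bin", std::ios::binary);
    out.write(reinterpret_cast<const char*>(bytes.data()),
              static_cast<std::streamsize>(bytes.size()));
}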
It really just depends on what you want to do, but I would say that in general the best speed comes from sticking with the integer size your program is compiled for. So if you have a 32-bit program, choose 32-bit integers, and if you have a 64-bit program, choose 64-bit integers.
This could be different depending on whether your file contains individual bytes or integers. Without knowing the exact structure of your file, it's difficult to determine the optimal choice.
Your sentences are not really correct English, but as far as I can interpret the question, you are better off using the unsigned char type (which is a byte) so that you can modify each byte separately.
If you are dealing with bytes, then the best way to do this is to use a size-specific type.
#include <algorithm>
#include <iterator>
#include <cstdint>
#include <vector>
#include <fstream>

int main()
{
    std::vector<int8_t> file_data;
    std::ifstream file("file_name", std::ios::binary);
    file >> std::noskipws; // don't skip whitespace bytes when reading

    // read
    std::copy(std::istream_iterator<int8_t>(file),
              std::istream_iterator<int8_t>(),
              std::back_inserter(file_data));

    // write
    std::ofstream out("outfile", std::ios::binary);
    std::copy(file_data.begin(), file_data.end(),
              std::ostream_iterator<int8_t>(out));
}
If you need to enforce how many bits are in an integer type, you need to be using the <stdint.h> header. It is present in both C and C++. It defines types such as uint8_t (an 8-bit unsigned integer), which are guaranteed to resolve to the proper type on the platform. It also tells other programmers who read your code that the number of bits is important.
If you're worried about performance, you might want to use the larger-than-8-bit types, such as uint32_t. However, when reading and writing files, you will need to pay attention to the endianness of your system. Notably, if you have a little-endian system (e.g. x86, most ARM), then the 32-bit value 0x12345678 will be written to the file as the four bytes 0x78 0x56 0x34 0x12, while if you have a big-endian system (e.g. SPARC, PowerPC, Cell, some ARM, and the Internet), it will be written as 0x12 0x34 0x56 0x78. (The same goes for reading.) You can, of course, work with 8-bit types and avoid this issue entirely.

C++: Datatypes, which to use and when?

I've been told that I should always use size_t when I want a 32-bit unsigned int. I don't quite understand why, but I think it has something to do with the fact that if someone compiles the program on a 16- or 64-bit machine, the unsigned int would become 16 or 64 bits but size_t won't. But why doesn't it? And how can I force the bit sizes to be exactly what I want?
So, where is the list of which data type to use and when? For example, is there a size_t alternative for unsigned short, or for a 32-bit int, etc.? How can I be sure my data types have as many bits as I chose in the first place, without needing to worry about different bit sizes on other machines?
Mostly I care more about the memory used than about the marginal speed boost I'd get from doubling the memory usage, since I don't have much RAM. So I want to stop worrying about whether everything will break if my program is compiled on a machine that's not 32-bit. For now I've always used size_t when I want it to be 32-bit, but for short I don't know what to do. Can someone help me clear my head?
On the other hand: if I need a 64-bit variable, can I use it on a 32-bit machine successfully? And what is that data type's name (if I want it to always be 64 bits)?
size_t is for storing object sizes. It is of exactly the right size for that and only that purpose - 4 bytes on 32-bit systems and 8 bytes on 64-bit systems. You shouldn't confuse it with unsigned int or any other data type. It might be equivalent to unsigned int or might not be, depending on the implementation (system bitness included).
Once you need to store something other than an object size you shouldn't use size_t and should instead use some other datatype.
As a side note: For containers, to indicate their size, don't use size_t, use container<...>::size_type
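For instance, a small sketch of iterating with the container's own size type:
#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v = {10, 20, 30};

    // Use the container's own size_type rather than size_t or unsigned int.
    for (std::vector<int>::size_type i = 0; i < v.size(); ++i)
        std::printf("%d\n", v[i]);
}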
boost/cstdint.hpp can be used to be sure integers have the right size.
size_t is not necessarily 32-bit. It has been 16-bit with some compilers. It's 64-bit on a 64-bit system.
The C++ standard guarantees, via reference down to the C standard, that long is at least 32 bits.
int is only formally guaranteed to be 16 bits, but in practice I wouldn't worry: the chance that any ordinary code will be used on a 16-bit system is slim indeed, and on any 32-bit system int is 32 bits. Of course it's different if you're coding for a 16-bit system like some embedded computer. But in that case you'd probably be writing system-specific code anyway.
Where you need exact sizes you can use <stdint.h> if your compiler supports that header (it was introduced in C99, and the current C++ standard stems from 1998), or alternatively the corresponding Boost library header boost/cstdint.hpp.
However, in general, just use int. ;-)
size_t is not always 32-bit. For example, it's 64-bit on 64-bit platforms.
For fixed-size integers, stdint.h is best. But it doesn't come with VS2008 or earlier - you have to download it separately. (It comes as a standard part of VS2010 and most other compilers).
Since you're using VS2008, you can use the MS-specific __int32, unsigned __int32, etc. types; see the Microsoft documentation for details.
To answer the 64-bit question: Most modern compilers have a 64-bit type, even on 32-bit systems. The compiler will do some magic to make it work. For Microsoft compilers, you can just use the __int64 or unsigned __int64 types.
Unfortunately, one of the quirks of data types is that their sizes depend a great deal on which compiler you're using. Naturally, if you're only compiling for one target, there is no need to worry - just find out how large the type is using sizeof(...).
If you need to cross-compile, you could ensure compatibility by defining your own typedefs for each target (surrounded by #ifdef blocks referencing which target you're cross-compiling for).
If you're ever concerned that it could be compiled on a system that uses types with even weirder sizes than you have anticipated, you could always assert(sizeof(short)==2) or equivalent, so that you could guarantee at runtime that you're using the correctly sized types.
Your question is tagged visual-studio-2008, so I would recommend looking in the documentation for that compiler for pre-defined data types. Microsoft has a number that are predefined, such as BYTE, DWORD, and LARGE_INTEGER.
Take a look in windef.h and winnt.h for more.