C++: working with bytes

My problem is that I need to load a binary file and work with individual bits from it. After that I need to save it out as bytes again, of course.
My main question is: what data type should I work in - char or long int? And can I somehow work with chars?

Unless performance is mission-critical here, use whatever makes your code easiest to understand and maintain.

Before you start coding anything, make sure you understand endianness, C++ type sizes, and how strange they can be.
unsigned char is the only type with a guaranteed size: sizeof(unsigned char) is always 1 (the natural byte of the machine, normally 8 bits). So if you design for portability, that is a safe bet. But it isn't hard to use unsigned int or even long long to speed up the process and use sizeof to find out how many bits you get in each read, although the code gets more complex that way.
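To illustrate the bit-level work the question asks about, here is a minimal sketch of testing, setting, clearing and toggling individual bits of an unsigned char (the value and bit positions are arbitrary):

#include <cstdio>

int main()
{
    unsigned char byte = 0x5A;                      // example value: 0101 1010

    // test bit 3 (bit 0 being the least significant bit)
    bool bit3 = (byte >> 3) & 1u;                   // true: bit 3 of 0x5A is set

    byte |= (1u << 7);                              // set bit 7    -> 0xDA
    byte &= static_cast<unsigned char>(~(1u << 1)); // clear bit 1  -> 0xD8
    byte ^= 1u;                                     // toggle bit 0 -> 0xD9

    std::printf("bit 3 was %d, byte is now 0x%02X\n",
                bit3, static_cast<unsigned>(byte));
}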
You should know that for true portability none of the built-in types of C++ has a fixed size. An unsigned char might have 9 bits, and int might cover a range as small as 0 to 65535, as noted in this and this answer.
Another alternative, as user1200129 suggests, is to use the Boost integer library to remove all these uncertainties - if Boost is available on your platform. Although if you are reaching for external libraries anyway, there are many serialization libraries to choose from.
But first and foremost, before you even start optimizing, make something simple that works. Then you can start profiling once you actually run into timing issues.

It really just depends on what you want to do, but in general the best speed comes from sticking with the integer size your program is compiled for. So if you have a 32-bit program, choose 32-bit integers, and if you have a 64-bit program, choose 64-bit.
This could be different if your file contains individual bytes rather than whole integers. Without knowing the exact structure of your file, it's difficult to say what the optimal choice is.

Your sentences are not really correct English, but as far as I can interpret the question, you had better use the unsigned char type (which is one byte) to be able to modify each byte separately.
Edit: changed according to comment.

If you are dealing with bytes then the best way to do this is to use a size specific type.
#include <algorithm>
#include <iterator>
#include <cstdint>
#include <vector>
#include <fstream>

int main()
{
    std::vector<int8_t> file_data;
    std::ifstream file("file_name", std::ios::binary);
    file >> std::noskipws; // binary data: don't let the stream skip whitespace bytes

    // read
    std::copy(std::istream_iterator<int8_t>(file),
              std::istream_iterator<int8_t>(),
              std::back_inserter(file_data));

    // write (binary mode as well, so no newline translation happens)
    std::ofstream out("outfile", std::ios::binary);
    std::copy(file_data.begin(), file_data.end(),
              std::ostream_iterator<int8_t>(out));
}
EDIT: fixed bug

If you need to enforce how many bits are in an integer type, you should use the <stdint.h> header. It is present in both C and C++. It defines types such as uint8_t (8-bit unsigned integer), which are guaranteed to resolve to the proper type on the platform. It also tells other programmers who read your code that the number of bits is important.
If you're worried about performance, you might want to use larger-than-8-bit types, such as uint32_t. However, when reading and writing files, you will need to pay attention to the endianness of your system. Notably, on a little-endian system (e.g. x86, most ARM), the 32-bit value 0x12345678 will be written to the file as the four bytes 0x78 0x56 0x34 0x12, while on a big-endian system (e.g. SPARC, PowerPC, Cell, some ARM, and the Internet) it will be written as 0x12 0x34 0x56 0x78 (the same goes for reading). You can, of course, work with 8-bit types and avoid this issue entirely.
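A quick way to observe this on your own machine is to copy the integer's bytes into an array and print them (a minimal sketch; the value is arbitrary):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    std::uint32_t value = 0x12345678;
    unsigned char bytes[sizeof value];
    std::memcpy(bytes, &value, sizeof value); // the same bytes a raw write() would emit

    // prints "78 56 34 12" on little-endian, "12 34 56 78" on big-endian
    for (unsigned char b : bytes)
        std::printf("%02X ", static_cast<unsigned>(b));
    std::printf("\n");
}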

Related

Advantages/Disadvantages of using __int16 (or int16_t) over int

As far as I understand, the number of bytes used for an int is system dependent. Usually, 2 or 4 bytes are used.
As per Microsoft's documentation, __int8, __int16, __int32 and __int64 are Microsoft-specific keywords. Furthermore, __int16 uses 16 bits (i.e. 2 bytes).
Question: What are the advantages/disadvantages of using __int16 (or int16_t)? For example, if I am sure that the value of my integer variable will never need more than 16 bits, will it be beneficial to declare the variable as __int16 var (or int16_t var)?
UPDATE: I see that several comments/answers suggest using int16_t instead of __int16, which is a good suggestion but not really an advantage/disadvantage of using __int16. Basically, my question is: what is the advantage/disadvantage of saving 2 bytes by using a 16-bit integer instead of an int?
Saving 2 bytes is almost never worth it. However, saving thousands of bytes is. If you have a large array containing integers, using a small integer type can save quite a lot of memory. This leads to faster code, because the less memory you use, the fewer cache misses you get (and cache misses are a major loss of performance).
TL;DR: this is beneficial in large arrays, but pointless for one-off variables.
The second use of these types is when dealing with binary files and messages. If you are reading a binary file that uses 16-bit integers, it's pretty convenient if you can represent that type exactly in your code.
BTW, don't use Microsoft's versions. Use the standard ones (std::int16_t).
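To put rough numbers on the array case, a small sketch (the element count is arbitrary):

#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 1000000;

    std::vector<std::int16_t> narrow(n); // ~2 MB of element data
    std::vector<std::int32_t> wide(n);   // ~4 MB of element data

    // half the bytes means roughly twice as many elements fit in each cache line
    std::printf("int16_t elements: %zu bytes\n", n * sizeof(std::int16_t));
    std::printf("int32_t elements: %zu bytes\n", n * sizeof(std::int32_t));
}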
It depends.
On x86, primitive types are generally aligned on their size. So 2-byte types would be aligned on a 2-byte boundary. This is useful when you have more than one of these short variables, because you will be saving 50% of space. That directly translates to better memory and cache utilization and thus theoretically, better performance.
On the other hand, doing arithmetic on shorter-than-int types usually involves widening conversion to int. So if you do a lot of arithmetic on these types, using int types might result in better performance (contrived example).
So if you care about performance of a critical section of code, profile it to find out for sure if using a certain data type is faster or slower.
A possible rule of thumb: if you're memory-bound (i.e. you have lots of variables and especially arrays), use data types as short as possible. If not, don't worry about it and use int types.
If you just need a shorter integer type for some reason, the language already has one - it's called short. Unless you know you need exactly 16 bits, there's really no good reason not to stick with the size-agnostic short and int types. The broad idea is that these types should align well with the target architecture (for example, see word size).
That being said, there's no need to use the platform-specific type (__int16); you can just use the standard one:
int16_t
See https://en.cppreference.com/w/cpp/types/integer for more information and standard types
Even if you still insist on __int16, you probably want a typedef, something like:
using my_short = __int16;
Update
Your main question is:
What is the advantage/disadvantage of
saving 2 bytes by using 16-bit version of an integer instead of int ?
If you have a lot of data (in the ballpark of at least 100,000 to 1,000,000 elements as a rule of thumb), then there can be an overall performance win from using less CPU cache. Overall there's no disadvantage to using a smaller type - except for the obvious one (the smaller range) - and the possible conversions explained in this answer.
The main reason for using these types is to be sure about the size of your variable across different architectures and compilers. We call this "code reusability" and "portability".
In higher-level modern languages all of this is handled by the compiler/interpreter/virtual machine/etc., so you don't need to worry about it, but it comes at some cost in performance and memory usage.
When you have some kind of limitation, you may need to optimize everything. The best example is embedded systems, which have a very limited amount of memory and run at low frequency. On the other hand, there are lots of compilers out there with different implementations: some interpret int as a 16-bit value and some as a 32-bit one.
For example, say you receive a specific stream of values over a communication system and you want to save them in a buffer or array, and you want to make sure the input data is always interpreted as 16 bits, nothing else.
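As a sketch of that scenario, assuming the incoming values have already been captured into a raw binary file and ignoring byte-order conversion (the file name and buffer size are made up):

#include <cstdint>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream stream("input.dat", std::ios::binary); // hypothetical capture file
    std::vector<std::uint16_t> buffer(256);

    // each element is exactly 16 bits on every conforming platform, so the
    // in-memory layout matches the 16-bit wire format (byte order aside)
    stream.read(reinterpret_cast<char*>(buffer.data()),
                static_cast<std::streamsize>(buffer.size() * sizeof(std::uint16_t)));
}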

C++ Binary Writing/Reading on 32bit to/from 64bit

If you have a binary output stream and write integers to a file on a 32-bit Windows computer, would you then be able to read the same integers from that same file on a 64-bit Windows computer?
My guess would be no, since an integer on a 32-bit computer would be 4 bytes, whereas an integer on a 64-bit computer would be 8 bytes.
So does the following code work if the files have to be readable and writable by both 64-bit and 32-bit computers, no matter the OS, computer architecture or data type? If not, how would one achieve that, given that the files have to be in binary form?
Writing
std::ofstream ofs("example.bin", std::ios::binary);
int i = 128;
ofs.write((char*) (&i), sizeof(i));
ofs.close();
Reading
std::ifstream ifs("example.bin", std::ios::binary);
int i = 0;
ifs.read((char*) (&i), sizeof(i));
ifs.close();
While int is 4 bytes on almost all modern platforms (32-bit and 64-bit), there is no guarantee of its size. So for serializing data into a file or another binary stream, you should prefer the fixed-width integer types from the header <cstdint>, which were introduced in C++11 (some compilers support them in C++03):
#include <cstdint>
...
int32_t i = 128;
ofs.write((char*)(&i), sizeof(i));
...
Another option is to enforce that a certain type has a certain size, e.g. that int has size 4. To make sure your program won't compile if that assumption is false, use static_assert:
...
int i = 128;
static_assert(sizeof(i) == 4, "Field i has to have size 4.");
ofs.write((char*)(&i), sizeof(i));
...
While this sounds stupid considering we have fixed-width integers as above, it can be useful if you want to store a whole struct about which you made assumptions for a certain version of some library. Example: vec4 from glm is documented to contain four floats, so when serializing this struct it's good to check that statically, in order to catch future library changes (unlikely but possible).
Another very important thing to consider is the endianness of integral types, which varies among platforms. Modern x86 desktop platforms are little-endian, so I'd prefer little endian for your binary file format; but if the platform uses big endian, you need to convert it (reverse the byte order).
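One common way to do that is to fix the file's byte order explicitly and assemble the bytes by shifting, so the same code works on any host. A minimal sketch (write_le32 is a hypothetical helper name, not a library function):

#include <cstdint>
#include <fstream>
#include <ostream>

// write a 32-bit value as little-endian bytes, independent of host byte order
void write_le32(std::ostream& out, std::uint32_t v)
{
    const char bytes[4] = {
        static_cast<char>( v        & 0xFF),
        static_cast<char>((v >>  8) & 0xFF),
        static_cast<char>((v >> 16) & 0xFF),
        static_cast<char>((v >> 24) & 0xFF),
    };
    out.write(bytes, 4);
}

int main()
{
    std::ofstream out("example.bin", std::ios::binary);
    write_le32(out, 0x12345678); // file contains 78 56 34 12 on any host
}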
There's no guarantee for the size of an int in C++. All you know is that it will be at least as big as a short int and no larger than a long int. The compiler is free to choose an appropriate size within these constraints. While most will choose 32-bits as the size of an int, some won't.
If you know your type is always 32-bits then you can use the int32_t type.
#include <stdint.h>
to get this type.

Doing serialization in c++

I was reading this page about serialization in c++.
http://www.parashift.com/c++-faq-lite/serialize-binary-format.html
The third bullet got me confused (the one that starts with: "If the binary data might get read by a different computer than the one that wrote it, be very careful about endian issues (little-endian vs. big-endian) and sizeof issues"), which also mentions a "header file that contains machine dependencies (I usually call it machine.h)".
What are these endianness and sizeof issues? (sizeof probably meaning that on one machine an int can be 4 bytes while on another it is fewer bytes, right?)
What would that machine.h file look like?
Is there some tutorial on internet which explains all these things, in an understandable way?
Sometimes in some source codes I also encounter typedefs like:
typedef unsigned long long u64_t;
is it related somehow to that machine.h file?
sizeof: on one architecture long is 64 bits, on another 32 bits.
endianness: let's assume a 4-byte long. The 4 bytes can be placed in memory in different orders: on Intel the least significant byte is at the lowest address, on Motorola or SPARC the order is the opposite, and there have even been processors with a 2301 (middle-endian) byte order.
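A minimal machine.h along those lines might look like the sketch below; the typedefs for non-Microsoft compilers assume a platform where short is 16 bits and int is 32 bits, which is exactly the kind of per-machine knowledge such a header encodes. With a modern compiler you would just use <cstdint> instead:

// machine.h - central place for machine dependencies (illustrative sketch)
#ifndef MACHINE_H
#define MACHINE_H

#if defined(_MSC_VER)
typedef unsigned __int16 u16_t;
typedef unsigned __int32 u32_t;
typedef unsigned __int64 u64_t;
#else
// assumption: on this platform short is 16 bits and int is 32 bits
typedef unsigned short     u16_t;
typedef unsigned int       u32_t;
typedef unsigned long long u64_t;
#endif

// byte order of the target, maintained by hand per platform
#define MACHINE_LITTLE_ENDIAN 1

#endif // MACHINE_H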

Forcing types to a specific size

I've been learning C++ and one thing that I'm not really comfortable with is the fact that datatype sizes are not consistent. Depending on what system something is deployed on an int could be 16 bits or 32 bits, etc.
So I was thinking it might be a good idea to make my own header file with data types like byte, word, etc. that are defined to be a specific size and will maintain that size on any platform.
Two questions. First, is this a good idea? Or is it going to create other problems I'm not aware of? Second, how do you define a type as being, say, 8 bits? I can't just say #define BYTE char, because char would vary across platforms.
Fortunately, other people have noticed this same problem. In C99 and C++11 (so set your compiler to one of those two modes; there should be a switch in your compiler settings), the headers stdint.h (for C) and cstdint (for C++) were added. If you #include <cstdint>, you get the types int8_t, int16_t, int32_t, int64_t, and the same prefixed with a u for the unsigned versions. If your platform supports those types, they will be defined in the header, along with several others.
If your compiler does not yet support that standard (or you are forced by reasons out of your control to remain on C++03), then there is also Boost.
However, you should only use this if you care exactly about the size of the type. int and unsigned are fine for throw-away variables in most cases. size_t should be used for indexing std::vector, etc.
First you need to figure out if you really care what sizes things are. If you are using an int to count the number of lines in a file, do you really care if it's 32-bit or 64? You need BYTE, WORD, etc if you are working with packed binary data, but generally not for any other reason. So you may be worrying over something that doesn't really matter.
Better yet, use the already defined stuff in stdint.h. See here for more details. Similar question here.
Example:
int32_t is always 32 bits.
Many libraries have their own .h with lots of typedefs to provide constant-size types. This is useful when writing portable code, and avoids relying on the headers of the platform you are currently working on.
If you only want to make sure the built-in data types have a minimum size, you can use std::numeric_limits in the header <limits> to check.
std::numeric_limits<int>::digits
will give you, for example, the number of bits of an int without the sign bit. And
std::numeric_limits<int>::max()
will give you the max value.
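Combining the two, a compile-time check might look like this sketch (note that digits counts value bits only, so a 32-bit signed int reports 31):

#include <limits>

// fail the build if int is narrower than this code assumes
static_assert(std::numeric_limits<int>::digits >= 31,
              "this code assumes int has at least 32 bits");

int main() {}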

C++: Datatypes, which to use and when?

I've been told that I should always use size_t when I want a 32-bit unsigned int. I don't quite understand why, but I think it has something to do with the fact that if someone compiles the program on a 16- or 64-bit machine, the unsigned int would become 16 or 64 bits, but size_t won't. But why not? And how can I force the bit sizes to be exactly what I want?
So, where is the list of which data type to use and when? For example, is there a size_t alternative to unsigned short? Or for a 32-bit int? etc. How can I be sure my data types have as many bits as I chose in the first place, without worrying about different bit sizes on other machines?
Mostly I care more about the memory used than about the marginal speed boost I'd get from doubling the memory usage, since I don't have much RAM. So I want to stop worrying about whether everything will break if my program is compiled on a machine that's not 32-bit. For now I've always used size_t when I want it to be 32-bit, but for short I don't know what to do. Someone help me clear my head.
On the other hand: if I need a 64-bit size variable, can I use it on a 32-bit machine successfully? And what is that data type's name (if I want it to be 64 bits always)?
size_t is for storing object sizes. It is exactly the right size for that and only that purpose - typically 4 bytes on 32-bit systems and 8 bytes on 64-bit systems. You shouldn't confuse it with unsigned int or any other data type. It might be equivalent to unsigned int or might not be, depending on the implementation (system bitness included).
Once you need to store something other than an object size you shouldn't use size_t and should instead use some other datatype.
As a side note: For containers, to indicate their size, don't use size_t, use container<...>::size_type
boost/cstdint.hpp can be used to be sure integers have the right size.
size_t is not necessarily 32-bit. It has been 16-bit with some compilers. It's 64-bit on a 64-bit system.
The C++ standard guarantees, via reference down to the C standard, that long is at least 32 bits.
int is only formally guaranteed 16 bits, but in practice I wouldn't worry: the chance that any ordinary code will be used on a 16-bit system is slim indeed, and on any 32-bit system int is 32-bit. Of course it's different if you're coding for a 16-bit system like some embedded computer. But in that case you'd probably be writing system-specific code anyway.
Where you need exact sizes you can use <stdint.h> if your compiler supports that header (it was introduced in C99, and the current C++ standard stems from 1998), or alternatively the corresponding Boost library header boost/cstdint.hpp.
However, in general, just use int. ;-)
Cheers & hth.,
size_t is not always 32-bit, e.g. it's 64-bit on 64-bit platforms.
For fixed-size integers, stdint.h is best. But it doesn't come with VS2008 or earlier - you have to download it separately. (It comes as a standard part of VS2010 and most other compilers).
Since you're using VS2008, you can use the MS-specific __int32, unsigned __int32 etc types. Documentation here.
To answer the 64-bit question: Most modern compilers have a 64-bit type, even on 32-bit systems. The compiler will do some magic to make it work. For Microsoft compilers, you can just use the __int64 or unsigned __int64 types.
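The portable spelling is long long (guaranteed to be at least 64 bits since C++11); here is a minimal sketch of 64-bit arithmetic that also works when compiled for a 32-bit target:

#include <cstdio>

int main()
{
    long long big = 4000000000LL; // doesn't fit in 32 bits
    big *= 2;                     // the compiler emits multi-word arithmetic
                                  // when targeting a 32-bit CPU
    std::printf("%lld\n", big);   // prints 8000000000
}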
Unfortunately, one of the quirks of the nature of data types is that it depends a great deal on which compiler you're using. Naturally, if you're only compiling for one target, there is no need to worry - just find out how large the type is using sizeof(...).
If you need to cross-compile, you could ensure compatibility by defining your own typedefs for each target (surrounded by #ifdef blocks referencing which target you're cross-compiling for).
If you're ever concerned that it could be compiled on a system that uses types with even weirder sizes than you have anticipated, you could always assert(sizeof(short)==2) or equivalent, so that you could guarantee at runtime that you're using the correctly sized types.
Your question is tagged visual-studio-2008, so I would recommend looking in the documentation for that compiler for pre-defined data types. Microsoft has a number that are predefined, such as BYTE, DWORD, and LARGE_INTEGER.
Take a look in windef.h and winnt.h for more.