Range of pointer values on 64 bit systems - c++

Recently I was reading about the small string optimization (SSO): What are the mechanics of short string optimization in libc++?. As we know, a string typically consists of 3 pointers, which is 24 bytes on a 64 bit system. The linked answer says that in libc++'s implementation, the very first bit of the first pointer is used to indicate whether the string is in "long" or "short" mode, i.e. heap allocation and external storage vs internal storage of up to some 22 characters.
This assumes, however, that the first bit of the first pointer can never meaningfully be part of the address, because whenever the string is in "long" mode, that bit will always be set (or unset, depending on which convention was chosen). This seems reasonable on its face, since 64-bit pointers allow 2^64 addresses, which is about 1.8 × 10^19 bytes — more than 16 billion gigabytes.
So this is reasonable, though not certain. My question is: is this guaranteed somewhere? And if it is guaranteed, where is it guaranteed? By the architecture spec, or by something else? To take it a step further: how many bits is it safe to do this with? I have a vague recollection of reading somewhere that only 48 bits are used, but I don't recall where.
If there are some number of bits, e.g. 8 or 16, that are guaranteed to be untouched, that is certainly something that could be leveraged in some interesting ways. It would be nice to exploit this, but not at the cost of having code mysteriously fail on some machine.

As we know, a string typically consists of 3 pointers, which is 24 bytes on a 64 bit system.
This is not true with libc++. The __long structure, for "long strings", is defined as:
struct __long
{
    size_type __cap_;
    size_type __size_;
    pointer   __data_;
};
The short flag therefore goes into the capacity field, making the whole thing moot.
As for pointer tagging, there is no universal guarantee about the size of a pointer. On x86_64, the page-table structures that the CPU uses for virtual address translation only cover 48 bits of virtual address (or 57 with 5-level paging), so virtual addresses never use the upper 16 (or 7) bits. Additionally, most operating systems map their kernel into every process and reserve some amount of the high end of the address space for it, so in practice, user-mode pointers are even more restricted. On Windows, the most significant hardware-usable bit of a pointer tells whether it belongs to kernel space or user space.
These limits can change in the future and will vary across platforms, so it would be bad form to use them in a platform-independent standard library. In general, it's much better practice to use the least-significant bits for pointer tagging, since your application is in control of these.
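To illustrate, here is a minimal sketch of low-bit tagging, with hypothetical names; it assumes only that Node's alignment is at least 2, so bit 0 of any valid Node* is zero and can carry a flag:
#include <cassert>
#include <cstdint>

struct Node { long payload; };  // alignof(Node) >= 2 on mainstream platforms

std::uintptr_t tag(Node* p, bool flag) {
    return reinterpret_cast<std::uintptr_t>(p) | static_cast<std::uintptr_t>(flag);
}
Node* untag(std::uintptr_t v) { return reinterpret_cast<Node*>(v & ~std::uintptr_t(1)); }
bool flag_of(std::uintptr_t v) { return v & 1u; }

int main() {
    Node n{42};
    std::uintptr_t t = tag(&n, true);
    assert(flag_of(t));               // the flag survives
    assert(untag(t)->payload == 42);  // the pointer reconstructs exactly
}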

The "long-bit" isn't part of a pointer, but of the capacity:
struct __long
{
    size_type __cap_;
    size_type __size_;
    pointer   __data_;
};
The "trick" is that if you always allocate an even number of characters and reserve one for the nul terminator, the resulting capacity will always be an odd number. And you get the 1-bit for free!

Related

Why do booleans take a whole byte?

In C++:
Why is a boolean 1 byte and not 1 bit in size?
Why aren't there types like 4-bit or 2-bit integers?
I found myself missing these things while writing an emulator for a CPU.
Because the CPU can't address anything smaller than a byte.
From Wikipedia:
Historically, a byte was the number of bits used to encode a single character of text in a computer, and it is for this reason the basic addressable element in many computer architectures.
So the byte is the basic addressable unit, below which a computer architecture cannot address. And since there (probably) don't exist computers which support a 4-bit byte, you don't have a 4-bit bool, etc.
However, if you design an architecture whose basic addressable unit is 4 bits, then you will have a 4-bit bool, on that computer only!
Back in the old days when I had to walk to school in a raging blizzard, uphill both ways, and lunch was whatever animal we could track down in the woods behind the school and kill with our bare hands, computers had much less memory available than today. The first computer I ever used had 6K of RAM. Not 6 megabytes, not 6 gigabytes, 6 kilobytes. In that environment, it made a lot of sense to pack as many booleans into an int as you could, and so we would regularly use operations to take them out and put them in.
Today, when people will mock you for having only 1 GB of RAM, and the only place you could find a hard drive with less than 200 GB is at an antique shop, it's just not worth the trouble to pack bits.
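For the curious, the pack-and-unpack operations described above look something like this (a minimal sketch; the names are mine):
#include <cassert>

// 32 booleans in one unsigned int, manipulated with masks and shifts.
void set_flag(unsigned& f, int i)   { f |= (1u << i); }
void clear_flag(unsigned& f, int i) { f &= ~(1u << i); }
bool test_flag(unsigned f, int i)   { return (f >> i) & 1u; }

int main() {
    unsigned flags = 0;
    set_flag(flags, 3);
    assert(test_flag(flags, 3));
    clear_flag(flags, 3);
    assert(!test_flag(flags, 3));
}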
The easiest answer is: it's because the CPU addresses memory in bytes and not in bits, and accessing individual bits costs extra shift and mask operations.
However, it is possible to use bit-sized allocation in C++. There's the std::vector<bool> specialization for bit vectors, and structs can take bit-sized entries via bit-fields.
Because a byte is the smallest addressable unit in the language.
But you can make a bool take 1 bit if you have a bunch of them, e.g. in a struct, like this:
struct A
{
    bool a:1, b:1, c:1, d:1, e:1;
};
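On a typical implementation those five one-bit fields share a single byte, though the standard leaves the exact packing to the implementation; a quick check:
#include <cstdio>

struct A
{
    bool a:1, b:1, c:1, d:1, e:1;
};

int main()
{
    std::printf("%zu\n", sizeof(A));  // usually prints 1; layout is implementation-defined
}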
You could have 1-bit bools and 4- and 2-bit ints. But that would make for a weird instruction set for no performance gain, because it's an unnatural way to look at the architecture. It actually makes sense to "waste" the better part of a byte rather than trying to reclaim that unused data.
The only app that bothers to pack several bools into a single byte, in my experience, is Sql Server.
You can use bit fields to get integers of smaller size.
struct X
{
    int val:4;  // 4 bit int.
};
Though it is usually used to map structures to bit patterns expected by hardware:
// 1 byte value (on a system where 8 bits is a byte)
struct SomeThing
{
    int p1:4;  // 4 bit field
    int p2:3;  // 3 bit field
    int p3:1;  // 1 bit
};
bool can be one byte -- the smallest addressable size on the CPU -- or bigger. It's not unusual for bool to be the size of int for performance purposes. If for specific purposes (say hardware simulation) you need a type with N bits, you can find a library for that (e.g. the GBL library has a BitSet<N> class). If you are concerned with the size of bool (you probably have a big container), then you can pack bits yourself, or use std::vector<bool>, which will do it for you (be careful with the latter, as it doesn't satisfy the container requirements).
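To see why std::vector<bool> falls short of the container requirements, note that its operator[] returns a proxy object rather than a bool&; a quick sketch:
#include <cassert>
#include <vector>

int main() {
    std::vector<bool> v(100, false);  // stores bits, not bool objects
    v[3] = true;                      // assignment goes through a proxy
    assert(v[3]);

    // bool* p = &v[3];               // does not compile: individual bits are
                                      // not addressable, so operator[] returns
                                      // std::vector<bool>::reference instead
    bool copy = v[3];                 // you can only copy the value out
    assert(copy);
}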
Think about how you would implement this at your emulator level...
bool a[10] = {false};
bool &rbool = a[3];
bool *pbool = a + 3;
assert(pbool == &rbool);
rbool = true;
assert(*pbool);
*pbool = false;
assert(!rbool);
Because in general, the CPU allocates memory with 1 byte as the basic unit, although some CPUs, like MIPS, favor a 4-byte word.
However, vector deals with bool in a special fashion: with vector<bool>, one bit is allocated for each bool.
The byte is the smallest unit of digital data storage in a computer. In a computer the RAM has millions of bytes, and any one of them has an address. If it had an address for every bit, a computer could manage 8 times less RAM than it can.
More info: Wikipedia
Even though the minimum size possible is 1 byte, you can have 8 bits of boolean information in 1 byte:
http://en.wikipedia.org/wiki/Bit_array
Julia language has BitArray for example, and I read about C++ implementations.
Bitwise operations are not 'slow'.
AND/OR operations tend to be fast.
The problem is alignment, and the cost of working around it.
CPUs, as some answers correctly note, read memory in byte-aligned units, and RAM is designed the same way. So compressing data to use less memory space has to be arranged explicitly.
As one answer suggested, you can declare a specific number of bits per value in a struct. However, what do the CPU and memory do afterward if the result is not aligned? Memory cannot be addressed at fractional offsets: there is +1, +2, or +4, but no +1.5 if you want one value to use half the bits. The hardware must pad the remaining space and then read the next aligned location, which is aligned by 1 at minimum and usually by 4 (32-bit) or 8 (64-bit) overall. The CPU grabs the byte or int value that contains your flags, and then you check or set the ones you need. So you must still define memory as int, short, byte, or another proper size, but when accessing and setting a value you can explicitly compress the data and store several flags in that one value to save space.
Many people are unaware of how this works, or skip the step even when they have on/off flag values, although saving space in sent and received memory is quite useful in mobile and other constrained environments. Splitting an int into bytes has little value in C++, since you can just define the bytes individually (e.g. int 4Bytes; vs. byte Byte1; byte Byte2; byte Byte3; byte Byte4;); in that case using int is redundant. In managed environments like Java, however, which define most types as int (numbers, booleans, etc.), you can take advantage of dividing an int up and using its bytes/bits for an ultra-efficient app that sends fewer integers of data (aligned by 4).
One could call managing bits redundant, but it is one of many optimizations where bitwise operations are superior even if not always needed; many programs cope with memory constraints by storing booleans as integers anyway, wasting 500%-1000% or so of the space. Combined with other optimizations, it still has its uses: on connections and data streams that move only bytes or a few KB at a time, reducing the bytes sent can decide whether everything loads at all, or loads fast. It is something worth doing when designing an app for mobile users, and something even big-corporation apps fail at nowadays. The difference between piling on packages and plugins that need hundreds of KB or 1 MB before anything loads, and an app designed for speed that needs only a few KB, is that the latter will load and act faster for users with data constraints, even if to you, loading wasteful megabytes of unneeded data feels fast.

Adding a 1 bit flag in a 64 bit pointer

Suppose you define a union for traversing a data structure either sequentially (start and end offsets) or via a pointer to a tree structure, depending on the number of data elements, on a 64-bit system where these unions are aligned with cache lines. Is it possible to add a one-bit flag in one of those 64 bits in order to know which traversal must be used, while still being able to reconstruct the correct pointer?
union {
    uint32_t offsets[2];
    Tree<NodeData> * tree;
};
It's system dependent, but I don't think any 64-bit system really uses its full pointer-length yet.
Also, if you know your data is 2^n-aligned, chances are those n low bits are just sitting idle there. (On some old systems they simply would not exist, but I don't think any of those were 64-bit systems, and anyway they are no longer of interest.)
As an example, x86_64 uses 48 bits; the upper 16 must be the same as bit 47 (sign-extended).
Another example: ARM64 uses 49 bits (two mappings of 48 bits at the same time), so there you only have 15 bits left.
Just remember to correct the pilfered bits. (You might want to use uintptr_t instead of a pointer, and convert back after the correction.)
Using a misaligned or impossible pointer causes behavior ranging from silent auto-correction, through silent misbehavior, to loud crashes.
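For the union in the question, a possible sketch, assuming x86_64-style 48-bit canonical addresses (decidedly non-portable, and the names here are illustrative): steal bit 63 as the "traverse via tree" flag and rebuild the pointer by sign-extending from bit 47.
#include <cassert>
#include <cstdint>

struct NodeData { int value; };
template <typename T> struct Tree { T data; };

// Bit 63 is clear in canonical user-space pointers, so it can hold the flag.
std::uint64_t tag_tree(Tree<NodeData>* p, bool tree_mode) {
    return reinterpret_cast<std::uint64_t>(p) | (std::uint64_t(tree_mode) << 63);
}
bool is_tree(std::uint64_t v) { return v >> 63; }

// Shift left then arithmetic-shift right to re-extend bit 47 through the top
// 16 bits; this also erases the flag. (Arithmetic behavior of >> on negative
// values is what mainstream compilers provide.)
Tree<NodeData>* restore(std::uint64_t v) {
    std::int64_t s = static_cast<std::int64_t>(v << 16) >> 16;
    return reinterpret_cast<Tree<NodeData>*>(s);
}

int main() {
    Tree<NodeData> t{{7}};
    std::uint64_t tagged = tag_tree(&t, true);
    assert(is_tree(tagged));
    assert(restore(tagged)->data.value == 7);
}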

Why is std::bitset<8> 4 bytes big?

It seems for std::bitset<1 to 32>, the size is set to 4 bytes. For sizes 33 to 64, it jumps straight up to 8 bytes. There can't be any overhead because std::bitset<32> is an even 4 bytes.
I can see aligning to byte length when dealing with bits, but why would a bitset need to align to word length, especially for a container most likely to be used in situations with a tight memory budget?
This is under VS2010.
The most likely explanation is that bitset is using a whole number of machine words to store the array.
This is probably done for memory bandwidth reasons: it is typically relatively cheap to read/write a word that's aligned at a word boundary. On the other hand, reading (and especially writing!) an arbitrarily-aligned byte can be expensive on some architectures.
Since we're talking about a fixed-sized penalty of a few bytes per bitset, this sounds like a reasonable tradeoff for a general-purpose library.
I assume that indexing into the bitset is done by grabbing a 32-bit value and then isolating the relevant bit, because this is fastest in terms of processor instructions (working with smaller-sized values is slower on x86). The two indexes needed for this can also be calculated very quickly:
int wordIndex = index >> 5;    // index / 32
int bitIndex  = index & 0x1F;  // index % 32
And then you can do this, which is also very fast:
int word = m_pStorage[wordIndex];
bool bit = ((word & (1 << bitIndex)) >> bitIndex) == 1;
Also, a maximum waste of 3 bytes per bitset is not exactly a memory concern IMHO. Consider that a bitset is already the most efficient data structure to store this type of information, so you would have to evaluate the waste as a percentage of the total structure size.
For 1025 bits this approach uses up 132 bytes instead of 129, for 2.3% overhead (and this goes down as the bitset size goes up). Sounds reasonable considering the likely performance benefits.
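A self-contained toy version of that indexing scheme (m_pStorage and the layout above are illustrative, not any particular library's internals):
#include <cassert>
#include <cstdint>

// Fixed-size bitset backed by 32-bit words, using the word-index / bit-index
// arithmetic discussed above.
struct TinyBitset {
    std::uint32_t words[4] = {};  // 128 bits

    void set(int index) {
        int wordIndex = index >> 5;    // index / 32
        int bitIndex  = index & 0x1F;  // index % 32
        words[wordIndex] |= (1u << bitIndex);
    }
    bool test(int index) const {
        return (words[index >> 5] >> (index & 0x1F)) & 1u;
    }
};

int main() {
    TinyBitset b;
    b.set(37);
    assert(b.test(37) && !b.test(36));
}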
The memory system on modern machines cannot fetch anything but words from memory, apart from some legacy operations that extract the desired bits. Hence, having bitsets aligned to words makes them a lot faster to handle, because you do not need to mask out the bits you don't need when accessing them. If you do not mask, doing something like
bitset<4> foo = 0;
if (foo) {
    // ...
}
will most likely fail. Apart from that, I remember reading some time ago that there was a way to cram several bitsets together, but I don't remember exactly. I think it was that when you have several bitsets together in a structure, they can take up "shared" memory, which is not applicable to most use cases of bitfields.
I examined the same feature in the AIX and Linux implementations. In AIX, internal bitset storage is char-based:
typedef unsigned char _Ty;
....
_Ty _A[_Nw + 1];
In Linux, internal storage is long based:
typedef unsigned long _WordT;
....
_WordT _M_w[_Nw];
For compatibility reasons, we modified the Linux version to use char-based storage.
Check which implementation you are using inside bitset.h.
Because a 32-bit Intel-compatible processor cannot access bytes individually (or rather, it can, by implicitly applying some bit masks and shifts), but only 32-bit words at a time.
If you declare
bitset<4> a, b, c;
then even if the library implements it as char, a, b and c will be 32-bit aligned, so the same wasted space exists. But the processor would be forced to pre-mask the bytes before letting the bitset code do its own masking.
For this reason MS used an int[1 + (N - 1) / 32] as the container for the bits.
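You can probe what your own standard library does; the numbers will vary by implementation (4/8 with the int-based storage described above for VS2010, other values with char- or long-based storage):
#include <bitset>
#include <cstdio>

int main() {
    std::printf("bitset<8>:  %zu bytes\n", sizeof(std::bitset<8>));
    std::printf("bitset<32>: %zu bytes\n", sizeof(std::bitset<32>));
    std::printf("bitset<33>: %zu bytes\n", sizeof(std::bitset<33>));
}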
Maybe because it's using int by default, and switches to long long if it overflows? (Just a guess...)
If your std::bitset< 8 > was a member of a structure, you might have this:
struct A
{
    std::bitset< 8 > mask;
    void * pointerToSomething;
};
If bitset<8> was stored in one byte (and the structure packed on 1-byte boundaries) then the pointer following it in the structure would be unaligned, which would be A Bad Thing. The only time when it would be safe and useful to have a bitset<8> stored in one byte would be if it was in a packed structure and followed by some other one-byte fields with which it could be packed together. I guess this is too narrow a use case for it to be worthwhile providing a library implementation.
Basically, in your octree, a single byte bitset would only be useful if it was followed in a packed structure by another one to three single-byte members. Otherwise, it would have to be padded to four bytes anyway (on a 32-bit machine) to ensure that the following variable was word-aligned.

Why is the size of a pointer 4 bytes in C++?

On a 32-bit machine, why is the size of a pointer 32 bits? Why not 16 bits or 64 bits? What are the pros and cons?
Because it mimics the size of the actual "pointers" in assembler. On a machine with a 64-bit address bus, it will be 64 bits. The old 6502 was an 8-bit machine, but it had a 16-bit address bus so that it could address 64K of memory. On most 32-bit machines, 32 bits were enough to address all the memory, so that's what the pointer size was in C++. I know that some of the early M68000 series chips only had a 24-bit memory address space, but it was addressed from a 32-bit register, so even on those the pointer would be 32 bits.
In the bad old days of the 80286, it was worse - there was a 16-bit address register and a 16-bit segment register. Some C++ compilers didn't hide that from you, and made you declare your pointers as near or far depending on whether you wanted to change the segment register. Mercifully, I've recycled most of those brain cells, so I forget whether near pointers were 16 bits - but at the machine level, they would be.
The size of a pointer in C++ is implementation-defined. C++ might run on anything from your toaster's chip up to huge mainframes. Different architectures require different sizes of the data types.
If on your implementation a pointer is 32bit, then that's very likely an architecture which can address 2^32 bytes. (Note that even the size of bytes might be different depending on the implementation.) 64bit architectures generally can address 2^64 bytes, so implementations on these architectures will likely have a pointer size of 64bit.
16 bit would obviously be insufficient - you could only address 64K then.
Why not emulate 64 bit on 32 bit systems - I guess because the performance of pointer arithmetic would degrade.
As mentioned in many other answers, the size of a pointer need not be 32-bits - the implementation will set the size of a pointer to be whatever the architecture of the platform dictates. On a system with 64-bit addressing, the size of a pointer will generally be 64-bits.
However, you should also note that even on a single implementation, different types of pointers might have different sizes. In particular, pointer-to-member types (which I'll grant are odd-ball pointers) may have different sizes than plain-old pointers to objects.
The same is true about pointers to plain old functions - they might have a different size than pointers to objects (this applies to C as well as C++). However on modern desktop systems you'll usually find that pointers to functions are the same size as pointers to objects.
Here's a short example of fun with pointer-to-member-functions:
#include <stdio.h>

class A {};
class B {};
class VirtD : public virtual A, public virtual B {
public:
    virtual int Dfunc() { return 5; }
};

typedef int (VirtD::*Derived_mfp)();

int main()
{
    VirtD virtd;
    Derived_mfp mfp = &VirtD::Dfunc;
    printf("sizeof(mfp) == %u\n", (unsigned int)sizeof(mfp));
}
Displays: sizeof(mfp) == 12 on MSVC.
The size of the pointer has little to do with the architecture (32-bit, 64-bit). "32-bit" usually refers to the fact that the register size is 32 bits. As a result, the maximum possible number of addresses you can reach using one register is 2^32. So it boils down to the efficiency of addressing memory slots using a register.
With a 32-bit pointer you can point to a wider range of memory than with 16-bit pointers. When 32-bit pointers were standardized, 64-bit CPUs were not very popular (or even existent?). Therefore a 64-bit pointer would not be able to fit inside the CPU register, which is a very important factor for speed.
Why not 16-bit? Because, presuming a flat 32-bit address space, you cannot address every byte. Far from it: you can only address 2^16 unique locations with a 16-bit pointer. Even if your pointers only point to dwords and not bytes, this still leaves 1073676288 dwords unaddressable.
Assuming a flat 32-bit address space, you can already address every single byte with a 32-bit pointer. At this point, 64-bit pointers are just wasting space, unless you want to add additional information to each pointer. For example, on 32-bit PowerPC, a function descriptor is actually a 96-bit entity, with one third pointing to the executable code and the rest being data that helps make relocating modules easier.
In a segmented address space, having larger-than-32-bit pointers to data could be useful. Windows NT on the DEC Alpha was a 32-bit operating system, but the Alpha hardware was 64-bit capable. Your ordinary address space was still 32-bit, but there were special APIs to allow 32-bit programs to access 64-bit addresses, as if they were in otherwise-inaccessible segments.
To answer your question: C++ itself says very little about the size of a pointer, and certainly not that it has to be 32 bits or anything. The size of a pointer should be the natural one for the machine architecture.

C++ : why bool is 8 bits long?

In C++, I'm wondering why the bool type is 8 bits long (on my system), when only one bit is enough to hold the boolean value?
I used to believe it was for performance reasons, but then on a 32 bits or 64 bits machine, where registers are 32 or 64 bits wide, what's the performance advantage ?
Or is it just one of these 'historical' reasons ?
Because every C++ data type must be addressable.
How would you create a pointer to a single bit? You can't. But you can create a pointer to a byte. So a boolean in C++ is typically byte-sized. (It may be larger as well. That's up to the implementation. The main thing is that it must be addressable, so no C++ datatype can be smaller than a byte)
Memory is byte addressable. You cannot address a single bit, without shifting or masking the byte read from memory. I would imagine this is a very large reason.
A boolean type normally follows the smallest unit of addressable memory of the target machine (i.e. usually the 8-bit byte).
Access to memory is always in "chunks" (multiples of words; this is for efficiency at the hardware level, bus transactions): a boolean bit cannot be addressed "alone" in most CPU systems. Of course, once the data is contained in a register, there are often specialized instructions to manipulate bits independently.
For this reason, it is quite common to use techniques of "bit packing" in order to increase efficiency when using "boolean" base data types. A technique such as an enum (in C) with power-of-2 coding is a good example. The same sort of trick is found in most languages.
Updated: Thanks to an excellent discussion, it was brought to my attention that sizeof(char) == 1 by definition in C++. Hence, addressing of a "boolean" data type is pretty much tied to the smallest unit of addressable memory (which reinforces my point).
The answers about 8-bits being the smallest amount of memory that is addressable are correct. However, some languages can use 1-bit for booleans, in a way. I seem to remember Pascal implementing sets as bit strings. That is, for the following set:
{1, 2, 5, 7}
You might have this in memory:
01100101
You can, of course, do something similar in C / C++ if you want. (If you're keeping track of a bunch of booleans, it could make sense, but it really depends on the situation.)
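For instance, the Pascal-style set {1, 2, 5, 7} above can be mirrored in C++ with an integer used as a bit string, where bit i is set exactly when i is a member (a minimal sketch):
#include <cassert>
#include <initializer_list>

int main() {
    unsigned char set = 0;
    for (int i : {1, 2, 5, 7})
        set |= static_cast<unsigned char>(1u << i);  // insert i into the set

    assert(  set & (1u << 5));   // 5 is a member
    assert(!(set & (1u << 4)));  // 4 is not
}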
I know this is old but I thought I'd throw in my 2 cents.
If you limit your boolean or data type to one bit, then your application is at risk for memory corruption. How do you handle error stats in memory that is only one bit long?
I went to a job interview and one of the statements the program lead said to me was, "When we send the signal to launch a missile, we just send a simple one-bit on/off signal via wireless. Sending one bit is extremely fast and we need that signal to be as fast as possible."
Well, it was a test to see if I understood the concepts of bits, bytes, and error handling. How easy would it be for a bad guy to send out a one-bit message? Or what happens if, during transmission, the bit gets flipped the other way?
Some embedded compilers have an int1 type that is used to bit-pack boolean flags (e.g. the CCS series of C compilers for Microchip MPUs). Setting, clearing, and testing these variables compiles to single bit-level instructions, but the compiler will not permit any other operations (e.g. taking the address of the variable), for the reasons noted in other answers.
Note, however, that std::vector<bool> is allowed to use bit-packing, i.e. to store the bits in smaller units than an ordinary bool. But it is not required.