uint64_t writes on a 32-bit machine - C++

When two processes communicate via shared memory on 32-bit Solaris (i386 arch):
Is it guaranteed that a uint64_t value < 2^32 is written to a single memory location, while a value > 2^32 is written to two memory locations?
Is a 32-bit memory read atomic?

A 64-bit value is always written into 64 bits of memory.¹ On a 32-bit machine the write is almost certainly not atomic (unless the architecture explicitly guarantees that it is).
1. Except, of course, when it's not written to memory at all, i.e. when the value never leaves registers (no register spill). But that's beside the point.
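If atomicity is what the question is really after, here is a minimal sketch of the portable route (C++11). This is an illustration, not the only option: placing the atomic in process-shared memory is only sensible if it is lock-free, which the sketch checks.

#include <atomic>
#include <cstdint>
#include <cstdio>

int main()
{
    // On i386, a lock-free 64-bit atomic is typically implemented with
    // CMPXCHG8B, so loads and stores of the whole 64 bits are atomic even
    // though plain uint64_t accesses would be split into two 32-bit ones.
    std::atomic<std::uint64_t> value{0};

    // Only a lock-free atomic is usable in memory shared between
    // processes; a lock-based fallback would not work across them.
    std::printf("lock-free: %d\n", (int)value.is_lock_free());

    value.store(0x0000000100000000ull);  // > 2^32, still one atomic write
    std::printf("%llu\n", (unsigned long long)value.load());
    return 0;
}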

Related

Why does my compiler use an 8-bit char when I'm running on a 64-bit machine?

I am using the Microsoft Visual Studio 2013 IDE. When I compile a program in C++ while using the header <climits>, I output the macro constant CHAR_BIT to the screen. It tells me there are 8-bits in my char data type (which is 1-byte in C++). However, Visual Studio is a 32-bit application and I am running it on a 64-bit machine (i.e. a machine whose processor has a 64-bit instruction set and operating system is 64-bit Windows 7).
I don't understand why my char data type uses only 8-bits. Shouldn't it be using at least 32-bits (since my IDE is a 32-bit application), let alone 64-bits (since I'm compiling on a 64-bit machine)?
I am told that the number of bits used in a memory address (1-byte) depends on the hardware and implementation. If that's the case, why does my memory address still only use 8-bits and not more?
I think you are confusing memory address bit-width with data value bit-width. Memory addresses (pointers) are 32 bits for 32-bit programs and 64 bits for 64-bit programs. But data types have different widths for their values depending on type (as governed by the standard). So a char is 8-bits, but a char* will be 32-bits if you are compiling as a 32-bit application (also note here it depends on how you compile the application and not what type of processor or OS you are running on).
Edit for questions:
However, what is the relationship between these two?
Memory addresses will always have the same bit width regardless of what data value is stored there.
For example, if I have a 32-bit address and I assign an 8-bit value to that address, does that mean there are 24-bits of unused address space?
Some code (assume 32-bit compilation):
char i_am_1_byte = 0x00; // an 8-bit data value that lives in memory
char* i_am_a_ptr = &i_am_1_byte; // pointer is 32-bits and points to an 8-bit data value
*i_am_a_ptr = 0xFF; // writes 0xFF to the location pointed to by the pointer
// that is, to i_am_1_byte
So we have i_am_1_byte, which is a char and takes up 8 bits somewhere in memory. We can get this memory location using the address-of operator & and store it in the pointer variable i_am_a_ptr, which is your 32-bit address. We can then write 8 bits of data to the location pointed to by i_am_a_ptr by dereferencing it.
If not, what is the bit-width of a memory address actually used for?
All the data that your program uses must be located somewhere in memory and each location has an address. Most programs probably will not use most of the memory available for them to use, but we need a way to address every possible location.
how can having more memory address bit-width be helpful?
That depends on how much data you need to work with. A 32-bit program can address at most 4GB of memory space (and this may be smaller depending on your OS). That used to be a very, very large amount of memory, but these days it is conceivable a program could run out. It is also a lot easier for the CPU to address more than 4GB of RAM if it is 64-bit (this gets into the difference between physical memory and virtual memory). Of course, 64-bit architecture means a lot more than just bigger addresses and brings many benefits that may be more useful to programs than the bigger memory space.
An interesting fact is that on some processors, such as 32-bit ARM, most memory accesses are word-aligned. That is, compilers tend to allocate 32 bits (4 bytes) for a data type even when it needs fewer than 4 bytes, unless otherwise stated in the source code. This happens because ARM architectures are optimized for word-aligned memory access.
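A quick way to see what your own compiler does is to query sizeof and alignof (C++11). A minimal sketch (the exact numbers printed depend on the target, so treat them as typical, not guaranteed):

#include <cstdio>

struct Padded {
    char c;   // 1 byte of payload...
    int  i;   // ...but int wants 4-byte alignment, so 3 padding bytes follow c
};

int main()
{
    std::printf("alignof(int)   = %u\n", (unsigned)alignof(int));
    std::printf("sizeof(Padded) = %u\n", (unsigned)sizeof(Padded)); // typically 8, not 5
    return 0;
}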

Adding a 1 bit flag in a 64 bit pointer

Suppose that, on a 64-bit system, you define a union for traversing a data structure either sequentially (start and end offsets) or via a pointer to a tree structure, depending on the number of data elements, and these unions are aligned with cache lines. Is it possible to add a one-bit flag in one of those 64 bits, to record which traversal must be used, while still being able to reconstruct the right pointer?
union {
uint32_t offsets[2];
Tree<NodeData> * tree;
};
It's system-dependent, but I don't think any 64-bit system really uses its full pointer width yet.
Also, if you know your data is 2^n-aligned, chances are those n low bits are just sitting idle there. (On some old systems they would simply not exist, but I don't think any of those were 64-bit systems, and anyway they are no longer of interest.)
As an example, x86_64 uses 48 bits; the upper 16 bits must be copies of bit 47 (sign-extended).
Another example: ARM64 uses 49 bits (two 48-bit mappings active at the same time), so there you only have 15 bits left.
Just remember to correct the pilfered bits before use. (You might want to work with uintptr_t instead of a pointer, and convert back only after the correction.)
Using a misaligned or impossible pointer causes behavior ranging from silent auto-correction, through silent misbehavior, to loud crashes.
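As a sketch of the bit-stealing itself (the helper names are illustrative, and it assumes Tree<NodeData> objects are at least 2-byte aligned, so bit 0 of any genuine pointer is always zero and can hold the flag):

#include <cstdint>

// Illustrative helpers: bit 0 set means "interpret the 64 bits as offsets",
// bit 0 clear means "this is a real pointer".
inline std::uintptr_t set_flag(std::uintptr_t bits) { return bits | 1u; }
inline bool has_flag(std::uintptr_t bits) { return (bits & 1u) != 0; }

template <class T>
T* reconstruct_pointer(std::uintptr_t bits)
{
    // Correct the pilfered bit before converting back to a pointer.
    return reinterpret_cast<T*>(bits & ~static_cast<std::uintptr_t>(1));
}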

Why is std::bitset<8> 4 bytes big?

It seems for std::bitset<1 to 32>, the size is set to 4 bytes. For sizes 33 to 64, it jumps straight up to 8 bytes. There can't be any overhead because std::bitset<32> is an even 4 bytes.
I can see aligning to byte length when dealing with bits, but why would a bitset need to align to word length, especially for a container most likely to be used in situations with a tight memory budget?
This is under VS2010.
The most likely explanation is that bitset is using a whole number of machine words to store the array.
This is probably done for memory bandwidth reasons: it is typically relatively cheap to read/write a word that's aligned at a word boundary. On the other hand, reading (and especially writing!) an arbitrarily-aligned byte can be expensive on some architectures.
Since we're talking about a fixed-sized penalty of a few bytes per bitset, this sounds like a reasonable tradeoff for a general-purpose library.
I assume that indexing into the bitset is done by grabbing a 32-bit word and then isolating the relevant bit, because this is fastest in terms of processor instructions (working with smaller-sized values is slower on x86). The two indices needed for this can also be calculated very quickly:
int wordIndex = index >> 5;   // index / 32: which 32-bit word
int bitIndex = index & 0x1f;  // index % 32: which bit within that word
And then you can do this, which is also very fast:
int word = m_pStorage[wordIndex];
bool bit = ((word & (1 << bitIndex)) >> bitIndex) == 1;
Also, a maximum waste of 3 bytes per bitset is not exactly a memory concern IMHO. Consider that a bitset is already the most efficient data structure to store this type of information, so you would have to evaluate the waste as a percentage of the total structure size.
For 1025 bits this approach uses 132 bytes instead of 129, for 2.3% overhead (and this goes down as the bitset size goes up). Sounds reasonable considering the likely performance benefits.
The memory system on modern machines cannot fetch anything but whole words from memory, apart from some legacy instructions that extract the desired bits. Hence, having bitsets aligned to words makes them a lot faster to handle, because you do not need to mask out the bits you don't need when accessing them. If the implementation did not mask, doing something like
bitset<4> foo = 0;
if (foo.any()) {
    // ...
}
would most likely fail, because the unused bits of the word could contain garbage. Apart from that, I remember reading some time ago that there was a way to cram several bitsets together, but I don't remember exactly. I think it was that when you have several bitsets together in a structure, they can take up "shared" memory, which is not applicable to most use cases of bitfields.
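A minimal sketch of the masking idea (the uint32_t backing word and the mask are illustrative, not any particular library's internals):

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint32_t storage = 0;                 // word backing a 4-bit set
    const std::uint32_t mask = (1u << 4) - 1;  // 0x0F: the four bits we own

    storage = ~storage;   // e.g. implementing flip() on all four bits
    storage &= mask;      // without this, the 28 unused bits stay set and
                          // an "is anything set?" test on the word would lie
    std::printf("%08x\n", storage);  // prints 0000000f
    return 0;
}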
I have seen the same thing in the AIX and Linux implementations. In AIX, the internal bitset storage is char-based:
typedef unsigned char _Ty;
....
_Ty _A[_Nw + 1];
In Linux, the internal storage is long-based:
typedef unsigned long _WordT;
....
_WordT _M_w[_Nw];
For compatibility reasons, we modified the Linux version to use char-based storage.
Check which implementation you are using inside bitset.h.
Because a 32-bit Intel-compatible processor works most efficiently with whole 32-bit words at a time (byte accesses are implemented by implicitly applying bit masks and shifts).
If you declare
bitset<4> a, b, c;
then even if the library implements it as char, a, b, and c will each be 32-bit aligned, so the same wasted space exists. But the processor would be forced to pre-mask the bytes before letting the bitset code do its own masking.
For this reason MS used an int[1 + (N - 1)/32] as the container for the bits.
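A sketch of that storage arithmetic (the helper name is made up, and this just restates the 1 + (N-1)/32 formula):

#include <cstddef>

// Rounds a bit count up to whole 32-bit words.
constexpr std::size_t words_for_bits(std::size_t n)
{
    return 1 + (n - 1) / 32;  // assumes n >= 1
}

static_assert(words_for_bits(1)    == 1,  "1..32 bits fit in one word");
static_assert(words_for_bits(33)   == 2,  "33..64 bits need two words");
static_assert(words_for_bits(1025) == 33, "33 words = 132 bytes (vs 129 as raw bytes)");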
Maybe because it's using int by default, and switches to long long if it overflows? (Just a guess...)
If your std::bitset< 8 > was a member of a structure, you might have this:
struct A
{
    std::bitset< 8 > mask;
    void * pointerToSomething;
};
If bitset<8> was stored in one byte (and the structure packed on 1-byte boundaries) then the pointer following it in the structure would be unaligned, which would be A Bad Thing. The only time when it would be safe and useful to have a bitset<8> stored in one byte would be if it was in a packed structure and followed by some other one-byte fields with which it could be packed together. I guess this is too narrow a use case for it to be worthwhile providing a library implementation.
Basically, in your octree, a single byte bitset would only be useful if it was followed in a packed structure by another one to three single-byte members. Otherwise, it would have to be padded to four bytes anyway (on a 32-bit machine) to ensure that the following variable was word-aligned.
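To make the padding concrete, here is a small sketch; the sizes in the comments are typical of a 32-bit build with a 4-byte bitset (as discussed above), not guaranteed by the standard:

#include <bitset>
#include <cstdio>

struct A
{
    std::bitset<8> mask;       // 4 bytes under the implementation discussed here
    void* pointerToSomething;  // must be pointer-aligned
};

int main()
{
    // Typically prints 8 on a 32-bit build: even if the bitset were a
    // single byte, alignment padding before the pointer would bring the
    // struct back up to the same size.
    std::printf("sizeof(A) = %u\n", (unsigned)sizeof(A));
    return 0;
}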

Why is the size of a pointer 4 bytes in C++?

On a 32-bit machine, why is the size of a pointer 32 bits? Why not 16 bits or 64 bits? What are the pros and cons?
Because it mimics the size of the actual "pointers" in assembler. On a machine with a 64-bit address bus, it will be 64 bits. The old 6502 was an 8-bit machine, but it had a 16-bit address bus so that it could address 64K of memory. On most 32-bit machines, 32 bits were enough to address all the memory, so that's what the pointer size was in C++. I know that some of the early M68000-series chips only had a 24-bit memory address space, but it was addressed from a 32-bit register, so even on those the pointer would be 32 bits.
In the bad old days of the 80286, it was worse: there was a 16-bit address register and a 16-bit segment register. Some C++ compilers didn't hide that from you, and made you declare your pointers as near or far depending on whether you wanted to change the segment register. Mercifully, I've recycled most of those brain cells, so I forget whether near pointers were 16 bits - but at the machine level, they certainly would be.
The size of a pointer in C++ is implementation-defined. C++ might run on anything from your toaster's chip up to huge mainframes. Different architectures require different sizes of the data types.
If on your implementation a pointer is 32bit, then that's very likely an architecture which can address 2^32 bytes. (Note that even the size of bytes might be different depending on the implementation.) 64bit architectures generally can address 2^64 bytes, so implementations on these architectures will likely have a pointer size of 64bit.
16 bit would obviously be insufficient - you could only address 64K then.
Why not emulate 64 bit on 32 bit systems - I guess because the performance of pointer arithmetic would degrade.
As mentioned in many other answers, the size of a pointer need not be 32-bits - the implementation will set the size of a pointer to be whatever the architecture of the platform dictates. On a system with 64-bit addressing, the size of a pointer will generally be 64-bits.
However, you should also note that even on a single implementation, different types of pointers might have different sizes. In particular, pointer-to-member types (which I'll grant are odd-ball pointers) may have different sizes than plain-old pointers to objects.
The same is true about pointers to plain old functions - they might have a different size than pointers to objects (this applies to C as well as C++). However on modern desktop systems you'll usually find that pointers to functions are the same size as pointers to objects.
Here's a short example of fun with pointer-to-member-functions:
#include <stdio.h>

class A {};
class B {};

class VirtD : public virtual A, public virtual B {
public:
    virtual int Dfunc() { return 5; }
};

typedef int (VirtD::*Derived_mfp)();

int main()
{
    VirtD virtd;
    Derived_mfp mfp = &VirtD::Dfunc;
    printf("sizeof(mfp) == %u\n", (unsigned int)sizeof(mfp));
}
Displays sizeof(mfp) == 12 on MSVC.
The size of the pointer has little to do with the architecture label (32-bit, 64-bit) per se. "32-bit" usually refers to the fact that the register size is 32 bits. As a result, the maximum number of addresses you can form in one register is 2^32. So it boils down to the efficiency of addressing memory through a register.
With a 32-bit pointer you can point to a wider range of memory than with 16-bit pointers. When 32-bit pointers were standardized, 64-bit CPUs were not very popular (or even existent?), so a 64-bit pointer would not have fit inside a CPU register, and keeping pointers in registers is a very important factor for speed.
Why not 16-bit? Because, presuming a flat 32-bit address space, you cannot address every byte. Far from it: you can only address 2^16 unique locations with a 16-bit pointer. Even if your pointers only point to dwords and not bytes, this still leaves 1,073,676,288 dwords unaddressable.
Assuming a flat 32-bit address space, you can already address every single byte with a 32-bit pointer. At this point, 64-bit pointers are just wasting space, unless you want to add additional information to each pointer. For example, on 32-bit PowerPC, a function descriptor is actually a 96-bit entity, with one third pointing to the executable code and the rest being data that helps make relocating modules easier.
In a segmented address space, having larger-than-32-bit pointers to data could be useful. Windows NT on the DEC Alpha was a 32-bit operating system, but the Alpha hardware was 64-bit capable. Your ordinary address space was still 32-bit, but there were special APIs to allow 32-bit programs to access 64-bit addresses, as if they were in otherwise-inaccessible segments.
To answer your question: C++ itself says very little about the size of a pointer, and certainly not that it has to be 32 bits or anything. The size of a pointer should be the natural one for the machine architecture.

Is the sizeof(some pointer) always equal to four? [duplicate]

This question already has answers here:
Do all pointers have the same size in C++?
For example:
sizeof(char*) returns 4. As does int*, long long*, everything that I've tried. Are there any exceptions to this?
The guarantee you get is that sizeof(char) == 1. There are no other guarantees, including no guarantee that sizeof(int *) == sizeof(double *).
In practice, pointers will be size 2 on a 16-bit system (if you can find one), 4 on a 32-bit system, and 8 on a 64-bit system, but there's nothing to be gained in relying on a given size.
Even on a plain x86 32 bit platform, you can get a variety of pointer sizes, try this out for an example:
#include <iostream>
using namespace std;

struct A {};
struct B : virtual public A {};
struct C {};
struct D : public A, public C {};

int main()
{
    cout << "A:" << sizeof(void (A::*)()) << endl;
    cout << "B:" << sizeof(void (B::*)()) << endl;
    cout << "D:" << sizeof(void (D::*)()) << endl;
}
Under Visual C++ 2008, I get 4, 12 and 8 for the sizes of the pointers-to-member-function.
Raymond Chen talked about this here.
Just another exception to the already posted list. On 32-bit platforms, pointers can take 6, not 4, bytes:
#include <stdio.h>
#include <stdlib.h>

int main() {
    char far* ptr;                    /* note that this is a far pointer */
    printf("%d\n", (int)sizeof(ptr)); /* cast: sizeof yields size_t */
    return EXIT_SUCCESS;
}
If you compile this program with Open Watcom and run it, you'll get 6, because the far pointers it supports consist of a 32-bit offset and a 16-bit segment value.
if you are compiling for a 64-bit machine, then it may be 8.
Technically speaking, the C standard only guarantees that sizeof(char) == 1, and the rest is up to the implementation. But on modern x86 architectures (e.g. Intel/AMD chips) it's fairly predictable.
You've probably heard processors described as being 16-bit, 32-bit, 64-bit, etc. This usually means that the processor uses N bits for integers. Since pointers store memory addresses, and memory addresses are integers, this effectively tells you how many bits are going to be used for pointers. sizeof reports sizes in bytes, so code compiled for 32-bit processors will report the size of pointers to be 4 (32 bits / 8 bits per byte), and code for 64-bit processors will report the size of pointers to be 8 (64 bits / 8 bits per byte). This is where the limit of 4GB of RAM for 32-bit processors comes from: if each memory address corresponds to a byte, then to address more memory you need integers larger than 32 bits.
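A one-liner to check what your own build uses, assuming only that CHAR_BIT from <climits> gives the bits per byte:

#include <climits>
#include <cstdio>

int main()
{
    // Bytes per pointer times bits per byte: prints 32 for a 32-bit
    // build and 64 for a 64-bit one.
    std::printf("pointer width: %u bits\n", (unsigned)(sizeof(void*) * CHAR_BIT));
    return 0;
}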
The size of the pointer basically depends on the architecture of the system on which it is implemented. For example, a pointer is 4 bytes (32 bits) on a 32-bit machine and 8 bytes (64 bits) on a 64-bit machine. The bitness of a machine determines the address space it can have: a 32-bit machine can address up to 2^32 locations and a 64-bit machine up to 2^64. So a pointer (a variable which points to a memory location) should be able to point to any of the addresses (2^32 for 32-bit, 2^64 for 64-bit) that the machine holds.
For this reason we see pointers of 4 bytes on a 32-bit machine and 8 bytes on a 64-bit machine.
In addition to the 16/32/64 bit differences even odder things can occur.
There have been machines where sizeof(int *) will be one value, probably 4 but where sizeof(char *) is larger. Machines that naturally address words instead of bytes have to "augment" character pointers to specify what portion of the word you really want in order to properly implement the C/C++ standard.
This is now very unusual as hardware designers have learned the value of byte addressability.
8-bit and 16-bit pointers are used in most low-profile microcontrollers. That means every washing machine, microwave, fridge, older TV, and even car.
You could say these have nothing to do with real-world programming.
But here is one real-world example:
an Arduino, with 1, 2 or 4 KB of RAM (depending on the chip) and 2-byte pointers.
It's recent, cheap, accessible to everyone, and worth coding for.
In addition to what people have said about 64-bit (or whatever) systems, there are other kinds of pointer than pointer-to-object.
A pointer-to-member might be almost any size, depending how they're implemented by your compiler: they aren't necessarily even all the same size. Try a pointer-to-member of a POD class, and then a pointer-to-member inherited from one of the base classes of a class with multiple bases. What fun.
From what I recall, it's based on the size of a memory address. So on a system with a 32-bit address scheme, sizeof will return 4, since that's 4 bytes.
In general, sizeof(pretty much anything) will change when you compile on different platforms. On a 32 bit platform, pointers are always the same size. On other platforms (64 bit being the obvious example) this can change.
No, the size of a pointer may vary depending on the architecture. There are numerous exceptions.
The size of a pointer (and of int) is 2 bytes with the Turbo C compiler on a 32-bit Windows machine.
So the size of a pointer is compiler-specific. Generally, though, most compilers implement 4-byte pointers on 32-bit machines and 8-byte pointers on 64-bit machines.
So the size of a pointer is not the same on all machines.
On Win64 (Cygwin GCC 5.4), let's look at the example below.
First, test with the following structs:
struct list_node {
    int a;
    list_node* prev;
    list_node* next;
};

struct test_struc {
    char a, b;
};
The test code is below:
#include <iostream>

int main() {
    std::cout << "sizeof(int): " << sizeof(int) << std::endl;
    std::cout << "sizeof(int*): " << sizeof(int*) << std::endl;
    std::cout << std::endl;
    std::cout << "sizeof(double): " << sizeof(double) << std::endl;
    std::cout << "sizeof(double*): " << sizeof(double*) << std::endl;
    std::cout << std::endl;
    std::cout << "sizeof(list_node): " << sizeof(list_node) << std::endl;
    std::cout << "sizeof(list_node*): " << sizeof(list_node*) << std::endl;
    std::cout << std::endl;
    std::cout << "sizeof(test_struc): " << sizeof(test_struc) << std::endl;
    std::cout << "sizeof(test_struc*): " << sizeof(test_struc*) << std::endl;
}
The output is below:
sizeof(int): 4
sizeof(int*): 8
sizeof(double): 8
sizeof(double*): 8
sizeof(list_node): 24
sizeof(list_node*): 8
sizeof(test_struc): 2
sizeof(test_struc*): 8
You can see that in 64-bit, sizeof(pointer) is 8.
The reason the size of your pointer is 4 bytes is because you are compiling for a 32-bit architecture. As FryGuy pointed out, on a 64-bit architecture you would see 8.
A pointer is just a container for an address. On a 32-bit machine, your address range is 32 bits, so a pointer will always be 4 bytes. On a 64-bit machine, where you have an address range of 64 bits, a pointer will be 8 bytes.
Just for completeness and historical interest: in the 64-bit world there were different platform conventions for the sizes of the long and long long types, named LLP64 and LP64, mainly split between Unix-type systems and Windows. An old convention named ILP64 also made int 64 bits wide.
Microsoft went with LLP64, where long long is 64 bits wide but long remains at 32, for easier porting.
Type        ILP64   LP64   LLP64
char            8      8       8
short          16     16      16
int            64     32      32
long           64     64      32
long long      64     64      64
pointer        64     64      64
Source: https://stackoverflow.com/a/384672/48026
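If you want to detect which of the data models in the table above your build uses, here is a hedged compile-time sketch. It assumes 8-bit bytes and uses only the sizes the models mandate; the constant names are made up:

#include <climits>

static_assert(CHAR_BIT == 8, "the checks below assume 8-bit bytes");

constexpr bool is_ilp32 = sizeof(int) == 4 && sizeof(long) == 4 && sizeof(void*) == 4; // common 32-bit
constexpr bool is_lp64  = sizeof(long) == 8 && sizeof(void*) == 8;                     // typical 64-bit Unix
constexpr bool is_llp64 = sizeof(long) == 4 && sizeof(void*) == 8;                     // 64-bit Windows

int main() { return (is_ilp32 || is_lp64 || is_llp64) ? 0 : 1; }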