Here's my code:
#define MSK 0x0F
#define UNT 1
#define N 3000000000

unsigned char aln[1+N];
unsigned char pileup[1+N];

void set(unsigned long i)
{
    if ((aln[i] & MSK) != MSK) {
        aln[i] += UNT;
    }
}

int main(void) {}
When I try to compile it, the compiler complains like this:
/tmp/ccJ4IgSa.o: In function `set':
bitmacs.c:(.text+0xf): relocation truncated to fit: R_X86_64_32S against symbol `aln' defined in COMMON section in /tmp/ccJ4IgSa.o
bitmacs.c:(.text+0x29): relocation truncated to fit: R_X86_64_32S against symbol `aln' defined in COMMON section in /tmp/ccJ4IgSa.o
bitmacs.c:(.text+0x32): relocation truncated to fit: R_X86_64_32S against symbol `aln' defined in COMMON section in /tmp/ccJ4IgSa.o
I think the reason may be that N is too big, because it compiles successfully if I change N to 2000000000. But I need 3000000000 as the value of N.
Does anyone have an idea about this?
Per your original question: use the integer literal suffix UL (or similar) to force the storage type of N:
#define N 3000000000UL
However, (per your comment on HLundvall's answer) the relocation truncated to fit error obviously isn't due to this - it may (as Mystical and Matt Lacey say) simply be too big to fit in the segment.
As an aside, if you ask a separate question explaining what you're trying to accomplish with your huge arrays, someone may be able to suggest a better solution (one that is more likely to fit in memory).
For example:
your sample code only uses the low nibble of each byte: you could pack two entries per byte and halve the size (which is admittedly still much too large) - see the sketch after this list
depending on your access patterns, you might be able to keep the array on disk and cache a working subset in memory
there may be better overall algorithms and data structures if we knew what you needed
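For instance, here is a minimal sketch of the nibble-packing idea (illustrative only, with hypothetical names; it also allocates on the heap rather than as a global):

#include <stdlib.h>

#define N 3000000000UL

static unsigned char *aln;   /* (N + 2) / 2 bytes: two 4-bit counters per byte */

void set(unsigned long i)
{
    unsigned char *p = &aln[i >> 1];
    unsigned shift = (i & 1) ? 4 : 0;              /* high or low nibble */
    unsigned v = (*p >> shift) & 0x0F;
    if (v != 0x0F)                                 /* saturate at 15 */
        *p = (unsigned char)((*p & ~(0x0F << shift)) | ((v + 1) << shift));
}

int main(void)
{
    aln = (unsigned char *)calloc((N + 2) / 2, 1); /* zero-initialised */
    if (!aln)
        return 1;                                  /* handle allocation failure */
    set(12345UL);
    free(aln);
    return 0;
}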
Disregarding the "formal" problem that your numeric literal isn't of the correct type (see the other answers for the correct syntax), the key point here is that it's a very bad idea to allocate a 3 GB static/global array.
static and global¹ variables on most platforms are mapped directly from the executable image, which means that your executable would have to be as big as 3 GB, which is quite big even by current-day standards. Even if on some platforms this limitation may be lifted (see the comments), you don't have any control over how to handle a failure of allocation.
Most importantly, global variables are not intended for such big stuff, and you are likely to find problems with arbitrary limits imposed by the linker (such as the one you found) and the loader. Instead, you should allocate anything that's bigger than a few KBs on the heap, using malloc, new or some platform-specific function, handling gracefully the possible failure at runtime.
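A minimal sketch of that approach (not the asker's final code; the error handling is deliberately simple):

#include <stdio.h>
#include <stdlib.h>

#define N 3000000000UL

int main(void)
{
    unsigned char *aln = (unsigned char *)calloc(N + 1, 1); /* zero-initialised */
    if (aln == NULL) {
        fprintf(stderr, "could not allocate %lu bytes\n", N + 1);
        return 1;
    }
    /* ... use aln[i] exactly as before ... */
    free(aln);
    return 0;
}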
Still, keep in mind that for an application running under almost any 32 bit operating system it's not possible to get 3 GB of contiguous memory as you request, and it's impossible altogether to get more than one of these arrays (=more than 4 GB of contiguous memory) without resorting to platform-specific tricks (e.g. mapping only specific parts of the arrays in memory at a given moment).
Also, are you sure that you need all that contiguous memory right from when your program starts to run? Isn't there some better data structure/algorithm that could avoid allocating all that memory?
¹ In general, what the standard calls variables with static storage duration.
To enter a numeric constant of type unsigned long use:
#define N 3000000000UL
The problem is that gcc (by default) uses pc-relative accesses to get the address of static data objects on x86_64 targets, and those accesses are limited to 2^31 bytes maximum. So if the symbol ends up getting placed more than 2GB away from the code that accesses it, you'll end up getting this link error when it tries to use an offset that is too big to fit in the 32 bits of space allowed in the instruction.
You can avoid this problem by using the -mcmodel=large option to gcc. This tells it not to assume that it can use 32-bit PC-relative offsets to access symbols (among other things).
Note that the type suffix of the constant literal is mostly irrelevant -- a constant literal that is too big for an int will automatically become a long (or even long long if needed) without any suffix. See 6.4.4.1.5 of the C99 spec.
Your executable is trying to put objects in memory past the 4GB mark, which is not allowed. See this link: http://www.technovelty.org/code/c/relocation-truncated.html.
From the article: "If you're seeing this and you're not hand-coding, you probably want to check out the -mmodel argument to gcc."
For example:
#include <stdio.h>
int main (void) /* Why int and not short int? - Waste of Memory */
{
printf("Hello World!");
return 0;
}
Why is main() conventionally defined with the int type, which occupies 4 bytes in memory on a 32-bit system, if it usually returns only 0 or 1, when other types such as short int (2 bytes on 32-bit) or even char (1 byte on 32-bit) would be more memory-saving?
It is wasting memory space.
NOTE: This question is not a duplicate of the thread given; its answers only address the return value itself, not its data type, which is the explicit focus here.
The question is about both C and C++. If the answers differ between them, please mention which language you are referring to.
Usually a machine register is used to return a value (for example the AX register on Intel processors). The type int corresponds to the machine word, so no conversion is required, as there would be if, for example, a byte corresponding to the type char had to be widened to the machine word.
And in fact, main can return any integer value.
It's because of a machine that's half a century old.
Back in the day when C was created, an int was a machine word on the PDP-11 - sixteen bits - and it was natural and efficient to have main return that.
The "machine word" was the only type in the B language, which Ritchie and Thompson had developed earlier, and which C grew out of.
When C added types, not specifying one gave you a machine word - an int.
(It was very important at the time to save space, so not requiring the most common type to be spelled out was a Very Good Thing.)
So, since a B program started with
main()
and programmers are generally language-conservative, C did the same and returned an int.
There are two reasons I would not consider this a waste:
1. Practical use of a 4-byte exit code
If you want to return an exit code that exactly describes an error, you want more than 8 bits.
As an example you may want to group errors: the first byte could describe the vague type of error, the second byte could describe the function that caused the error, the third byte could give information about the cause of the error and the fourth byte describes additional debug information.
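A hedged sketch of that grouping (the field layout and error names are invented for illustration):

/* Pack four descriptive bytes into the int exit status. */
#define ERR_TYPE_IO      0x01   /* first byte: vague type of error    */
#define ERR_FUNC_PARSER  0x02   /* second byte: function that failed  */
#define ERR_CAUSE_EOF    0x03   /* third byte: cause of the error     */
#define ERR_DEBUG_EXTRA  0x04   /* fourth byte: additional debug info */

int main(void)
{
    /* Note: many shells will only show the low 8 bits of this value. */
    return (ERR_DEBUG_EXTRA << 24) | (ERR_CAUSE_EOF << 16) |
           (ERR_FUNC_PARSER << 8)  |  ERR_TYPE_IO;
}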
2. Padding
If you return a single short or char, it will still be aligned to fit into a machine word, which is often 4 bytes/32 bits depending on the architecture. This is called padding and means that you will most likely still need 32 bits of memory to return a single short or char.
The old-fashioned convention with most shells is to use the least significant 8 bits of int, not just 0 or 1. 16 bits is increasingly common due to that being the minimum size of an int allowed by the standard.
And what would the issue be with wasting space? Is the space really wasted? Is your computer so full of "stuff" that the remaining sizeof(int) * CHAR_BIT - 8 would make a difference? Could the architecture exploit that and use those remaining bits for something else? I very much doubt it.
So I wouldn't say the memory is wasted at all, since you get it back from the operating system when the program finishes. Perhaps extravagant? A bit like using a large wine glass for a small tipple, perhaps?
1st: Your assumption/statement that it usually returns only 0 or 1 is itself wrong.
Usually the return code is expected to be 0 if no error occurred, but otherwise it can return any number to represent different errors. And most (at least command-line) programs do so. Many programs also output negative numbers.
However, there are a few commonly used codes https://www.tldp.org/LDP/abs/html/exitcodes.html and here another SO member points to a unix header that contains some codes https://stackoverflow.com/a/24121322/2331592
So, after all, it is not just a C or C++ type decision; it also has historical reasons in how most operating systems work and expect programs to behave, and since the languages have to support that, C-like languages at least do so by using an int main(...).
2nd:
Your conclusion "It is wasting memory space" is wrong.
Using an int in comparison to a shorter type does not involve any waste.
Memory is usually handled in word-sized units (what that means may depend on your architecture) anyway.
Working with sub-word types involves computational overhead on some architectures (read: load the word, mask out unrelated bits; store: load the word, mask out the variable's bits, OR in the new value, write the word back).
The memory is not wasted unless you use it. If you write return 0; no memory is ever used at this point. If you return myMemorySaving8bitVar; you only have 1 byte used (most probably on the stack, if not optimized away entirely).
You're either working in or learning C, so I think it's a Real Good Idea that you are concerned with efficiency. However, it seems that there are a few things that seem to need clarifying here.
First, the int data type is not and never was intended to mean "32 bits". The idea was that int would be the most natural binary integer type on the target machine - usually the size of a register.
Second, the return value from main() is meant to accommodate a wide range of implementations on different operating systems. A POSIX system uses an unsigned 8-bit return code. Windows uses 32-bits that are interpreted by the CMD shell as 2's complement signed. Another OS might choose something else.
And finally, if you're worried about memory "waste", that's an implementation issue that isn't even an issue in this case. Return codes from main are typically returned in machine registers, not in memory, so there is no cost or savings involved. Even if there were, saving 2 bytes in the run of a nontrivial program is not worth any developer's time.
The answer is "because it usually doesn't return only 0 or 1." I found this thread from software engineering community that at least partially answers your question. Here are the two highlights, first from the accepted answer:
An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags (0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer.
An interesting point is also raised by Keith Thompson:
For example, in the dialect of C used in the Plan 9 operating system main is normally declared as a void function, but the exit status is returned to the calling environment by passing a string pointer to the exits() function. The empty string denotes success, and any non-empty string denotes some kind of failure. This could have been implemented by having main return a char* result.
Here's another interesting bit from a unix.com forum:
(Some of the following may be x86 specific.)
Returning to the original question: Where is the exit status stored? Inside the kernel.
When you call exit(n), the least significant 8 bits of the integer n are written to a cpu register. The kernel system call implementation will then copy it to a process-related data structure.
What if your code doesn't call exit()? The c runtime library responsible for invoking main() will call exit() (or some variant thereof) on your behalf. The return value of main(), which is passed to the c runtime in a register, is used as the argument to the exit() call.
Related to the last quote, here's another from cppreference.com
5) Execution of the return (or the implicit return upon reaching the end of main) is equivalent to first leaving the function normally (which destroys the objects with automatic storage duration) and then calling std::exit with the same argument as the argument of the return. (std::exit then destroys static objects and terminates the program)
Lastly, I found this really cool example here (although the author of the post is wrong in saying that the result returned is the returned value modulo 512). After compiling and executing the following:
int main() {
return 42001;
}
on my* POSIX-compliant system, echo $? returns 17. That is because 42001 % 256 == 17, which shows that 8 bits of data are actually used. With that in mind, choosing int ensures that enough storage is available for passing the program's exit status information, because, as per this answer, compliance with the C++ standard guarantees that the size of int (in bits)
can't be less than 8. That's because it must be large enough to hold "the eight-bit code units of the Unicode UTF-8 encoding form."
EDIT:
*As Andrew Henle pointed out in the comment:
A fully POSIX compliant system makes the entire int return value available, not just 8 bits. See pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html: "If si_code is equal to CLD_EXITED, then si_status holds the exit value of the process; otherwise, it is equal to the signal that caused the process to change state. The exit value in si_status shall be equal to the full exit value (that is, the value passed to _exit(), _Exit(), or exit(), or returned from main()); it shall not be limited to the least significant eight bits of the value."
I think this makes for an even stronger argument for the use of int over data types of smaller sizes.
As far as I understand, the number of bytes used for int is system dependent. Usually, 2 or 4 bytes are used for int.
As per Microsoft's documentation, __int8, __int16, __int32 and __int64 are Microsoft Specific keywords. Furthermore, __int16 uses 16-bits (i.e. 2 bytes).
Question: What are advantage/disadvantage of using __int16 (or int16_t)? For example, if I am sure that the value of my integer variable will never need more than 16 bits then, will it be beneficial to declare the variable as __int16 var (or int16_t var)?
UPDATE: I see that several comments/answers suggest using int16_t instead of __int16, which is a good suggestion but not really an advantage/disadvantage of using __int16. Basically, my question is: what is the advantage/disadvantage of saving 2 bytes by using a 16-bit integer instead of int?
Saving 2 bytes is almost never worth it. However, saving thousands of bytes is. If you have a large array containing integers, using a small integer type can save quite a lot of memory. This leads to faster code, because the less memory one uses, the fewer cache misses one gets (cache misses are a major loss of performance).
TL;DR: this is beneficial to do in large arrays, but pointless for 1-off variables.
The second use of these is for dealing with binary files and messages. If you are reading a binary file that uses 16-bit integers, well, it's pretty convenient if you can represent that type exactly in your code.
BTW, don't use Microsoft's versions. Use the standard versions (std::int16_t).
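For example, a minimal sketch (the file name and record count are made up, and a real reader would also have to deal with endianness):

#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

int main()
{
    std::ifstream in("samples.bin", std::ios::binary);
    std::vector<std::int16_t> samples(1024);             // exactly 2 bytes each
    in.read(reinterpret_cast<char*>(samples.data()),
            samples.size() * sizeof(std::int16_t));
    std::cout << "read " << in.gcount() / sizeof(std::int16_t)
              << " samples\n";
}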
It depends.
On x86, primitive types are generally aligned on their size. So 2-byte types would be aligned on a 2-byte boundary. This is useful when you have more than one of these short variables, because you will be saving 50% of space. That directly translates to better memory and cache utilization and thus theoretically, better performance.
On the other hand, doing arithmetic on shorter-than-int types usually involves widening conversion to int. So if you do a lot of arithmetic on these types, using int types might result in better performance (contrived example).
So if you care about performance of a critical section of code, profile it to find out for sure if using a certain data type is faster or slower.
A possible rule of thumb would be - if you're memory-bound (i.e. you have lots of variables and especially arrays), use data types as short as possible. If not - don't worry about it and use int types.
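As a small illustration of the widening conversion mentioned above (requires C++17 for is_same_v):

#include <cstdint>
#include <type_traits>

int main()
{
    std::int16_t a = 1000, b = 2000;
    // The operands are promoted before the addition, so the type of
    // the expression a + b is int, not int16_t.
    static_assert(std::is_same_v<decltype(a + b), int>,
                  "operands are promoted to int before the addition");
}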
If for some reason you just need a shorter integer type, the language already has one - called short. Unless you know you need exactly 16 bits, there's really no good reason not to stick with the size-agnostic short and int types. The broad idea is that these types should align well with the target architecture (for example, its machine word size).
That being said, there's no need to use the platform-specific type (__int16); you can just use the standard one:
int16_t
See https://en.cppreference.com/w/cpp/types/integer for more information and standard types
Even if you still insist on __int16, you probably want a type alias, something like:
using my_short = __int16;
Update
Your main question is:
What is the advantage/disadvantage of saving 2 bytes by using a 16-bit version of an integer instead of int?
If you have a lot of data (in the ballpark of at least some 100,000-1,000,000 elements as a rule of thumb), then there could be an overall performance gain from using less CPU cache. Overall there's no disadvantage to using a smaller type - except for the obvious one - and the possible conversions explained in this answer.
The main reason for using these types is to be sure about the size of your variable across different architectures and compilers. We call it "code reusability" and "portability".
In higher-level modern languages, all of this is handled by the compiler/interpreter/virtual machine/etc., so you don't need to worry about it, but it comes with some performance and memory-usage costs.
When you have some kind of limitation you may need to optimize everything. The best example is embedded systems, which have a very limited amount of memory and run at low frequency. On the other hand, there are lots of compilers out there with different implementations. Some of them interpret "int" as a "16-bit" value and some as "32-bit".
For example, say you receive a specific stream of values over a communication system and you want to save them in a buffer or array; you want to make sure the input data is always interpreted as 16 bits, nothing else.
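A small sketch of that idea (the buffer size is illustrative):

#include <cstdint>

// With plain int the buffer below could be 512 or 1024 bytes depending on the
// compiler; with std::int16_t its layout is the same everywhere.
std::int16_t rx_buffer[256];

static_assert(sizeof(rx_buffer) == 256 * 2,
              "each received value occupies exactly 16 bits");

int main() { return 0; }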
I am programming C++ using gcc on an obscure system called linux x86-64. I was hoping that may be there are a few folks out there who have used this same, specific system (and might also be able to help me understand what is a valid pointer on this system). I do not care to access the location pointed to by the pointer, just want to calculate it via pointer arithmetic.
According to section 3.9.2 of the standard:
A valid value of an object pointer type represents either the address of a byte in memory (1.7) or a null pointer.
And according to [expr.add]/4:
When an expression that has integral type is added to or subtracted
from a pointer, the result has the type of the pointer operand. If the
expression P points to element x[i] of an array object x with n
elements, the expressions P + J and J + P (where J has the value j)
point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤
n; otherwise, the behavior is undefined. Likewise, the expression P -
J points to the (possibly-hypothetical) element x[i − j] if 0 ≤ i − j
≤ n; otherwise, the behavior is undefined.
And according to a stackoverflow question on valid C++ pointers in general:
Is 0x1 a valid memory address on your system? Well, for some embedded systems it is. For most OSes using virtual memory, the page beginning at zero is reserved as invalid.
Well, that makes it perfectly clear! So, besides NULL, a valid pointer is a byte in memory, no, wait, it's an array element including the element right after the array, no, wait, it's a virtual memory page, no, wait, it's Superman!
(I guess that by "Superman" here I mean "garbage collectors"... not that I read that anywhere, just smelled it. Seriously, though, all the best garbage collectors don't break in a serious way if you have bogus pointers lying around; at worst they just don't collect a few dead objects every now and then. Doesn't seem like anything worth messing up pointer arithmetic for.).
So, basically, a proper compiler would have to support all of the above flavors of valid pointers. I mean, a hypothetical compiler having the audacity to generate undefined behavior just because a pointer calculation is bad would be dodging at least the 3 bullets above, right? (OK, language lawyers, that one's yours).
Furthermore, many of these definitions are next to impossible for a compiler to know about. There are just so many ways of creating a valid memory byte (think lazy segfault trap microcode, sideband hints to a custom pagetable system that I'm about to access part of an array, ...), mapping a page, or simply creating an array.
Take, for example, a largish array I created myself, and a smallish array that I let the default memory manager create inside of that:
#include <iostream>
#include <inttypes.h>
#include <assert.h>
using namespace std;
extern const char largish[1000000000000000000L];
asm("largish = 0");
int main()
{
    char* smallish = new char[1000000000];
    cout << "largish base = " << (long)largish << "\n"
         << "largish length = " << sizeof(largish) << "\n"
         << "smallish base = " << (long)smallish << "\n";
}
Result:
largish base = 0
largish length = 1000000000000000000
smallish base = 23173885579280
(Don't ask how I knew that the default memory manager would allocate something inside of the other array. It's an obscure system setting. The point is I went through weeks of debugging torment to make this example work, just to prove to you that different allocation techniques can be oblivious to one another).
Given the number of ways of managing memory and combining program modules that are supported in linux x86-64, a C++ compiler really can't know about all of the arrays and various styles of page mappings.
Finally, why do I mention gcc specifically? Because it often seems to treat any pointer as a valid pointer... Take, for instance:
char* super_tricky_add_operation(char* a, long b) {return a + b;}
While after reading all the language specs you might expect the implementation of super_tricky_add_operation(a, b) to be rife with undefined behavior, it is in fact very boring, just an add or lea instruction. Which is so great, because I can use it for very convenient and practical things like non-zero-based arrays if nobody is putzing with my add instructions just to make a point about invalid pointers. I love gcc.
In summary, it seems that any C++ compiler supporting standard linkage tools on linux x86-64 would almost have to treat any pointer as a valid pointer, and gcc appears to be a member of that club. But I'm not quite 100% sure (given enough fractional precision, that is).
So... can anyone give a solid example of an invalid pointer in gcc linux x86-64? By solid I mean leading to undefined behavior. And explain what gives rise to the undefined behavior allowed by the language specs?
(or provide gcc documentation proving the contrary: that all pointers are valid).
Usually pointer math does exactly what you'd expect regardless of whether pointers are pointing at objects or not.
UB doesn't mean it has to fail. Only that it's allowed to make the whole rest of the program behave strangely in some way. UB doesn't mean that just the pointer-compare result can be "wrong", it means the entire behaviour of the whole program is undefined. This tends to happen with optimizations that depend on a violated assumption.
Interesting corner cases include an array at the very top of virtual address space: a pointer to one-past-the-end would wrap to zero, so start < end would be false?!? But pointer comparison doesn't have to handle that case, because the Linux kernel won't ever map the top page, so pointers into it can't be pointing into or just past objects. See Why can't I mmap(MAP_FIXED) the highest virtual page in a 32-bit Linux process on a 64-bit kernel?
Related:
GCC does have a max object size of PTRDIFF_MAX (which is a signed type). So for example, on 32-bit x86, an array larger than 2GB isn't fully supported for all cases of code-gen, although you can mmap one.
See my comment on What is the maximum size of an array in C? - this restriction lets gcc implement pointer subtraction (to get a size) without keeping the carry-out from the high bit, for types wider than char where the C subtraction result is in objects, not bytes, so in asm it's (a - b) / sizeof(T).
Don't ask how I knew that the default memory manager would allocate something inside of the other array. It's an obscure system setting. The point is I went through weeks of debugging torment to make this example work, just to prove to you that different allocation techniques can be oblivious to one another.
First of all, you never actually allocated the space for large[]. You used inline asm to make it start at address 0, but did nothing to actually get those pages mapped.
The kernel won't overlap existing mapped pages when new uses brk or mmap to get new memory from the kernel, so in fact static and dynamic allocation can't overlap.
Second, char[1000000000000000000L] ~= 2^59 bytes. Current x86-64 hardware and software only support canonical 48-bit virtual addresses (sign-extended to 64-bit). This will change with a future generation of Intel hardware which adds another level of page tables, taking us up to 48+9 = 57-bit addresses. (Still with the top half used by the kernel, and a big hole in the middle.)
Your unallocated space from 0 to ~2^59 covers all user-space virtual memory addresses that are possible on x86-64 Linux, so of course anything you allocate (including other static arrays) will be somewhere "inside" this fake array.
Removing the extern const from the declaration (so the array is actually allocated, https://godbolt.org/z/Hp2Exc) runs into the following problems:
//extern const
char largish[1000000000000000000L];
//asm("largish = 0");
/* rest of the code unchanged */
RIP-relative or 32-bit absolute (-fno-pie -no-pie) addressing can't reach static data that gets linked after large[] in the BSS, with the default code model (-mcmodel=small where all static code+data is assumed to fit in 2GB)
$ g++ -O2 large.cpp
/usr/bin/ld: /tmp/cc876exP.o: in function `_GLOBAL__sub_I_largish':
large.cpp:(.text.startup+0xd7): relocation truncated to fit: R_X86_64_PC32 against `.bss'
/usr/bin/ld: large.cpp:(.text.startup+0xf5): relocation truncated to fit: R_X86_64_PC32 against `.bss'
collect2: error: ld returned 1 exit status
compiling with -mcmodel=medium places large[] in a large-data section where it doesn't interfere with addressing other static data, but it itself is addressed using 64-bit absolute addressing. (Or -mcmodel=large does that for all static code/data, so every call is indirect movabs reg,imm64 / call reg instead of call rel32.)
That lets us compile and link, but then the executable won't run because the kernel knows that only 48-bit virtual addresses are supported and won't map the program in its ELF loader before running it, or for PIE before running ld.so on it.
peter@volta:/tmp$ g++ -fno-pie -no-pie -mcmodel=medium -O2 large.cpp
peter@volta:/tmp$ strace ./a.out
execve("./a.out", ["./a.out"], 0x7ffd788a4b60 /* 52 vars */) = -1 EINVAL (Invalid argument)
+++ killed by SIGSEGV +++
Segmentation fault (core dumped)
peter@volta:/tmp$ g++ -mcmodel=medium -O2 large.cpp
peter@volta:/tmp$ strace ./a.out
execve("./a.out", ["./a.out"], 0x7ffdd3bbad00 /* 52 vars */) = -1 ENOMEM (Cannot allocate memory)
+++ killed by SIGSEGV +++
Segmentation fault (core dumped)
(Interesting that we get different error codes for PIE vs non-PIE executables, but still before execve() even completes.)
Tricking the compiler + linker + runtime with asm("largish = 0"); is not very interesting, and creates obvious undefined behaviour.
Fun fact #2: x64 MSVC doesn't support static objects larger than 2^31-1 bytes. IDK if it has a -mcmodel=medium equivalent. Basically GCC fails to warn about objects too large for the selected memory model.
<source>(7): error C2148: total size of array must not exceed 0x7fffffff bytes
<source>(13): warning C4311: 'type cast': pointer truncation from 'char *' to 'long'
<source>(14): error C2070: 'char [-1486618624]': illegal sizeof operand
<source>(15): warning C4311: 'type cast': pointer truncation from 'char *' to 'long'
Also, it points out that long is the wrong type for pointers in general (because Windows x64 is an LLP64 ABI, where long is 32 bits). You want intptr_t or uintptr_t, or something equivalent to printf("%p") that prints a raw void*.
The Standard does not anticipate the existence of any storage beyond that which the implementation provides via objects of static, automatic, or thread duration, or the use of standard-library functions like calloc. It consequently imposes no restrictions on how implementations process pointers to such storage, since from its perspective such storage doesn't exist, pointers that meaningfully identify non-existent storage don't exist, and things that don't exist don't need to have rules written about them.
That doesn't mean that the people on the Committee weren't well aware that many execution environments provided forms of storage that C implementations might know nothing about. They expected, however, that people who actually worked with various platforms would be better placed than the Committee to determine what kinds of things programmers would need to do with such "outside" addresses, and how best to support such needs. No need for the Standard to concern itself with such things.
As it happens, there are some execution environments where it is more convenient for a compiler to treat pointers arithmetic like integer math than to do anything else, and many compilers for such platforms treat pointer arithmetic usefully even in cases where they're not required to do so. For 32-bit and 64-bit x86 and x64, I don't think there are any bit patterns for invalid non-null addresses, but it may be possible to form pointers that don't behave as valid pointers to the objects they address.
For example, given something like:
char x=1,y=2;
ptrdiff_t delta = (uintptr_t)&y - (uintptr_t)&x;
char *p = &x+delta;
*p = 3;
even if pointer representation is defined in such a way that using integer arithmetic to add delta to the address of x would yield y, that would in no way guarantee that a compiler would recognize that operations on *p might affect y, even if p holds y's address. Pointer p would effectively behave as though its address was invalid even though the bit pattern would match that of y's address.
The following examples show that GCC specifically assumes at least the following:
A global array cannot be at address 0.
An array cannot wrap around address 0.
Examples of unexpected behavior arising from arithmetic on invalid pointers in gcc linux x86-64 C++ (thank you melpomene):
largish == NULL evaluates to false in the program in the question.
unsigned n = ...; if (ptr + n < ptr) { /*overflow */ } can be optimized to if (false).
int arr[123]; int n = ...; if (arr + n < arr || arr + n > arr + 123) can be optimized to if (false).
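For illustration, a minimal sketch of the second pattern above (not from the original answer; the exact behaviour depends on compiler and optimisation level):

#include <iostream>

// The overflow check below relies on undefined pointer arithmetic, so with
// optimisation enabled GCC may compile it to "return false" unconditionally.
bool overflow_check(const char* ptr, unsigned long n)
{
    return ptr + n < ptr;
}

int main()
{
    char buf[16];
    // For an n that stays in bounds the answer is false anyway; the point is
    // that this check cannot be relied on to detect wraparound.
    std::cout << overflow_check(buf, 8) << '\n';   // prints 0
}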
Note that these examples all involve comparison of the invalid pointers, and therefore may not affect the practical case of non-zero-based arrays. Therefore I have opened a new question of a more practical nature.
Thank you everyone in the chat for helping to narrow down the question.
std::vector::size() returns a size_type which is unsigned and usually the same as size_t, e.g. it is 8 bytes on 64bit platforms.
In contrast, QVector::size() returns an int, which is usually 4 bytes even on 64-bit platforms, and on top of that it is signed, which means it can only go halfway to 2^32.
Why is that? This seems quite illogical and also technically limiting, and while it is not very likely that you may ever need more than 2^32 elements, the usage of signed int cuts that range in half for no apparent good reason. Perhaps to avoid compiler warnings for people too lazy to declare i as a uint rather than an int, who decided that making all containers return a size type that makes no sense is a better solution? The reason could not possibly be that dumb?
This has been discussed several times since Qt 3 at least, and the QtCore maintainer stated a while ago that no change would happen until Qt 7, if it ever does.
When the discussion was going on back then, I thought that someone would bring it up on Stack Overflow sooner or later... and probably on several other forums and Q/A, too. Let us try to demystify the situation.
In general you need to understand that there is no better or worse here, as QVector is not a replacement for std::vector. The latter does not do any Copy-On-Write (COW), and that comes with a price. It is meant for a different use case, basically. It is mostly used inside Qt applications and the framework itself, initially for QWidgets in the early days.
size_t has its own issues too, after all, as I will indicate below.
Without me interpreting the maintainer to you, I will just quote Thiago directly to carry the message of the official stance on:
For two reasons:
1) it's signed because we need negative values in several places in the API:
indexOf() returns -1 to indicate a value not found; many of the "from"
parameters can take negative values to indicate counting from the end. So even
if we used 64-bit integers, we'd need the signed version of it. That's the
POSIX ssize_t or the Qt qintptr.
This also avoids sign-change warnings when you implicitly convert unsigneds to
signed:
-1 + size_t_variable => warning
size_t_variable - 1 => no warning
2) it's simply "int" to avoid conversion warnings or ugly code related to the
use of integers larger than int.
io/qfilesystemiterator_unix.cpp
size_t maxPathName = ::pathconf(nativePath.constData(), _PC_NAME_MAX);
if (maxPathName == size_t(-1))
io/qfsfileengine.cpp
if (len < 0 || len != qint64(size_t(len))) {
io/qiodevice.cpp
qint64 QIODevice::bytesToWrite() const
{
return qint64(0);
}
return readSoFar ? readSoFar : qint64(-1);
That was one email from Thiago and then there is another where you can find some detailed answer:
Even today, software that has a core memory of more than 4 GB (or even 2 GB)
is an exception, rather than the rule. Please be careful when looking at the
memory sizes of some process tools, since they do not represent actual memory
usage.
In any case, we're talking here about having one single container addressing
more than 2 GB of memory. Because of the implicitly shared & copy-on-write
nature of the Qt containers, that will probably be highly inefficient. You need
to be very careful when writing such code to avoid triggering COW and thus
doubling or worse your memory usage. Also, the Qt containers do not handle OOM
situations, so if you're anywhere close to your memory limit, Qt containers
are the wrong tool to use.
The largest process I have on my system is qtcreator and it's also the only
one that crosses the 4 GB mark in VSZ (4791 MB). You could argue that it is an
indication that 64-bit containers are required, but you'd be wrong:
Qt Creator does not have any container requiring 64-bit sizes, it simply
needs 64-bit pointers
It is not using 4 GB of memory. That's just VSZ (mapped memory). The total
RAM currently accessible to Creator is merely 348.7 MB.
And it is using more than 4 GB of virtual space because it is a 64-bit
application. The cause-and-effect relationship is the opposite of what you'd
expect. As a proof of this, I checked how much virtual space is consumed by
padding: 800 MB. A 32-bit application would never do that, that's 19.5% of the
addressable space on 4 GB.
(padding is virtual space allocated but not backed by anything; it's only
there so that something else doesn't get mapped to those pages)
Going into this topic even further with Thiago's responses, see this:
Personally, I'm VERY happy that Qt collection sizes are signed. It seems
nuts to me that an integer value potentially used in an expression using
subtraction be unsigned (e.g. size_t).
An integer being unsigned doesn't guarantee that an expression involving
that integer will never be negative. It only guarantees that the result
will be an absolute disaster.
On the other hand, the C and C++ standards define the behaviour of unsigned
overflows and underflows.
Signed integers do not overflow or underflow. I mean, they do because the types
and CPU registers have a limited number of bits, but the standards say they
don't. That means the compiler will always optimise assuming you don't over-
or underflow them.
Example:
for (int i = 1; i >= 1; ++i)
This is optimised to an infinite loop because signed integers do not overflow.
If you change it to unsigned, then the compiler knows that it might overflow
and come back to zero.
Some people didn't like that: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30475
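To illustrate the signed-overflow assumption described in the quote, a small sketch (not part of the original discussion; exact code generation depends on the compiler and flags):

#include <climits>
#include <cstdio>

// Because signed overflow is undefined, the compiler may fold (x + 1 > x) to
// true for the signed version, regardless of x. For unsigned, wraparound is
// defined, so the comparison must be evaluated honestly and is false at UINT_MAX.
bool greater_signed(int x)        { return x + 1 > x; }
bool greater_unsigned(unsigned x) { return x + 1 > x; }

int main()
{
    std::printf("%d\n", greater_signed(42));          // 1
    std::printf("%d\n", greater_unsigned(UINT_MAX));  // 0: wrapped around to 0
}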
unsigned numbers are values mod 2^n for some n.
Signed numbers are bounded integers.
Using unsigned values as approximations for 'positive integers' runs into the problem that common values are near the edge of the domain where unsigned values behave differently than plain integers.
The advantage is that unsigned approximation reaches higher positive integers, and under/overflow are well defined (if random when looked at as a model of Z).
But really, ptrdiff_t would be better than int.
A C++ specific question. So I read a question about what makes a program 32-bit/64-bit, and the answer it got was something like this (sorry, I can't find the question; it was some days ago that I looked at it and I can't find it again :( ): as long as you don't make any "pointer assumptions", you only need to recompile it. So my question is: what are pointer assumptions? To my understanding there are 32-bit pointers and 64-bit pointers, so I figure it has something to do with that. Please show the difference in code between them. Any other good habits to keep in mind while writing code that help make it easy to convert between the two are also welcome :) though please share examples with them.
PS: I know there is this post:
How do you write code that is both 32 bit and 64 bit compatible?
but I thought it was kind of too general, with no good examples for new programmers like myself. Like, what is a 32-bit storage unit, etc.? Kind of hoping to break it down a bit more (no pun intended ^^).
In general it means that your program behavior should never depend on the sizeof() of any types (that are not made to be of some exact size), neither explicitly nor implicitly (this includes possible struct alignments as well).
Pointers are just a subset of them, and it probably also means that you should not try to rely on being able to convert between unrelated pointer types and/or integers, unless they are specifically made for this (e.g. intptr_t).
In the same way, you need to take care with things written to disk, where you should also never rely on the sizes of, e.g., built-in types being the same everywhere.
Whenever you have to (because of e.g. external data formats) use explicitly sized types like uint32_t.
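For example, a hedged sketch of a record written to disk (the field names and file name are made up): the explicitly sized types keep the on-disk layout the same in 32-bit and 64-bit builds.

#include <cstdint>
#include <cstdio>

struct Record {
    std::uint32_t id;         // always 4 bytes
    std::uint32_t length;     // always 4 bytes
    std::int64_t  timestamp;  // always 8 bytes
};

int main()
{
    Record r{1u, 42u, 1234567890LL};
    std::FILE* f = std::fopen("record.bin", "wb");
    if (f) {
        // sizeof(Record) is 16 on common ABIs; endianness still needs care.
        std::fwrite(&r, sizeof r, 1, f);
        std::fclose(f);
    }
}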
For a well-formed program (that is, a program written according to syntax and semantic rules of C++ with no undefined behaviour), the C++ standard guarantees that your program will have one of a set of observable behaviours. The observable behaviours vary due to unspecified behaviour (including implementation-defined behaviour) within your program. If you avoid unspecified behaviour or resolve it, your program will be guaranteed to have a specific and certain output. If you write your program in this way, you will witness no differences between your program on a 32-bit or 64-bit machine.
A simple (forced) example of a program that will have different possible outputs is as follows:
#include <iostream>

int main()
{
    std::cout << sizeof(void*) << std::endl;
    return 0;
}
This program will likely have different output on 32- and 64-bit machines (but not necessarily). The result of sizeof(void*) is implementation-defined. However, it is certainly possible to have a program that contains implementation-defined behaviour but is resolved to be well-defined:
#include <iostream>

int main()
{
    int size = sizeof(void*);
    if (size != 4) {
        size = 4;
    }
    std::cout << size << std::endl;
    return 0;
}
This program will always print out 4, despite the fact it uses implementation-defined behaviour. This is a silly example because we could have just done int size = 4;, but there are cases when this does appear in writing platform-independent code.
So the rule for writing portable code is: aim to avoid or resolve unspecified behaviour.
Here are some tips for avoiding unspecified behaviour:
Do not assume anything about the size of the fundamental types beyond what the C++ standard specifies. That is, a char is at least 8 bits, both short and int are at least 16 bits, and so on.
Don't try to do pointer magic (casting between pointer types or storing pointers in integral types).
Don't use an unsigned char* to read the value representation of a non-char object (for serialisation or related tasks).
Avoid reinterpret_cast.
Be careful when performing operations that may over or underflow. Think carefully when doing bit-shift operations.
Be careful when doing arithmetic on pointer types.
Don't use void*.
There are many more occurrences of unspecified or undefined behaviour in the standard. It's well worth looking them up. There are some great articles online that cover some of the more common differences that you'll experience between 32- and 64-bit platforms.
"Pointer assumptions" is when you write code that relies on pointers fitting in other data types, e.g. int copy_of_pointer = ptr; - if int is a 32-bit type, then this code will break on 64-bit machines, because only part of the pointer will be stored.
So long as pointers are only stored in pointer types, it should be no problem at all.
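For example, a small sketch (hypothetical snippet) of storing a pointer in an integer type that is guaranteed to be wide enough, instead of int:

#include <cstdint>
#include <iostream>

int main()
{
    int value = 42;
    int* p = &value;

    // Unlike int, intptr_t is defined to be wide enough to hold a pointer.
    std::intptr_t copy_of_pointer = reinterpret_cast<std::intptr_t>(p);

    int* back = reinterpret_cast<int*>(copy_of_pointer);  // round-trips safely
    std::cout << *back << '\n';                            // prints 42
}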
Typically, pointers are the size of the "machine word", so on a 32-bit architecture, 32 bits, and on a 64-bit architecture, all pointers are 64-bit. However, there are SOME architectures where this is not true. I have never worked on such machines myself [other than x86 with its "far" and "near" pointers - but let's ignore that for now].
Most compilers will tell you when you convert pointers to integers that the pointer doesn't fit into, so if you enable warnings, MOST of the problems will become apparent - fix the warnings, and chances are pretty decent that your code will work straight away.
There will be no difference between 32-bit and 64-bit code; the goal of C/C++ and other programming languages is portability, as opposed to assembly language.
The only difference will be the distribution you compile your code on; all the work is done automatically by your compiler/linker, so just don't think about that.
But: if you are programming on a 64-bit distribution and you need to use an external library, for example SDL, the external library will also have to be compiled as 64-bit if you want your code to compile.
One thing to know is that your ELF file will be bigger on a 64-bit distribution than on a 32-bit one; that's just logical.
What's the point about pointers? When you increment/change a pointer, the compiler advances it by the size of the pointed-to type.
The size of the contained type is defined by your processor's register size/the distribution you're working on.
But you just don't have to care about this; the compilation does everything for you.
To sum up: that's why you can't execute a 64-bit ELF file on a 32-bit distribution.
Typical pitfalls for 32bit/64bit porting are:
1. The implicit assumption by the programmer that sizeof(void*) == 4 * sizeof(char).
If you're making this assumption and e.g. allocate arrays that way ("I need 20 pointers so I allocate 80 bytes"), your code breaks on 64bit because it'll cause buffer overruns.
The "kitten-killer" , int x = (int)&something; (and the reverse, void* ptr = (void*)some_int). Again an assumption of sizeof(int) == sizeof(void*). This doesn't cause overflows but looses data - the higher 32bit of the pointer, namely.
Both of these issues are of a class called type aliasing (assuming identity / interchangability / equivalence on a binary representation level between two types), and such assumptions are common; like on UN*X, assuming time_t, size_t, off_t being int, or on Windows, HANDLE, void* and long being interchangeable, etc...
3. Assumptions about data structure / stack space usage (see 5. below as well). In C/C++ code, local variables are allocated on the stack, and the space used there is different between 32bit and 64bit mode due to the point below, and due to the different rules for passing arguments (32bit x86 usually on the stack, 64bit x86 in part in registers). Code that just about gets away with the default stack size on 32bit might cause stack overflow crashes on 64bit.
This is relatively easy to spot as a cause of the crash but depending on the configurability of the application possibly hard to fix.
4. Timing differences between 32bit and 64bit code (due to different code sizes / cache footprints, or different memory access characteristics / patterns, or different calling conventions) might break "calibrations". Say, for (int i = 0; i < 1000000; ++i) sleep(0); is likely going to have different timings for 32bit and 64bit ...
5. Finally, the ABI (Application Binary Interface). There are usually bigger differences between 64bit and 32bit environments than just the size of pointers...
Currently, two main "branches" of 64bit environments exist, IL32P64 (what Win64 uses - int and long are int32_t, only uintptr_t/void* is uint64_t, talking in terms of the sized integers from <stdint.h>) and LP64 (what UN*X uses - int is int32_t, long is int64_t and uintptr_t/void* is uint64_t), but there are the "subdivisions" of different alignment rules as well - some environments assume long, float or double align at their respective sizes, while others assume they align at multiples of four bytes. In 32bit Linux, they all align at four bytes, while in 64bit Linux, float aligns at four, long and double at eight-byte multiples.
The consequence of these rules is that in many cases, both sizeof(struct { ... }) and the offsets of structure/class members are different between 32bit and 64bit environments even if the data type declaration is completely identical.
Beyond impacting array/vector allocations, these issues also affect data input/output, e.g. through files - if a 32bit app writes e.g. struct { char a; int b; char c; long d; double e; } to a file that the same app recompiled for 64bit reads in, the result will not be quite what's hoped for (a small sketch of this follows below).
The examples just given are only about language primitives (char, int, long etc.) but of course affect all sorts of platform-dependent / runtime library data types, whether size_t, off_t, time_t, HANDLE, essentially any nontrivial struct/union/class ... - so the space for error here is large.
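A hedged sketch of that layout issue (the figures in the comments are typical for x86 Linux; exact numbers depend on the ABI):

#include <cstddef>
#include <iostream>

struct Mixed {
    char   a;
    int    b;
    char   c;
    long   d;   // 4 bytes on 32bit Linux and Win64, 8 bytes on 64bit Linux
    double e;
};

int main()
{
    // Typical results: 24 bytes on 32bit Linux, 32 bytes on 64bit Linux,
    // with member d at offset 12 vs. 16 - same declaration, different layout.
    std::cout << "sizeof(Mixed) = " << sizeof(Mixed) << '\n'
              << "offset of d   = " << offsetof(Mixed, d) << '\n';
}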
And then there's the lower-level differences, which come into play e.g. for hand-optimized assembly (SSE/SSE2/...); 32bit and 64bit have different (numbers of) registers, different argument passing rules; all of this affects strongly how such optimizations perform and it's very likely that e.g. SSE2 code which gives best performance in 32bit mode will need to be rewritten / needs to be enhanced to give best performance 64bit mode.
There's also code design constraints which are very different for 32bit and 64bit, particularly around memory allocation / management; an application that's been carefully coded to "maximize the hell out of the mem it can get in 32bit" will have complex logic on how / when to allocate/free memory, memory-mapped file usage, internal caching, etc - much of which will be detrimental in 64bit where you could "simply" take advantage of the huge available address space. Such an app might recompile for 64bit just fine, but perform worse there than some "ancient simple deprecated version" which didn't have all the maximize-32bit peephole optimizations.
So, ultimately, it's also about enhancements / gains, and that's where more work, partly in programming, partly in design/requirements, comes in. Even if your app cleanly recompiles both on 32bit and 64bit environments and is verified on both, is it actually benefitting from 64bit? Are there changes that can/should be done to the code logic to make it do more / run faster in 64bit? Can you do those changes without breaking 32bit backward compatibility? Without negative impacts on the 32bit target? Where will the enhancements be, and how much can you gain?
For a large commercial project, answers to these questions are often important markers on the roadmap because your starting point is some existing "money maker"...