avoiding array index promotion in hopes of better performance - c++

I have a huge array of integers and those integers are not greater than 0xFFFF. Therefore I would like to save some space and store them as unsigned short.
unsigned short addresses[50000 /* big number over here */];
Later I would use this array as follows
data[addresses[i]];
When I use only 2 bytes to store my integers, they are promoted to either 4 or 8 bytes (depending on the architecture) when used as array indices. Speed is very important to me, so should I store my integers as unsigned int instead, to avoid wasting time on type promotion? This array may get absolutely massive and I would also like to save some space, but not at the cost of performance. What should I do?
EDIT: If I were to use an address-sized type for my integers, which type should I use? size_t, or maybe something else?

Anything to do with C-style arrays usually gets compiled into machine instructions that use the memory addressing of the architecture for which you compile, so trying to save space on array indexes will not work.
If anything, you might break whatever optimizations your compiler might want to implement.
Five million integer values, even on a 64-bit machine, come to about 40 MB of RAM.
While I am sure your code does other things, this is not that much memory to sacrifice performance.
Since you chose to keep all those values in RAM in the first place, presumably for speed, don't ruin it.
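To make the trade-off concrete, here is a minimal sketch of the two layouts being weighed (the array size, the names, and the element type of data are illustrative, not taken from the question):

#include <cstddef>
#include <cstdint>

constexpr std::size_t kNumAddresses = 5000000;  // "big number over here"

std::uint16_t addresses16[kNumAddresses]; // 2 bytes per index, values <= 0xFFFF
std::size_t   addresses64[kNumAddresses]; // native index width, no widening needed
int           data[0x10000];              // whatever is being indexed

int sum16() {
    int s = 0;
    for (std::size_t i = 0; i < kNumAddresses; ++i)
        s += data[addresses16[i]];  // index widened as part of the load (typically a zero-extending move)
    return s;
}

int sum64() {
    int s = 0;
    for (std::size_t i = 0; i < kNumAddresses; ++i)
        s += data[addresses64[i]];  // no widening, but four times the index storage
    return s;
}

Measuring both variants on the target machine is the only reliable way to settle which one wins for a given workload.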

Related

Should I store numbers in chars to save memory?

The question is pretty simple. Should I store some numbers which will not surpass 255 in a char or uint8_t variable to save memory?
Is it common or even worth it saving a few bytes of memory?
Depends on your processor and the amount of memory your platform has.
For 16-bit, 32-bit and 64-bit processors, it doesn't make a lot of sense. A 32-bit processor prefers 32-bit quantities, so by using smaller ones you make it work a little harder. These processors are more efficient with 32-bit numbers (including in registers).
Remember that you are trading memory space for processing time. Packing and unpacking values costs more execution time than not packing.
Some embedded systems are space constrained, so restricting on size makes sense.
In present-day computing, reliability (a.k.a. robustness) and quality are high on the priority list. Memory efficiency and program efficiency have moved lower on the priority ladder. Development costs probably also matter more than the memory savings.
It depends mostly on how many numbers you have to store. If you have billions of them, then yes, it makes sense to store them as compactly as possible. If you only have a few thousand, then no, it probably does not make sense. If you have millions, it is debatable.
As a rule of thumb, int is the "natural size" of integers on your platform. You may need a few extra bytes of code to read and write variables of other sizes. Still, if you have thousands of small values, a few extra bytes of code are worth the savings in data.
So, std::vector<unsigned char> vc; makes a lot more sense than just a single unsigned char c; value.
If it's a local variable, it mostly doesn't matter (in my experience, different compilers with different optimization options do, usually negligibly, better with different type choices).
If it's saved elsewhere (static memory / heap) because you have many of those entities, then uint_least8_t is probably the best choice (the least and fast types are generally guaranteed to exist; the exact-width types generally are not).
unsigned char will reliably provide enough bits too (UCHAR_MAX is guaranteed to be at least 255 and since sizeof(unsigned char) is guaranteed to be 1 and sizeof(AnyType) >= 1, there can be no unsigned integer type that's smaller).
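As a rough sketch of that distinction (the sizes in the comments are what a typical implementation gives; only the minimums are guaranteed by the standard):

#include <cstdint>
#include <vector>

int main() {
    // Many stored values: the narrow type saves real memory.
    std::vector<std::uint_least8_t> small(1000000); // ~1 MB of payload on typical platforms
    std::vector<int>                wide(1000000);  // ~4 MB of payload on typical platforms

    // A single local: its width is irrelevant in practice.
    std::uint_least8_t lone = 200;
    return lone == 200 ? 0 : 1;
}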

Is treating two uint8_ts as a uint16_t less efficient

Suppose I created a class that took a template parameter equal to the number of uint8_ts I want to string together into a big int.
This way I can create a huge int like this:
SizedInt<1000> unspeakablyLargeNumber; //A 1000 byte number
Now the question arises: am I killing my speed by using uint8_ts instead of a larger built-in type?
For example:
SizedInt<2> num1;
uint16_t num2;
Are num1 and num2 the same speed, or is num2 faster?
It would undoubtedly be slower to use uint8_t[2] instead of uint16_t.
Take addition, for example. In order to get the uint8_t[2] speed up to the speed of uint16_t, the compiler would have to figure out how to translate your add-with-carry logic and fuse those multiple instructions into a single, wider addition. I'm sure that some compilers out there are capable of such optimizations sometimes, but there are many circumstances which could make the optimization unlikely or impossible.
On some architectures, this will even apply to loading / storing, since uint8_t[2] usually has different alignment requirements than uint16_t.
Typical bignum libraries, like GMP, work on the largest words that are convenient for the architecture. On x64, this means using an array of uint64_t instead of an array of something smaller like uint8_t. Adding two 64-bit numbers is quite fast on modern microprocessors; in fact, it is usually the same speed as adding two 8-bit numbers, to say nothing of the data dependencies that are introduced by propagating carry bits through arrays of small numbers. These data dependencies mean that you will often only be able to add one element of your array per clock cycle, so you want those elements to be as large as possible. (At a hardware level, there are special tricks which allow carry bits to quickly move across the entire 64-bit operation, but these tricks are unavailable in software.)
If you desire, you can always use template specialization to choose the right sized primitives to make the most space-efficient bignums you want. Otherwise, using an array of uint64_t is much more typical.
If you have the choice, it is usually best to simply use GMP. Portions of GMP are written in assembly to make bignum operations much faster than they would be otherwise.
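For illustration only, here is a minimal sketch (not GMP's actual code) of limb-by-limb addition over 64-bit words; the serial carry dependency described above is the carry variable threading through the loop:

#include <cstddef>
#include <cstdint>

// Adds two equally sized bignums stored as little-endian arrays of 64-bit limbs.
// Returns the final carry out of the most significant limb.
std::uint64_t add_limbs(const std::uint64_t* a, const std::uint64_t* b,
                        std::uint64_t* out, std::size_t n) {
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < n; ++i) {
        std::uint64_t sum = a[i] + carry;
        std::uint64_t c1 = (sum < carry);  // overflow from a[i] + carry
        sum += b[i];
        std::uint64_t c2 = (sum < b[i]);   // overflow from sum + b[i]
        out[i] = sum;
        carry = c1 | c2;                   // at most one of c1, c2 is set
    }
    return carry;
}

With uint8_t limbs the same loop would need eight times as many iterations, each still waiting on the previous iteration's carry.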
You may get better performance from larger types due to decreased loop overhead. However, the tradeoff here is better speed vs. less flexibility in choosing the size.
For example, if most of your numbers are, say, 5 bytes in length, switching to uint16_t would require an overhead of an extra byte (padding each number to three 16-bit words). That is a memory overhead of 20%. On the other hand, if we are talking about really large numbers, say, 50 bytes or more, the memory overhead would be much smaller, on the order of 2%, so the increase in speed would come at a much smaller cost.

std::vector to hold much more than 2^32 elements

std::vector::size returns a size_t so I guess it can hold up to 2^32 elements.
Is there a standard container that can hold many more elements, e.g. 2^64, or a way to tweak std::vector to be "indexed" by e.g. an unsigned long long?
Sure. Compile a 64-bit program. size_t will be 64 bits wide then.
But really, what you should be doing is take a step back, and consider why you need such a big vector. Because most likely, you don't, and there's a better way to solve whatever problem you're working on.
size_t doesn't have a predefined size, although it is often capped at 2^32 on 32-bit computers.
Since a std::vector must hold contiguous memory for all elements, you will run out of memory before exceeding that limit.
Compile your program for a 64 bit computer and you'll have more space.
Better still, reconsider if std::vector is appropriate. Why do you want to hold trillions of adjacent objects directly in memory?
Consider a std::map<unsigned long long, YourData> if you only want large indexes and aren't really trying to store trillions of objects.
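A minimal sketch of that sparse approach (YourData and the indexes here are placeholders): only the entries you actually touch consume memory.

#include <map>

struct YourData { double value; };

int main() {
    std::map<unsigned long long, YourData> sparse;
    sparse[1ULL << 40] = YourData{3.14}; // an index far beyond 2^32
    sparse[7]          = YourData{2.71};
    return 0;                            // two entries stored, not 2^40 slots
}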

Casting size_t to allow more elements in a std::vector

I need to store a huge number of elements in a std::vector (more than the 2^32-1 allowed by unsigned int) in 32 bits. As far as I know, this quantity is limited by the std::size_t unsigned int type. May I change this std::size_t by casting to an unsigned long? Would that resolve the problem?
If that's not possible, suppose I compile in 64 bits. Would that solve the problem without any modification?
size_t is a type that can hold the size of any allocatable chunk of memory. It follows that you can't allocate more memory than what fits in a size_t, and thus can't store more elements in any way.
Compiling in 64 bits will allow it, but realize that the array still needs to fit in memory. 2^32 is about 4 billion, so you are going to go over 4 * sizeof(element) GiB of memory. More than 8 GiB of RAM is still rare, so that does not look reasonable.
I suggest replacing the vector with the one from STXXL. It uses external storage, so your vector is not limited by amount of RAM. The library claims to handle terabytes of data easily.
(edit) Pedantic note: size_t needs to hold the size of the largest single object, not necessarily the size of all available memory. In segmented memory models it only needs to accommodate the offset when each object has to live in a single segment, but with different segments more memory may be accessible. It is even possible to use this on x86 with PAE, the "long" memory model. However, I've not seen anybody actually use it.
There are a number of things to say.
First, about the size of std::size_t on 32-bit systems and 64-bit systems, respectively. This is what the standard says about std::size_t (§18.2/6,7):
6 The type size_t is an implementation-defined unsigned integer type that is large enough to contain the size in bytes of any object.
7 [ Note: It is recommended that implementations choose types for ptrdiff_t and size_t whose integer conversion ranks (4.13) are no greater than that of signed long int unless a larger size is necessary to contain all the possible values. — end note ]
From this it follows that std::size_t will be at least 32 bits in size on a 32-bit system, and at least 64 bits on a 64-bit system. It could be larger, but that would obviously not make any sense.
Second, about the idea of type casting: For this to work, even in theory, you would have to cast (or rather: redefine) the type inside the implementation of std::vector itself, wherever it occurs.
Third, when you say you need this super-large vector "in 32 bits", does that mean you want to use it on a 32-bit system? In that case, as the others have pointed out already, what you want is impossible, because a 32-bit system simply doesn't have that much memory.
But, fourth, if what you want is to run your program on a 64-bit machine, and use only a 32-bit data type to refer to the number of elements, but possibly a 64-bit type to refer to the total size in bytes, then std::size_t is not relevant because that is used to refer to the total number of elements, and the index of individual elements, but not the size in bytes.
Finally, if you are on a 64-bit system and want to use something of extreme proportions that works like a std::vector, that is certainly possible. Systems with 32 GB, 64 GB, or even 1 TB of main memory are perhaps not extremely common, but definitely available.
However, to implement such a data type, it is generally not a good idea to simply allocate gigabytes of memory in one contiguous block (which is what a std::vector does), because of reasons like the following:
Unless the total size of the vector is determined once and for all at initialization time, the vector will be resized, and quite likely re-allocated, possibly many times as you add elements. Re-allocating an extremely large vector can be a time-consuming operation. [ I have added this item as an edit to my original answer. ]
The OS will have difficulties providing such a large portion of unfragmented memory, as other processes running in parallel require memory, too. [Edit: As correctly pointed out in the comments, this isn't really an issue on any standard OS in use today.]
On very large servers you also have tens of CPUs and typically NUMA-type memory architectures, where it is clearly preferable to work with relatively smaller chunks of memory, and have multiple threads (possibly each running on a different core) access various chunks of the vector in parallel.
Conclusions
A) If you are on a 32-bit system and want to use a vector that large, using disk-based methods such as the one suggested by @JanHudec is the only thing that is feasible.
B) If you have access to a large 64-bit system with tens or hundreds of GB, you should look into an implementation that divides the entire memory area into chunks. Essentially something that works like a std::vector<std::vector<T>>, where each nested vector represents one chunk. If all chunks are full, you append a new chunk, etc. It is straightforward to implement an iterator type for this, too. Of course, if you want to optimize this further to take advantage of multi-threading and NUMA features, it will get increasingly complex, but that is unavoidable.
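A minimal sketch of that chunked idea, with illustrative names and chunk size and without iterators, threading, or NUMA awareness:

#include <cstddef>
#include <cstdint>
#include <vector>

template <typename T, std::size_t ChunkSize = (std::size_t)1 << 20>
class ChunkedVector {
public:
    void push_back(const T& value) {
        if (chunks_.empty() || chunks_.back().size() == ChunkSize) {
            chunks_.emplace_back();            // start a new chunk
            chunks_.back().reserve(ChunkSize); // keep each allocation modest
        }
        chunks_.back().push_back(value);
    }

    T& operator[](std::uint64_t i) {
        return chunks_[i / ChunkSize][i % ChunkSize];
    }

    std::uint64_t size() const {
        if (chunks_.empty()) return 0;
        return (chunks_.size() - 1) * std::uint64_t(ChunkSize) + chunks_.back().size();
    }

private:
    std::vector<std::vector<T>> chunks_;
};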
A vector might be the wrong data structure for you. It requires storage in a single block of memory, which is limited by the size of size_t. This you can increase by compiling for 64 bit systems, but then you can't run on 32 bit systems which might be a requirement.
If you don't need vector's particular characteristics (particularly O(1) lookup and contiguous memory layout), another structure such as a std::list might suit you. It has no size limit beyond what the computer can physically handle, since it is a linked list rather than a conveniently-wrapped array.

Declaring large character array in c++

I am trying right now to declare a large character array. I am using the character array as a bitmap (as in a map of booleans, not the image file type). The following code generates a compilation error.
//This is code before main. I want these as globals.
unsigned const long bitmap_size = (ULONG_MAX/(sizeof(char)));
char bitmap[bitmap_size];
The error is overflow in array dimension. I recognize that I'm trying to have my process consume a lot of data and that there might be some limit in place that prevents me from doing so. I am curious as to whether I am making a syntax error or if I need to request more resources from the kernel. Also, I have no interest in creating a bitmap with some class. Thank you for your time.
EDIT
ULONG_MAX is very much dependent upon the machine you are using. On the particular machine I was compiling my code on, it was well over 4.2 billion. All in all, I wouldn't use that constant this way, at least not for the purpose of memory allocation.
ULONG_MAX/sizeof(char) is the same as ULONG_MAX, which is a very large number. So large, in fact, that you don't have room for it even in virtual memory (because ULONG_MAX is probably the number of bytes in your entire virtual memory).
You definitely need to rethink what you are trying to do.
It's impossible to declare an array that large on most systems -- on a 32-bit system, that array is 4 GB, which doesn't fit into the available address space, and on most 64-bit systems, it's 16 exabytes (16 million terabytes), which doesn't fit into the available address space there either (and, incidentally, may be more memory than exists on the entire planet).
Use malloc() to allocate large amounts of memory. But be realistic. :)
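For instance, a sketch along those lines (the size is made up; pick the number of flags you actually need rather than ULONG_MAX):

#include <cstddef>
#include <cstdlib>

int main() {
    const std::size_t num_flags = 100000000; // 100 million flags, an arbitrary example
    // ~100 MB on the heap, zero-initialized; no class involved.
    char* bitmap = static_cast<char*>(std::calloc(num_flags, 1));
    if (!bitmap) return 1;                   // huge allocations can fail at run time
    bitmap[12345] = 1;
    std::free(bitmap);
    return 0;
}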
As I understand it, the maximum size of an array in C++ is the largest integer the platform supports. It is likely that your long-typed bitmap_size constant exceeds that limit.