Right now I have a vector, std::vector<char> myVector(4), containing some combination drawn from a set of chars, say {#,#,O,*,%,$,!}. The set may be a little larger or smaller, but not by much, and the vector might not always have 4 members either; its length is constant for any one instance, though.
Now I am stuck trying to create a data structure that can use any of those combinations as an index into another vector.
in pseudo-code I am trying to accomplish:
SomeDataStructure['*']['#']['#']['O'] = someData
(someData is going to be a small class, but that shouldn't matter)
This is an operation critical piece that needs to run quickly, and will be run very often.
Some approaches I've tried to reason about were:
a 4-dimensional array, but I can't access those without numeric indices. Maybe some form of enumeration could solve this. Edit: would maps be a way to do this?
edit:
I resolved this using a map:
std::map<std::vector<char>, someData> myMap;
Since the number of possible characters is limited to 8, you can use an enumeration instead. You'd therefore only need 3-bits to represent each "character". You can pack several of these 3-bit "characters" into a short integer using bitfields. The resulting packed integer becomes the index into your vector<SomeData>.
The space occupied by this vector would be space_of_SomeData * 2^(3*number_of_spaces). If, for example, number_of_spaces is 4, this results in 4096*space_of_SomeData. This might result in some wasted memory space, but lookups and insertions should be very fast.
Here's some sample code:
#include <vector>
enum CharSet
{
ampersand,
pound,
letterOh,
percent,
dollar,
exclamation
};
struct CompositeIndex
{
union
{
struct // Bitfield
{
unsigned c0 : 3; // 3 bits
unsigned c1 : 3; // 3 bits
unsigned c2 : 3; // 3 bits
unsigned c3 : 3; // 3 bits
} chars;
unsigned int index;
};
};
unsigned int lookup(CharSet c0, CharSet c1, CharSet c2, CharSet c3)
{
CompositeIndex ci;
ci.index = 0; // clear the bits not covered by the four 3-bit fields
ci.chars.c0 = c0;
ci.chars.c1 = c1;
ci.chars.c2 = c2;
ci.chars.c3 = c3;
return ci.index;
}
typedef int SomeClass;
int main(int argc, char* argv[])
{
std::vector<SomeClass> vec(1 << 12); // 4 fields * 3 bits = 12 bits -> 4096 possible indices
vec[lookup(ampersand, percent, dollar, pound)] = 42;
}
If you must absolutely work with char characters, you can easily create a 256-element lookup table that quickly converts 'char' characters into CharSet values.
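For illustration, here is a minimal sketch of such a conversion table (the names charToCharSet and initCharTable are made up, and the character assignments simply mirror the CharSet enum above; adjust them to your real character set):

// 256-entry table; assumes the CharSet enum from the sample above.
// As a global it is zero-initialized, so unlisted characters map to ampersand (0).
CharSet charToCharSet[256];

void initCharTable()
{
    charToCharSet[(unsigned char)'&'] = ampersand;
    charToCharSet[(unsigned char)'#'] = pound;
    charToCharSet[(unsigned char)'O'] = letterOh;
    charToCharSet[(unsigned char)'%'] = percent;
    charToCharSet[(unsigned char)'$'] = dollar;
    charToCharSet[(unsigned char)'!'] = exclamation;
}

// The lookup can then be driven directly by the original chars, e.g.:
// vec[lookup(charToCharSet[(unsigned char)'%'], charToCharSet[(unsigned char)'#'],
//            charToCharSet[(unsigned char)'#'], charToCharSet[(unsigned char)'O'])] = 42;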
As already discussed by others, you can use a std::map<std::string, SomeData> or even (the possibly faster) std::map<std::array<char, 4>, SomeData, Comparator>. If the approximate frequency distribution of different character sequences is known, try inserting the most frequent patterns first into the map. Depending on the internal implementation of the map, this may speed up lookups for the most frequent patterns (they end up near the top of the underlying binary search tree).
Does the order of the characters itself have any bearing on what someData might be? If not (and I suspect that to be the case), then it sounds like what you really want is a hashtable matching strings to a small class. Hash functions are quick (O(1)) operations, so performance ought not to be a problem.
Take a look at the std::map class - it should meet your needs.
In C++, a char is a number (typically an 8-bit number). As such, you can theoretically do a 4-D array with those as the indices. The obvious problem with doing that will be that with a total of 4 bytes for indexing, your array ends up with 2^32 entries. If, for example, that someData occupies 32 bits, the array would occupy around 16 gigabytes (of which, apparently, only a minuscule percentage would really be used).
The obvious alternative would be to concatenate the individual characters together into a string, and use that as the key for a map:
std::map<std::string, SomeData_t> mymap;
mymap["*##O"] = someData;
Depending on how often you insert versus look up items, you could consider using an unordered_map instead. This typically gives slightly faster lookup in exchange for slightly slower insertion.
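For example, here is a hedged sketch of the unordered_map variant (SomeData_t is defined here only as a placeholder for the real payload type):

#include <string>
#include <unordered_map>

struct SomeData_t { int value; }; // placeholder payload

std::unordered_map<std::string, SomeData_t> mymap;

void demo()
{
    mymap["*##O"] = SomeData_t{42};   // insert
    auto it = mymap.find("*##O");     // average O(1) lookup
    if (it != mymap.end())
    {
        // use it->second here
    }
}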
I wasn't able to find how gmpxx stores the mpz_t structs under the hood. Thus the only way to get the most significant byte of a number stored as mpz_t is using the mpz_get_str method, but I would expect it to be very slow.
Do you know of a more effective (and simple) way of doing this?
I mean the 'most significant byte' of the number (which is in my case saved as an mpz_t) in binary. I.e. for 12345 (base 10) = 11000000111001 (base 2) it would be 11000000, no matter how gmpxx actually stores it.
Two functions to look at here:
size_t mpz_sizeinbase(mpz_t op, int base): this returns the length in a base, and for base=2, it gives the number of bits.
void mpz_tdiv_q_2exp (mpz_t q, const mpz_t n, mp_bitcnt_t b): for non-negative n this is equivalent to q = n >> b (the related mpz_tdiv_r_2exp keeps only the low b bits instead).
Combined, the operation you are looking for is a right shift by exactly sizeinbase - 8 bits:
size_t bit_length = mpz_sizeinbase(number, 2);
mpz_tdiv_q_2exp(most_significant_byte, number, bit_length - 8);
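Putting it together, here is a small self-contained sketch using the 12345 example from the question (error handling is omitted and very small numbers are only minimally guarded):

#include <gmp.h>
#include <cstdio>

int main()
{
    mpz_t number, top_byte;
    mpz_init_set_ui(number, 12345);   // 11000000111001 in binary
    mpz_init(top_byte);

    size_t bit_length = mpz_sizeinbase(number, 2);
    size_t shift = bit_length > 8 ? bit_length - 8 : 0;  // guard numbers shorter than 8 bits
    mpz_tdiv_q_2exp(top_byte, number, shift);            // top_byte = number >> shift

    gmp_printf("most significant byte: %Zd\n", top_byte); // prints 192 = 0b11000000

    mpz_clear(number);
    mpz_clear(top_byte);
    return 0;
}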
As a sidenote, the mpz_t struct is stored in "limbs", which are primitives that are chained together. These limbs can have leading 0's to make editing the number easier for small value changes - so accessing them directly is not recommended.
A limb means the part of a multi-precision number that fits in a single machine word. (We chose this word because a limb of the human body is analogous to a digit, only larger, and containing several digits.) Normally a limb is 32 or 64 bits. The C data type for a limb is mp_limb_t.
(Source: https://gmplib.org/manual/Nomenclature-and-Types.html#Nomenclature-and-Types)
You can create a union without padding (see the effect of #pragma pack) with 2 members: your struct and a byte, then assign a value to your struct, then read the value of the byte. However, I am not sure whether it fits your definition of MSB.
I have a program that takes in a file that is just a list of sets, each with their own integer identifier, and then sorts the sets by the number of members.
File format (the number after R is the ID):
R1
0123
0000
R2
0321
R3
0002
...
struct CodeBook {
string Residue; //stores the R lines
vector<string> CodeWords; //stores the lines between the R lines
};
vector<CodeBook> DATA;
With each file I run, the number of sets gets larger, and currently I am just storing everything in the huge vector DATA. My latest file is large enough that I've taken over the server's memory and am flowing over into swap. This will be the last file I process before I possibly switch to a more RAM-friendly algorithm. With that file, the number of sets is larger than an unsigned 32-bit int.
I can calculate how many there will be, and the number of sets is important for calculation purposes, so overflow is not an option. Going all the way up to an unsigned long long int is not an option either, because I've already pretty much maxed out memory usage.
How could I implement a variable length integer to more efficiently store everything so I can more efficiently calc everything?
Ex: small id ints get 1 or 2 bytes and the largest ints get 5 bytes
PS: Given the size of what I'm working with, speed is also a factor if it can be helped, but it's not the most important concern :/
Store everything in two huge vectors.
Define two structs with associated vectors:
struct u { int id; unsigned count; };
struct ull { int id; unsigned long long count; };
std::vector<u> u_vector;
std::vector<ull> ull_vector;
Read the count into an unsigned long long. If the count fits in an unsigned, store it in u_vector, otherwise in ull_vector. Sort both vectors; output u_vector first and ull_vector second.
Don't be tempted to try doing the same thing with unsigned char - the structure will be the same size as u (because of padding to make the id aligned).
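A hedged sketch of that dispatch (store is a made-up helper name; the struct definitions are repeated so the snippet stands alone):

#include <limits>
#include <vector>

struct u   { int id; unsigned count; };
struct ull { int id; unsigned long long count; };

void store(int id, unsigned long long count,
           std::vector<u>& u_vector, std::vector<ull>& ull_vector)
{
    if (count <= std::numeric_limits<unsigned>::max())
        u_vector.push_back({id, static_cast<unsigned>(count)});
    else
        ull_vector.push_back({id, count});
}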
If, say, a 32-bit integer is overflowing, instead of upgrading int to long, can we make use of some 40-bit type if we need a range only within 2^40, so that we save 24 (64 - 40) bits for every integer?
If so, how?
I have to deal with billions and space is a bigger constraint.
Yes, but...
It is certainly possible, but it is usually nonsensical (for any program that doesn't use billions of these numbers):
#include <stdint.h> // don't want to rely on something like long long
struct bad_idea
{
uint64_t var : 40;
};
Here, var will indeed have a width of 40 bits, at the expense of much less efficient generated code (it turns out that "much" is very much wrong -- the measured overhead is a mere 1-2%, see timings below), and usually to no avail. Unless you have a need for another 24-bit value (or an 8- and a 16-bit value) which you wish to pack into the same structure, alignment will forfeit anything that you may gain.
In any case, unless you have billions of these, the effective difference in memory consumption will not be noticeable (but the extra code needed to manage the bit field will be noticeable!).
Note:
The question has in the meantime been updated to reflect that indeed billions of numbers are needed, so this may be a viable thing to do, provided that you take measures not to lose the gains to structure alignment and padding, i.e. either by storing something else in the remaining 24 bits or by storing your 40-bit values in groups of 8 (or multiples thereof).
Saving three bytes a billion times is worthwhile, as it will require noticeably fewer memory pages and thus cause fewer cache and TLB misses, and above all page faults (a single page fault costing tens of millions of instructions).
While the above snippet does not make use of the remaining 24 bits (it merely demonstrates the "use 40 bits" part), something akin to the following will be necessary to really make the approach useful in the sense of preserving memory -- provided that you indeed have other "useful" data to put in the holes:
struct using_gaps
{
uint64_t var : 40;
uint64_t useful_uint16 : 16;
uint64_t char_or_bool : 8;
};
Structure size and alignment will be equal to a 64-bit integer, so nothing is wasted if you make e.g. an array of a billion such structures (even without using compiler-specific extensions). If you don't have use for an 8-bit value, you could also use a 48-bit and a 16-bit value (giving a bigger overflow margin).
Alternatively you could, at the expense of usability, put 8 40-bit values into a structure (least common multiple of 40 and 64 being 320 = 8*40). Of course then your code which accesses elements in the array of structures will become much more complicated (though one could probably implement an operator[] that restores the linear array functionality and hides the structure complexity).
Update:
Wrote a quick test suite, just to see what overhead the bitfields (and operator overloading with bitfield refs) would have. Posted code (due to length) at gcc.godbolt.org, test output from my Win7-64 machine is:
Running test for array size = 1048576
what alloc seq(w) seq(r) rand(w) rand(r) free
-----------------------------------------------------------
uint32_t 0 2 1 35 35 1
uint64_t 0 3 3 35 35 1
bad40_t 0 5 3 35 35 1
packed40_t 0 7 4 48 49 1
Running test for array size = 16777216
what alloc seq(w) seq(r) rand(w) rand(r) free
-----------------------------------------------------------
uint32_t 0 38 14 560 555 8
uint64_t 0 81 22 565 554 17
bad40_t 0 85 25 565 561 16
packed40_t 0 151 75 765 774 16
Running test for array size = 134217728
what alloc seq(w) seq(r) rand(w) rand(r) free
-----------------------------------------------------------
uint32_t 0 312 100 4480 4441 65
uint64_t 0 648 172 4482 4490 130
bad40_t 0 682 193 4573 4492 130
packed40_t 0 1164 552 6181 6176 130
What one can see is that the extra overhead of bitfields is negligible, but the operator overloading with a bitfield reference as a convenience thing is rather drastic (about a 3x increase) when accessing data linearly in a cache-friendly manner. On the other hand, on random access it barely even matters.
These timings suggest that simply using 64-bit integers would be better since they are still faster overall than bitfields (despite touching more memory), but of course they do not take into account the cost of page faults with much bigger datasets. It might look very different once you run out of physical RAM (I didn't test that).
You can quite effectively pack four 40-bit integers into a 160-bit struct like this:
struct Val4 {
char hi[4];
unsigned int low[4];
};
long getLong( const Val4 &pack, int ix ) {
int hi= pack.hi[ix]; // preserve sign into 32 bit
return long( (((unsigned long)hi) << 32) + (unsigned long)pack.low[ix]);
}
void setLong( Val4 &pack, int ix, long val ) {
pack.low[ix]= (unsigned)val;
pack.hi[ix]= (char)(val>>32);
}
These again can be used like this:
Val4 vals[SIZE];
long getLong( int ix ) {
return getLong( vals[ix>>2], ix&0x3 );
}
void setLong( int ix, long val ) {
setLong( vals[ix>>2], ix&0x3, val );
}
You might want to consider Variable-Length Encoding (VLE)
Presumably, you have to store a lot of those numbers somewhere (in RAM, on disk, send them over the network, etc.), and then take them one by one and do some processing.
One approach would be to encode them using VLE.
From Google's protobuf documentation (CreativeCommons licence)
Varints are a method of serializing integers using
one or more bytes. Smaller numbers take a smaller number of bytes.
Each byte in a varint, except the last byte, has the most significant
bit (msb) set – this indicates that there are further bytes to come.
The lower 7 bits of each byte are used to store the two's complement
representation of the number in groups of 7 bits, least significant
group first.
So, for example, here is the number 1 – it's a single byte, so the msb
is not set:
0000 0001
And here is 300 – this is a bit more complicated:
1010 1100 0000 0010
How do you figure out that this is 300? First you drop the msb from
each byte, as this is just there to tell us whether we've reached the
end of the number (as you can see, it's set in the first byte as there
is more than one byte in the varint)
Pros
If you have lots of small numbers, you'll probably use less than 40 bits per integer, on average. Possibly much less.
You are able to store bigger numbers (with more than 40 bits) in the future, without having to pay a penalty for the small ones
Cons
You pay an extra bit for each 7 significant bits of your numbers. That means a number with 40 significant bits will need 6 bytes. If most of your numbers have 40 significant bits, you are better off with a bit-field approach.
You will lose the ability to easily jump to a number given its index (you have to at least partially parse all previous elements in an array in order to access the current one).
You will need some form of decoding before doing anything useful with the numbers (although that is true for other approaches as well, like bit fields)
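For reference, a minimal sketch of the varint scheme quoted above (unsigned values only; protobuf handles signed values separately via zigzag encoding, which is not shown here):

#include <cstddef>
#include <cstdint>
#include <vector>

void encode_varint(uint64_t v, std::vector<uint8_t>& out)
{
    while (v >= 0x80) {
        out.push_back(uint8_t(v) | 0x80); // low 7 bits, msb set = more bytes follow
        v >>= 7;
    }
    out.push_back(uint8_t(v));            // last byte, msb clear
}

uint64_t decode_varint(const uint8_t* p, size_t& consumed)
{
    uint64_t v = 0;
    unsigned shift = 0;
    size_t i = 0;
    uint8_t b;
    do {
        b = p[i++];
        v |= uint64_t(b & 0x7F) << shift; // least significant group first
        shift += 7;
    } while (b & 0x80);
    consumed = i;
    return v;
}

Encoding 300 with this sketch produces exactly the two bytes 1010 1100 0000 0010 shown in the quote.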
(Edit: First of all - what you want is possible, and makes sense in some cases; I have had to do similar things when I tried to do something for the Netflix challenge and only had 1GB of memory; Second - it is probably best to use a char array for the 40-bit storage to avoid any alignment issues and the need to mess with struct packing pragmas; Third - this design assumes that you're OK with 64-bit arithmetic for intermediate results, it is only for large array storage that you would use Int40; Fourth: I don't get all the suggestions that this is a bad idea, just read up on what people go through to pack mesh data structures and this looks like child's play by comparison).
What you want is a struct that is only used for storing data as 40-bit ints but implicitly converts to int64_t for arithmetic. The only trick is doing the sign extension from 40 to 64 bits right. If you're fine with unsigned ints, the code can be even simpler. This should be able to get you started.
#include <cstdint>
#include <iostream>
// Only intended for storage, automatically promotes to 64-bit for evaluation
struct Int40
{
Int40(int64_t x) { set(static_cast<uint64_t>(x)); } // implicit constructor
operator int64_t() const { return get(); } // implicit conversion to 64-bit
private:
void set(uint64_t x)
{
setb<0>(x); setb<1>(x); setb<2>(x); setb<3>(x); setb<4>(x);
};
int64_t get() const
{
return static_cast<int64_t>(getb<0>() | getb<1>() | getb<2>() | getb<3>() | getb<4>() | signx());
};
uint64_t signx() const
{
return (data[4] >> 7) * (uint64_t(((1 << 25) - 1)) << 39);
};
template <int idx> uint64_t getb() const
{
return static_cast<uint64_t>(data[idx]) << (8 * idx);
}
template <int idx> void setb(uint64_t x)
{
data[idx] = (x >> (8 * idx)) & 0xFF;
}
unsigned char data[5];
};
int main()
{
Int40 a = -1;
Int40 b = -2;
Int40 c = 1 << 16;
std::cout << "sizeof(Int40) = " << sizeof(Int40) << std::endl;
std::cout << a << "+" << b << "=" << (a+b) << std::endl;
std::cout << c << "*" << c << "=" << (c*c) << std::endl;
}
Here is the link to try it live: http://rextester.com/QWKQU25252
You can use a bit-field structure, but it's not going to save you any memory:
struct my_struct
{
unsigned long long a : 40;
unsigned long long b : 24;
};
You can squeeze any multiple of 8 such 40-bit variables into one structure:
struct bits_16_16_8
{
unsigned short x : 16;
unsigned short y : 16;
unsigned short z : 8;
};
struct bits_8_16_16
{
unsigned short x : 8;
unsigned short y : 16;
unsigned short z : 16;
};
struct my_struct
{
struct bits_16_16_8 a1;
struct bits_8_16_16 a2;
struct bits_16_16_8 a3;
struct bits_8_16_16 a4;
struct bits_16_16_8 a5;
struct bits_8_16_16 a6;
struct bits_16_16_8 a7;
struct bits_8_16_16 a8;
};
This will save you some memory (in comparison with using 8 "standard" 64-bit variables), but you will have to split every operation (and in particular arithmetic ones) on each of these variables into several operations.
So the memory-optimization will be "traded" for runtime-performance.
As the comments suggest, this is quite a task.
Probably an unnecessary hassle unless you want to save a lot of RAM - then it makes much more sense. (The RAM saving would be the sum total of bits saved across millions of long values stored in RAM.)
I would consider using an array of 5 bytes/char (5 * 8 bits = 40 bits). Then you will need to shift bits from your (overflowed int - hence a long) value into the array of bytes to store them.
To use the values, then shift the bits back out into a long and you can use the value.
Then your RAM and file storage of the value will be 40 bits (5 bytes), BUT you must consider data alignment if you plan to use a struct to hold the 5 bytes. Let me know if you need elaboration on the bit shifting and the data-alignment implications.
Similarly, you could use the 64 bit long, and hide other values (3 chars perhaps) in the residual 24 bits that you do not want to use. Again - using bit shifting to add and remove the 24 bit values.
Another variation that may be helpful would be to use a structure:
typedef struct TRIPLE_40 {
uint32_t low[3];
uint8_t hi[3];
uint8_t padding;
} TRIPLE_40;
Such a structure would take 16 bytes and, if 16-byte aligned, would fit entirely within a single cache line. While identifying which of the parts of the structure to use may be more expensive than it would be if the structure held four elements instead of three, accessing one cache line may be much cheaper than accessing two. If performance is important, one should use some benchmarks, since some machines may perform a divmod-3 operation cheaply and have a high cost per cache-line fetch, while others might have cheaper memory access and a more expensive divmod-3.
If you have to deal with billions of integers, I'd try to encapsulate arrays of 40-bit numbers instead of single 40-bit numbers. That way, you can test different array implementations (e.g. an implementation that compresses data on the fly, or maybe one that stores less-used data to disk) without changing the rest of your code.
Here's a sample implementation (http://rextester.com/SVITH57679):
#include <cstdint> // int64_t
#include <cstdlib> // malloc, free
#include <cstring> // memcpy
class Int64Array
{
char* buffer;
public:
static const int BYTE_PER_ITEM = 5;
Int64Array(size_t s)
{
buffer=(char*)malloc(s*BYTE_PER_ITEM);
}
~Int64Array()
{
free(buffer);
}
class Item
{
char* dataPtr;
public:
Item(char* dataPtr) : dataPtr(dataPtr){}
inline operator int64_t()
{
int64_t value=0;
memcpy(&value, dataPtr, BYTE_PER_ITEM); // Assumes little endian byte order!
return value;
}
inline Item& operator = (int64_t value)
{
memcpy(dataPtr, &value, BYTE_PER_ITEM); // Assumes little endian byte order!
return *this;
}
};
inline Item operator[](size_t index)
{
return Item(buffer+index*BYTE_PER_ITEM);
}
};
Note: The memcpy-conversion from 40-bit to 64-bit is basically undefined behavior, as it assumes little-endianness. It should work on x86 platforms, though.
Note 2: Obviously, this is proof-of-concept code, not production-ready code. To use it in real projects, you'd have to add (among other things):
error handling (malloc can fail!)
copy constructor (e.g. by copying data, add reference counting or by making the copy constructor private)
move constructor
const overloads
STL-compatible iterators
bounds checks for indices (in debug build)
range checks for values (in debug build)
asserts for the implicit assumptions (little-endianness)
As it is, Item has reference semantics, not value semantics, which is unusual for operator[]; you could probably work around that with some clever C++ type-conversion tricks
All of those should be straightforward for a C++ programmer, but they would make the sample code much longer without making it clearer, so I've decided to omit them.
I'll assume that
this is C, and
you need a single, large array of 40 bit numbers, and
you are on a machine that is little-endian, and
your machine is smart enough to handle alignment
you have defined size to be the number of 40-bit numbers you need
unsigned char hugearray[5*size+3]; // +3 avoids overfetch of last element
__int64 get_huge(unsigned index)
{
__int64 t;
t = *(__int64 *)(&hugearray[index*5]);
if (t & 0x0000008000000000LL)
t |= 0xffffff0000000000LL;
else
t &= 0x000000ffffffffffLL;
return t;
}
void set_huge(unsigned index, __int64 value)
{
unsigned char *p = &hugearray[index*5];
*(unsigned int *)p = (unsigned int)value; // low 32 bits
p[4] = (value >> 32);
}
It may be faster to handle the get with two shifts.
__int64 get_huge(unsigned index)
{
return (((*(__int64 *)(&hugearray[index*5])) << 24) >> 24);
}
For the case of storing some billions of 40-bit signed integers, and assuming 8-bit bytes, you can pack 8 40-bit signed integers in a struct (in the code below using an array of bytes to do that), and, since this struct is ordinarily aligned, you can then create a logical array of such packed groups, and provide ordinary sequential indexing of that:
#include <limits.h> // CHAR_BIT
#include <stdint.h> // int64_t
#include <stdlib.h> // div, div_t, ptrdiff_t
#include <vector> // std::vector
#define STATIC_ASSERT( e ) static_assert( e, #e )
namespace cppx {
using Byte = unsigned char;
using Index = ptrdiff_t;
using Size = Index;
// For non-negative values:
auto roundup_div( const int64_t a, const int64_t b )
-> int64_t
{ return (a + b - 1)/b; }
} // namespace cppx
namespace int40 {
using cppx::Byte;
using cppx::Index;
using cppx::Size;
using cppx::roundup_div;
using std::vector;
STATIC_ASSERT( CHAR_BIT == 8 );
STATIC_ASSERT( sizeof( int64_t ) == 8 );
const int bits_per_value = 40;
const int bytes_per_value = bits_per_value/8;
struct Packed_values
{
enum{ n = sizeof( int64_t ) };
Byte bytes[n*bytes_per_value];
auto value( const int i ) const
-> int64_t
{
int64_t result = 0;
for( int j = bytes_per_value - 1; j >= 0; --j )
{
result = (result << 8) | bytes[i*bytes_per_value + j];
}
const int64_t first_negative = int64_t( 1 ) << (bits_per_value - 1);
if( result >= first_negative )
{
result = (int64_t( -1 ) << bits_per_value) | result;
}
return result;
}
void set_value( const int i, int64_t value )
{
for( int j = 0; j < bytes_per_value; ++j )
{
bytes[i*bytes_per_value + j] = value & 0xFF;
value >>= 8;
}
}
};
STATIC_ASSERT( sizeof( Packed_values ) == bytes_per_value*Packed_values::n );
class Packed_vector
{
private:
Size size_;
vector<Packed_values> data_;
public:
auto size() const -> Size { return size_; }
auto value( const Index i ) const
-> int64_t
{
const auto where = div( i, Packed_values::n );
return data_[where.quot].value( where.rem );
}
void set_value( const Index i, const int64_t value )
{
const auto where = div( i, Packed_values::n );
data_[where.quot].set_value( where.rem, value );
}
Packed_vector( const Size size )
: size_( size )
, data_( roundup_div( size, Packed_values::n ) )
{}
};
} // namespace int40
#include <iostream>
auto main() -> int
{
using namespace std;
cout << "Size of struct is " << sizeof( int40::Packed_values ) << endl;
int40::Packed_vector values( 25 );
for( int i = 0; i < values.size(); ++i )
{
values.set_value( i, i - 10 );
}
for( int i = 0; i < values.size(); ++i )
{
cout << values.value( i ) << " ";
}
cout << endl;
}
Yes, you can do that, and it will save some space for large quantities of numbers.
You need a class that contains a std::vector of an unsigned integer type.
You will need member functions to store and to retrieve an integer. For example, if you want to store 64 integers of 40 bits each, use a vector of 40 integers of 64 bits each. Then you need a method that stores an integer with an index in [0, 63] and a method to retrieve such an integer.
These methods will execute some shift operations, and also some binary | and & .
I am not adding any more details here yet because your question is not very specific. Do you know how many integers you want to store? Do you know it during compile time? Do you know it when the program starts? How should the integers be organized? Like an array? Like a map? You should know all this before trying to squeeze the integers into less storage.
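Assuming, for instance, that the element count is known at construction time and the values are unsigned, a hedged sketch of that shift-and-mask storage could look like this (class and member names are invented; sign handling is left out):

#include <cstddef>
#include <cstdint>
#include <vector>

class Packed40
{
    std::vector<uint64_t> words_;
    std::size_t size_;
    static const unsigned BITS = 40;
public:
    explicit Packed40(std::size_t n)
        : words_((n * BITS + 63) / 64), size_(n) {}

    std::size_t size() const { return size_; }

    uint64_t get(std::size_t i) const
    {
        const std::size_t bit  = i * BITS;
        const std::size_t word = bit / 64;
        const unsigned    off  = bit % 64;
        uint64_t v = words_[word] >> off;
        if (off > 64 - BITS)                           // value straddles two words
            v |= words_[word + 1] << (64 - off);
        return v & ((uint64_t(1) << BITS) - 1);
    }

    void set(std::size_t i, uint64_t v)
    {
        const uint64_t mask = (uint64_t(1) << BITS) - 1;
        v &= mask;
        const std::size_t bit  = i * BITS;
        const std::size_t word = bit / 64;
        const unsigned    off  = bit % 64;
        words_[word] = (words_[word] & ~(mask << off)) | (v << off);
        if (off > 64 - BITS) {                         // spill into the next word
            const unsigned spill   = off - (64 - BITS);
            const uint64_t hi_mask = (uint64_t(1) << spill) - 1;
            words_[word + 1] = (words_[word + 1] & ~hi_mask) | (v >> (BITS - spill));
        }
    }
};

Storage is 40 bits per element plus at most one partially used trailing word.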
There are quite a few answers here covering implementation, so I'd like to talk about architecture.
We usually expand 32-bit values to 64-bit values to avoid overflowing because our architectures are designed to handle 64-bit values.
Most architectures are designed to work with integers whose size is a power of 2 because this makes the hardware vastly simpler. Tasks such as caching are much simpler this way: there are a large number of divisions and modulus operations which can be replaced with bit masking and shifts if you stick to powers of 2.
As an example of just how much this matters, the C++11 specification defines multithreaded data races in terms of "memory locations." A memory location is defined in section 1.7.3:
A memory location is either an object of scalar type or a maximal
sequence of adjacent bit-fields all having non-zero width.
In other words, if you use C++'s bitfields, you have to do all of your multithreading carefully. Two adjacent bitfields must be treated as the same memory location, even if you wish computations across them could be spread across multiple threads. This is very unusual for C++, so likely to cause developer frustration if you have to worry about it.
Most processors have a memory architecture which fetches 32-bit or 64-bit blocks of memory at a time. Thus use of 40-bit values will have a surprising number of extra memory accesses, dramatically affecting run-time. Consider the alignment issues:
40-bit word to access: 32-bit accesses 64bit-accesses
word 0: [0,40) 2 1
word 1: [40,80) 2 2
word 2: [80,120) 2 2
word 3: [120,160) 2 2
word 4: [160,200) 2 2
word 5: [200,240) 2 2
word 6: [240,280) 2 2
word 7: [280,320) 2 1
On a 64 bit architecture, one out of every 4 words will be "normal speed." The rest will require fetching twice as much data. If you get a lot of cache misses, this could destroy performance. Even if you get cache hits, you are going to have to unpack the data and repack it into a 64-bit register to use it (which might even involve a difficult to predict branch).
It is entirely possible this is worth the cost
There are situations where these penalties are acceptable. If you have a large amount of memory-resident data which is well indexed, you may find the memory savings worth the performance penalty. If you do a large amount of computation on each value, you may find the costs are minimal. If so, feel free to implement one of the above solutions. However, here are a few recommendations.
Do not use bitfields unless you are ready to pay their cost. For example, if you have an array of bitfields, and wish to divide it up for processing across multiple threads, you're stuck. By the rules of C++11, the bitfields all form one memory location, so may only be accessed by one thread at a time (this is because the method of packing the bitfields is implementation defined, so C++11 can't help you distribute them in a non-implementation defined manner)
Do not use a structure containing a 32-bit integer and a char to make 40 bits. Most processors will enforce alignment and you won't save a single byte.
Do use homogeneous data structures, such as an array of chars or an array of 64-bit integers. It is far easier to get the alignment correct. (And you also retain control of the packing, which means you can divide an array up amongst several threads for computation if you are careful)
Do design separate solutions for 32-bit and 64-bit processors, if you have to support both platforms. Because you are doing something very low level and very ill-supported, you'll need to custom tailor each algorithm to its memory architecture.
Do remember that multiplication of 40-bit numbers is different from multiplication of 64-bit expansions of 40-bit numbers reduced back to 40-bits. Just like when dealing with the x87 FPU, you have to remember that marshalling your data between bit-sizes changes your result.
This begs for streaming in-memory lossless compression. If this is for a Big Data application, dense packing tricks are tactical solutions at best for what seems to require fairly decent middleware or system-level support. They'd need thorough testing to make sure one is able to recover all the bits unharmed. And the performance implications are highly non-trivial and very hardware-dependent because of interference with the CPU caching architecture (e.g. cache lines vs packing structure). Someone mentioned complex meshing structures : these are often fine-tuned to cooperate with particular caching architectures.
It's not clear from the requirements whether the OP needs random access. Given the size of the data it's more likely one would only need local random access on relatively small chunks, organised hierarchically for retrieval. Even the hardware does this at large memory sizes (NUMA). Like lossless movie formats show, it should be possible to get random access in chunks ('frames') without having to load the whole dataset into hot memory (from the compressed in-memory backing store).
I know of one fast database system (kdb from KX Systems, to name one, but I know there are others) that can handle extremely large datasets by seamlessly memory-mapping them from backing store. It has the option to transparently compress and expand the data on the fly.
If what you really want is an array of 40 bit integers (which obviously you can't have), I'd just combine one array of 32 bit and one array of 8 bit integers.
To read a value x at index i:
uint64_t x = (((uint64_t) array8 [i]) << 32) + array32 [i];
To write a value x to index i:
array8 [i] = x >> 32; array32 [i] = x;
Obviously nicely encapsulated into a class using inline functions for maximum speed.
There is one situation where this is suboptimal, and that is when you do truly random access to many items, so that each access to an int array would be a cache miss - here you would get two cache misses every time. To avoid this, define a 32-byte struct containing an array of six uint32_t, an array of six uint8_t, and two unused bytes (42 2/3 bits per number); the code to access an item is slightly more complicated, but both components of the item are in the same cache line.
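A hedged sketch of that 32-byte block layout (names invented; bounds checks and encapsulation omitted):

#include <cstddef>
#include <cstdint>

struct Block40x6
{
    uint32_t low[6];   // low 32 bits of six values
    uint8_t  hi[6];    // high 8 bits of six values
    uint8_t  pad[2];   // unused, keeps the block at 32 bytes
};

inline uint64_t get40(const Block40x6* blocks, std::size_t i)
{
    const Block40x6& b = blocks[i / 6];
    const std::size_t j = i % 6;
    return (uint64_t(b.hi[j]) << 32) | b.low[j];
}

inline void set40(Block40x6* blocks, std::size_t i, uint64_t x)
{
    Block40x6& b = blocks[i / 6];
    const std::size_t j = i % 6;
    b.low[j] = uint32_t(x);
    b.hi[j]  = uint8_t(x >> 32);
}

Both halves of element i live in the same 32-byte block, so a single aligned cache line covers them.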
I am looking for a string container suited to a large number of strings (> 10^9). Strings have variable length. It must be fast for insertions and lookups and have frugal memory use. Strings are unordered when the container is filled. Average string length is about 10 bytes. Lookup is done on the exact string value. Erasability: optional. N is unknown in advance. For a 64-bit architecture. Use case: think of the associative arrays in AWK.
A std::map<std::string, ...> has about 20-40 bytes of overhead per string, and each insertion calls one malloc (or two). So it is neither fast nor frugal.
Can someone point me to a C/C++ library, a data structure or a paper?
Relevant -- Comparison of Hash Table Libraries
EDIT
I've removed "big data", bumped N to larger value, clarified requirements.
There is no silver bullet, but a radix tree gives the advantages of a trie (fast lookup and insert, at least asymptotically) with better space consumption.
However - both are considered not "cache efficient" - which might be significant especially if iteration over the data is required at some point.
For your problem, a pointer on a 64-bit machine nearly matches the length of your data. So using multiple pointers per string in your problem (average length less than 10 bytes) would make the size of the data structure dominate the size of your input.
One general way to deal with that is to not use pointers to represent your strings. A specialized representation using a 32bit offset into a large page where all your strings get stored would halve the pointer memory requirements for you, at the cost of needing to do an addition to a pointer to retrieve your string.
Edit: Below is a sample (untested) implementation of such a representation (using struct for the sake of simplicity, actual implementation would of course only make the user interface public). The representation assumes a hash table insertion, hence leaving room for a next_. Note that the offsets are scaled by size of hash_node to allow for representation in a 32bit offset.
struct hash_node {
uint32_t next_;
char * str () { return (char *)(&next_ + 1); }
const char * str () const { return (const char *)(&next_ + 1); }
};
struct hash_node_store {
std::vector<hash_node> page_; /* holds the allocated memory for nodes */
uint32_t free_;
hash_node * ptr (uint32_t offset) {
if (offset == 0) return 0;
return &page_[offset-1];
}
uint32_t allocate (const char *str) {
hash_node *hn = ptr(free_);
uint32_t len = strlen(str) + 1;
uint32_t node_size =
1 + (len / sizeof(hash_node)) + !!(len % sizeof(hash_node));
strcpy(hn->str(), str);
free_ += node_size;
return 1 + (hn - &page_[0]);
}
};
A hash table would contain a node store, and a hash bucket vector.
struct hash_table {
hash_node_store store_;
std::vector<uint32_t> table_; /* holds allocated memory for buckets */
uint32_t hash_func (const char *s) { /* ... */ }
uint32_t find_at (uint32_t node_offset, const char *str);
bool insert_at (uint32_t &node_offset, const char *str);
bool insert (const char *str) {
uint32_t bucket = hash_func(str) % table_.size();
return insert_at(table_[bucket], str);
}
bool find (const char *str) {
uint32_t bucket = hash_func(str) % table_.size();
return find_at(table_[bucket], str);
}
};
Where find_at and insert_at are just simple functions implemented in the expected way.
uint32_t hash_table::find_at (uint32_t node_offset, const char *str) {
hash_node *hn = store_.ptr(node_offset);
while (hn) {
if (strcmp(hn->str(), str) == 0) break;
node_offset = hn->next_;
hn = store_.ptr(node_offset);
}
return node_offset;
}
bool hash_table::insert_at (uint32_t &node_offset, const char *str) {
if (! find_at(node_offset, str)) {
uint32_t new_node = store_.allocate(str);
store_.ptr(new_node)->next_ = node_offset;
node_offset = new_node;
return true;
}
return false;
}
As you're only inserting values, the string data itself can be concatenated as it's inserted - each with a delimiter character such as NUL. The character offset into that single buffer uniquely identifies the string. This means that sets of strings that shared a common substring will each be completely redundantly individually specified, but countering that no effort will be spent trying to find or encoding such factoring: that could backfire for highly unrelated string values (e.g. random text).
To find the strings, a hash table could be used. Given your aim of avoiding frequent dynamic memory allocations, to handle collisions efficiently you'd need to use displacement lists: the idea is that when inserting a string that hashes to an already used bucket, you add an offset (wrapping around the table if necessary) and try that other bucket, continuing until an empty bucket is found. This means you need a list of displacements to try: you can hand-code a finite list (or lists) to get you started, or even nest loops over a "big displacement" list whose values are added to those from a "small displacement" list until an empty bucket is found; e.g. two hand-coded lists of 10 displacements yield 100 combinations. (Alternative hashing algorithms can be used instead of, or combined with, displacement lists.) You do need to have a reasonable ratio of total to used buckets though... I'd expect something around 1.2 to work ok typically, with larger values prioritising speed over space - you could populate your system with sample data and tune to taste.
So, the space requirement is:
total_bytes_of_string_data + N delimiters + total_buckets * sizeof(string_offset)
Where sizeof(string_offset) probably needs 8 bytes as 10^9 * 10 is already more than 2^32.
For 10^9 strings of ~10 characters and 1.2*10^9 buckets, this is around 10^9 * (10+1) + 1.2*10^9 * 8 bytes = 20.6*10^9 bytes, or about 19.1 GB.
It's worth noting that 64 bit virtual address space means you can safely allocate much more space for the concatenated string data and hash table than you actually need, and only those pages actually accessed will require virtual memory (initially physical memory, but it could later be swapped to disk through normal virtual memory mechanisms).
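To make the scheme concrete, here is a hedged, toy-sized sketch (all names are invented, the displacement list is far too short for real use, there is no growth or deletion, and mapped values are omitted):

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

class StringSet
{
    std::string pool_;                 // concatenated, NUL-delimited string data
    std::vector<uint64_t> table_;      // offset+1 into pool_, 0 = empty bucket
    static const int DISP[4];          // hand-coded displacement list

    static uint64_t hash(const std::string& s)
    {
        uint64_t h = 14695981039346656037ULL;            // FNV-1a
        for (unsigned char c : s) { h ^= c; h *= 1099511628211ULL; }
        return h;
    }
    bool match(uint64_t slot, const std::string& s) const
    {
        return pool_.compare(table_[slot] - 1, s.size() + 1,
                             s.c_str(), s.size() + 1) == 0;
    }
public:
    explicit StringSet(std::size_t buckets) : table_(buckets, 0) {}

    bool insert(const std::string& s)                     // true if newly inserted
    {
        const uint64_t h = hash(s) % table_.size();
        for (int d : DISP) {
            const uint64_t slot = (h + d) % table_.size();
            if (table_[slot] == 0) {
                table_[slot] = pool_.size() + 1;
                pool_.append(s).push_back('\0');
                return true;
            }
            if (match(slot, s)) return false;             // already present
        }
        return false;  // table too full for this toy displacement list
    }
};
const int StringSet::DISP[4] = {0, 1, 7, 43};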
Discussion
There's no way to guarantee reduction in the string memory usage without assumptions / insights about repetitions in the string data or character set used.
If all the insertions were followed by a huge number of searches, sorting the string data and using binary searches would be an ideal solution. But, for quick insertions interspersed with searches the above is reasonable.
You could also have an index based around balanced binary trees, but to avoid memory allocations for each insertion you'd need to group a lot of nodes into one memory page and manually manage the ordering and splitting thereof on a less granular level: painful to implement. There might be a library doing it already but I haven't heard of one.
You've added "associative arrays in AWK" as an example of what this could be used for. You could simply embed each mapped-to value immediately after its string key in the concatenated data.
Is a (low) false positive rate acceptable? If so, then Bloom filters would be an option. If you'd be content with a false positive rate of one in a million, or 2^(-20), you'd want to use a buffer size in bits of around 30 times the number of strings you expect, or 3*10^10 bits. That's less than 4GB. You'd also need around 20 independent hash functions.
If you can't accept false positives, you should consider putting a Bloom filter in front of whatever other solution you build, to weed out most negatives really quickly.
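If you go that route, here is a hedged sketch of such a filter, using double hashing to derive the k hash functions (names invented; not tuned, and no persistence):

#include <cstdint>
#include <functional>
#include <string>
#include <vector>

class BloomFilter
{
    std::vector<bool> bits_;
    int k_;
    static uint64_t h1(const std::string& s) { return std::hash<std::string>()(s); }
    static uint64_t h2(const std::string& s)
    {
        uint64_t h = 14695981039346656037ULL;             // FNV-1a
        for (unsigned char c : s) { h ^= c; h *= 1099511628211ULL; }
        return h | 1;                                      // keep it odd
    }
public:
    BloomFilter(uint64_t n_bits, int k) : bits_(n_bits), k_(k) {}

    void add(const std::string& s)
    {
        const uint64_t a = h1(s), b = h2(s);
        for (int i = 0; i < k_; ++i)
            bits_[(a + i * b) % bits_.size()] = true;
    }
    bool possibly_contains(const std::string& s) const
    {
        const uint64_t a = h1(s), b = h2(s);
        for (int i = 0; i < k_; ++i)
            if (!bits_[(a + i * b) % bits_.size()]) return false;
        return true;                                       // false positives possible
    }
};

With the sizing above it would be constructed as BloomFilter filter(30000000000ULL, 20), i.e. about 3.5 GB of bits.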
I want to define my own datatype that can hold a single one of six possible values, in order to learn more about memory management in C++. In numbers, I want to be able to hold 0 through 5. In binary, three bits would suffice (101 = 5), although some values (6 and 7) won't be used. The datatype should also consume as little memory as possible.
I'm not sure how to accomplish this. First, I tried an enum with defined values for all the fields. As far as I know, the values are written in hex there, so one hex digit should allow me to store 0 through 15. But comparing it to a char (with sizeof) showed that it is 4 times the size of a char, and a char holds 0 through 255 if I'm not mistaken.
#include <iostream>
enum Foo
{
a = 0x0,
b = 0x1,
c = 0x2,
d = 0x3,
e = 0x4,
f = 0x5,
};
int main()
{
Foo myfoo = a;
char mychar = 'a';
std::cout << sizeof(myfoo); // prints 4
std::cout << sizeof(mychar); // prints 1
return 1;
}
I've clearly misunderstood something, but I fail to see what, so I turn to SO. :)
Also, when writing this post I realised that I clearly lack some parts of the vocabulary. I've made this post a community wiki, please edit it so I can learn the correct words for everything.
A char is the smallest possible type.
If you happen to know that you need several such 3-bit values in a single place, you can use a structure with bitfield syntax:
struct foo {
unsigned int val1:3;
unsigned int val2:3;
};
and hence get 2 of them within one byte. In theory you could pack 10 such fields into a 32-bit "int" value.
C++0x will contain strongly typed enumerations where you can specify the underlying datatype (in your example char), but current C++ does not support this. The standard is not clear about the use of a char here (the examples are with int, short and long), but it mentions the underlying integral type and that would include char as well.
As of today Neil Butterworth's answer to create a class for your problem seems the most elegant, as you can even extend it to contain a nested enumeration if you want symbolical names for the values.
C++ does not express units of memory smaller than bytes. If you're producing them one at a time, that's the best you can do. Your own example works well. If you need just a few, you can use bit-fields as Alnitak suggests. If you're planning on allocating them one at a time, you're even worse off; most allocators hand out memory in larger chunks, 16 bytes being a common minimum.
Another choice might be to wrap std::bitset to do your bidding. This will waste very little space, if you need many such values, only about 1 bit for every 8.
If you think about your problem as a number expressed in base 6, and convert that number to base 2, possibly using an unlimited-precision integer (for example GMP), you won't waste any bits at all.
This assumes, of course, that your values have a uniform, random distribution. If they follow a different distribution, your best bet will be general compression of the first example, with something like gzip.
You can store values smaller than 8 or 32 bits. You just need to pack them into a struct (or class) and use bit fields.
For example:
struct example
{
unsigned int a : 3; //<Three bits, can be 0 through 7.
bool b : 1; //<One bit, stores 0 or 1.
unsigned int c : 10; //<Ten bits, can be 0 through 1023.
unsigned int d : 19; //<19 bits, can be 0 through 524287.
};
In most cases, your compiler will round up the total size of your structure to 32 bits on a 32 bit platform. The other problem is, like you pointed out, that your values may not have a power of two range. This will make for wasted space. If you read the entire struct as one number, you will find values that will be impossible to set, if your input ranges aren't all powers of 2.
Another feature you may find interesting is a union. They work like a struct, but share memory. So if you write to one field it overwrites the others.
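For example, a minimal union where both members share the same four bytes, so writing one overwrites the other:

#include <cstdint>

union Shared
{
    uint32_t as_int;      // writing this...
    uint8_t  as_bytes[4]; // ...overwrites these, and vice versa
};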
Now, if you are really tight for space, and you want to push each bit to the maximum, there is a simple encoding method. Let's say you want to store 3 numbers, each can be from 0 to 5. Bit fields are wasteful, because if you use 3 bits each, you'll waste some values (i.e. you could never set 6 or 7, even though you have room to store them). So, lets do an example:
//Here are three example values, each can be from 0 to 5:
const int one = 3, two = 4, three = 5;
To pack them together most efficiently, we should think in base 6 (since each value is from 0-5). So packed into the smallest possible space is:
//This packs all the values into one int, from 0 - 215.
//pack could be any value from 0 - 215. There are no 'wasted' numbers.
int pack = one + (6 * two) + (6 * 6 * three);
See how it looks like we're encoding in base six? Each number is multiplied by its place value, 6^n, where n is the place (starting at 0).
Then to decode:
const int one = pack % 6;
pack /= 6;
const int two = pack % 6;
pack /= 6;
const int three = pack;
These schemes are extremely handy when you have to encode some fields in a bar code or in an alphanumeric sequence for human typing. Saving those few partial bits can make a huge difference. Also, the fields don't all have to have the same range. If one field is from 0 through 7, you'd use 8 instead of 6 in the proper place. There is no requirement that all fields have the same range.
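As a hedged illustration of that mixed-range case (the field ranges here are chosen arbitrarily):

#include <iostream>

int main()
{
    const int a = 4;   // 0..5  -> radix 6
    const int b = 7;   // 0..7  -> radix 8
    const int c = 2;   // 0..2  -> radix 3

    const int pack = a + 6 * b + 6 * 8 * c;   // 0..143, no wasted codes

    const int ra = pack % 6;
    const int rest = pack / 6;
    const int rb = rest % 8;
    const int rc = rest / 8;

    std::cout << ra << ' ' << rb << ' ' << rc << '\n';  // prints 4 7 2
}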
The minimal size you can use is 1 byte.
But if you use a group of enum values (writing to a file or storing in a container, etc.), you can pack the group at 3 bits per value.
You don't have to enumerate the values of the enum:
enum Foo
{
a,
b,
c,
d,
e,
f,
};
Foo myfoo = a;
Here Foo is an alias of int, which on your machine takes 4 bytes.
The smallest type is char, which is defined as the smallest addressable data on the target machine. The CHAR_BIT macro yields the number of bits in a char and is defined in limits.h.
[Edit]
Note that generally speaking you shouldn't ask yourself such questions. Always use [unsigned] int if it's sufficient, except when you allocate quite a lot of memory (e.g. int[100*1024] vs char[100*1024], but consider using std::vector instead).
The size of an enumeration is defined to be the same as that of an int. But depending on your compiler, you may have the option of creating a smaller enum. For example, in GCC, you may declare:
enum Foo {
a, b, c, d, e, f
}
__attribute__((__packed__));
Now, sizeof(Foo) == 1.
The best solution is to create your own type implemented using a char. This should have sizeof(MyType) == 1, though this is not guaranteed.
#include <iostream>
using namespace std;
class MyType {
public:
MyType( int a ) : val( a ) {
if ( val < 0 || val > 5 ) {
throw( "bad value" );
}
}
int Value() const {
return val;
}
private:
char val;
};
int main() {
MyType v( 2 );
cout << sizeof(v) << endl;
cout << v.Value() << endl;
}
It is likely that packing oddly sized values into bitfields will incur a sizable performance penalty due to the architecture not supporting bit-level operations (thus requiring several processor instructions per operation). Before you implement such a type, ask yourself if it is really necessary to use as little space as possible, or if you are committing the cardinal sin of programming that is premature optimization. At most, I would encapsulate the value in a class whose backing store can be changed transparently if you really do need to squeeze every last byte for some reason.
You can use an unsigned char. Probably typedef it to a BYTE. It will occupy only one byte.