Composition of a number where the number is very large - C++

I am trying to calculate the number of ways to write a number as a composition of 1s and 2s.
This can be found using the Fibonacci series with F(1) = 1, F(2) = 2 and
F(n) = F(n-1) + F(n-2)
Since F(n) can be very large, I just need F(n) % 1000000007. To speed up the process I am using Fibonacci exponentiation. I have written two codes for the same problem (both are almost similar), but one of them fails for large numbers. I am not able to figure out which one is correct.
CODE 1
http://ideone.com/iCPEyz
CODE 2
http://ideone.com/Un5p2S
Though I have a feeling the first one should be correct, I am not able to figure out what happens in a case like this: say we are multiplying a and b, and the value of a has already exceeded the upper limit of its type; when we then multiply it by b, how can I be sure that a*b is correct? As per my knowledge, if a value goes above its data type's limits, the value wraps around and starts again from the lowest value, as in the example below.
#include <iostream>
#include <limits.h>
using namespace std;

int main()
{
    cout << UINT_MAX << endl;
    cout << UINT_MAX + 2;
}
Output
4294967295
1

"Overflow" (you don't really call it that for unsigned types; they wrap around) of unsigned n-bit types preserves values modulo 2^n only, not modulo an arbitrary modulus (how could it? Try to reproduce the steps with pen and paper). You therefore have to make sure that no operation ever goes over the limits of your type in order to maintain correct results mod 1000000007.

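As an illustration (a minimal sketch, not the OP's CODE 1 or CODE 2): if every product is formed in a 64-bit unsigned type and reduced immediately, no intermediate value can exceed the type's limits, because (1000000007 - 1)^2 comfortably fits in 64 bits.

#include <cstdint>
#include <iostream>

static const uint64_t MOD = 1000000007ULL;

// 2x2 matrix [[a, b], [c, d]]; every entry is kept reduced mod MOD.
struct Mat { uint64_t a, b, c, d; };

static Mat mul(const Mat& x, const Mat& y) {
    // Each product is < MOD^2 < 2^63, so the 64-bit sums below cannot wrap.
    return { (x.a * y.a + x.b * y.c) % MOD,
             (x.a * y.b + x.b * y.d) % MOD,
             (x.c * y.a + x.d * y.c) % MOD,
             (x.c * y.b + x.d * y.d) % MOD };
}

// F(1) = 1, F(2) = 2 as in the question; F(n) is the top-left entry of [[1,1],[1,0]]^n.
static uint64_t fib_mod(uint64_t n) {
    Mat result = {1, 0, 0, 1};          // identity
    Mat base   = {1, 1, 1, 0};
    for (; n > 0; n >>= 1) {            // exponentiation by squaring
        if (n & 1) result = mul(result, base);
        base = mul(base, base);
    }
    return result.a;
}

int main() {
    std::cout << fib_mod(5) << '\n';    // compositions of 5 from 1s and 2s: 8
}
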
Related

An efficient way to calculate extremely large powers of 2

I am solving a problem which requires me to calculate the sum of squares of all possible subsets of a set. I am required to return this sum, modulo 10^9+7
I have understood the logic. I just need to sum the squares and multiply the result by 2^(N-1), where N is the size of the set.
But the issue is that N can be as big as 10^5.
And for this, I am getting an integer overflow.
I looked into fast modular exponentiation, but still, where would I store something as huge as 2^100000?
Can I use the modulo as I calculate the power of 2, to keep the number down? Wouldn't that change the final value?
If anyone can tell me how to get it or what to read into, it would be really helpful.
If you take some value modulo 2^something_big, it just means that you don't have to output bits beyond something_big. For instance x % power(2,10) == x % (1 << 10) == x & ((1 << 10) - 1) == x & 1023.
So in your case, the problem is computing the actual value before the modulo while keeping in mind that you only need 99999 bits. All higher bits are to be dropped (and should not influence the result, if I understand your premise correctly).
Btw. storing 99999 bits is doable. It's just about 13 kB.
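A tiny self-contained check of that identity (the value of k here is arbitrary, chosen just for illustration):

#include <cassert>
#include <cstdint>

int main() {
    uint64_t x = 123456789u;
    unsigned k = 10;
    // Reducing modulo a power of two keeps exactly the low k bits.
    assert(x % (uint64_t{1} << k) == (x & ((uint64_t{1} << k) - 1)));
}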

Optimal way to do unsigned integer multiplication (with overflow) followed by division by the maximum value

I have the following problem - I have 3 unsigned integers: a, b, c. c is the maximum value that the unsigned integer type I am using can take (so a <= c && b <= c). I know that a*b may overflow, and I know that the number (a * b / c) does not overflow (basically I need the number one would get by casting a, b, c to an unsigned integer type with enough bits and performing the multiplication and division). What is the fastest way to find the number (a * b / c) without having to cast to an unsigned integer type with more bits (preferably with the lowest error possible)?
I am currently using float casting, and I was wondering whether there was a method that produces results that are better and faster. I know that a*b/c can be expressed as either a/(c/b) or b/(c/a), with the option of expanding the remainder too, but depending on how many times I expand it, the error can be big, and I am not exactly sure how that compares in terms of speed to float casting and performing the division in float. I have looked at: Avoiding overflow in integer multiplication followed by division though I was hoping that with the given additional information about a,b,c a better/faster method may be used. The post I linked also doesn't mention anything about speed.
Edit: It's also preferable for the method to work fast on the GPU too, but not necessary.
Edit2: If anybody's wondering why I would need this - I am using uint to represent numbers in [0,1], since I need arithmetic operations to not accumulate error.
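For reference, a sketch of the floating-point baseline the question describes (assuming a 32-bit unsigned type for concreteness, and using double for the wider mantissa; this illustrates the current approach, not a proposed answer):

#include <cstdint>

// a, b <= c, where c is the maximum of the 32-bit type; the exact a*b/c fits
// in 32 bits even though the intermediate product a*b may not.
uint32_t mul_div_float(uint32_t a, uint32_t b, uint32_t c) {
    // double has a 53-bit mantissa, so the 64-bit product may lose a few low
    // bits; the quotient is close to, but not always exactly, floor(a*b/c).
    return static_cast<uint32_t>(static_cast<double>(a) * b / c);
}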

Fast, unbiased, integer pseudo random generator with arbitrary bounds

For a monte carlo integration process, I need to pull a lot of random samples from
a histogram that has N buckets, and where N is arbitrary (i.e. not a power of two) but
doesn't change at all during the course of the computation.
By a lot, I mean something on the order of 10^10 (10 billion), so pretty much any
kind of lengthy precomputation is likely worth it in the face of the sheer number of
samples.
I have at my disposal a very fast uniform pseudo random number generator that
typically produces unsigned 64 bits integers (all the ints in the discussion
below are unsigned).
The naive way to pull a sample : histogram[ prng() % histogram.size() ]
The naive way is very slow: the modulo operation is using an integer division (IDIV)
which is terribly expensive and the compiler, not knowing the value of histogram.size()
at compile time, can't be up to its usual magic (i.e. http://www.azillionmonkeys.com/qed/adiv.html)
As a matter of fact, the bulk of my computation time is spent extracting that darn modulo.
The slightly less naive way: I use libdivide (http://libdivide.com/) which is capable
of pulling off a very fast "divide by a constant not known at compile time".
That gives me a very nice win (25% or so), but I have a nagging feeling that I can do
better, here's why:
First intuition: libdivide computes a division. What I need is a modulo, and to get there
I have to do an additional mult and a sub : mod = dividend - divisor*(uint64_t)(dividend/divisor). I suspect there might be a small win there, using libdivide-type
techniques that produce the modulo directly.
Second intuition: I am actually not interested in the modulo itself. What I truly want is
to efficiently produce a uniformly distributed integer value that is guaranteed to be strictly smaller than N.
The modulo is a fairly standard way of getting there, because of two of its properties:
A) mod(prng(), N) is guaranteed to be uniformly distributed if prng() is
B) mod(prng(), N) is guaranteed to belong to [0,N[
But the modulo is/does much more than just satisfy the two constraints above, and in fact it probably does too much work.
All I need is a function, any function that obeys constraints A) and B) and is fast.
So, long intro, but here come my two questions:
Is there something out there, equivalent to libdivide, that computes integer modulos directly?
Is there some function F(X, N) of integers X and N which obeys the following two constraints:
If X is a random variable uniformly distributed, then F(X, N) is also uniformly distributed
F(X, N) is guaranteed to be in [0, N[
(PS: I know that if N is small, I do not need to consume all the 64 bits coming out of the PRNG. As a matter of fact, I already do that. But like I said, even that optimization is a minor win compared to the big fat loss of having to compute a modulo.)
Edit: prng() % N is indeed not exactly uniformly distributed. But for N large enough, I don't think it's much of a problem (or is it?)
Edit 2: prng() % N is indeed potentially very badly distributed. I had never realized how bad it could get. Ouch. I found a good article on this: http://ericlippert.com/2013/12/16/how-much-bias-is-introduced-by-the-remainder-technique
Under the circumstances, the simplest approach may work the best. One extremely simple approach that might work out if your PRNG is fast enough would be to pre-compute one less than the next larger power of 2 than your N to use as a mask. I.e., given some number that looks like 0001xxxxxxxx in binary (where x means we don't care if it's a 1 or a 0) we want a mask like 000111111111.
From there, we generate numbers as follows:
1. Generate a number.
2. AND it with your mask.
3. If the result is >= N, go back to 1.
The exact effectiveness of this will depend on how close N is to a power of 2. Each successive power of 2 is (obviously enough) double its predecessor. So, in the best case N is exactly one less than a power of 2, and our test in step 3 always passes. We've added only a mask and a comparison to the time taken for the PRNG itself.
In the worst case, N is exactly equal to a power of 2. In this case, we expect to throw away roughly half the numbers we generated.
On average, N ends up roughly halfway between powers of 2. That means, on average, we throw away about one out of four inputs. We can nearly ignore the mask and comparison themselves, so our speed loss compared to the "raw" generator is basically the fraction of its outputs that we discard, or 25% on average.
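A minimal sketch of that scheme (prng() stands for the question's fast 64-bit generator; the names here are illustrative):

#include <cstdint>

uint64_t prng();   // the fast 64-bit uniform generator assumed by the question

// mask is precomputed once, as described above: one less than the next power
// of two above N.  Mask-and-reject, keeping only indices in [0, N).
uint64_t sample_index(uint64_t N, uint64_t mask) {
    uint64_t r;
    do {
        r = prng() & mask;   // keep only the low bits
    } while (r >= N);        // out-of-range values are redrawn
    return r;
}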
If you have fast access to the needed instruction, you could 64-bit multiply prng() by N and return the high 64 bits of the 128-bit result. This is sort of like multiplying a uniform real in [0, 1) by N and truncating, with bias on the order of the modulo version (i.e., practically negligible; a 32-bit version of this answer would have small but perhaps noticeable bias).
Another possibility to explore would be use word parallelism on a branchless modulo algorithm operating on single bits, to get random numbers in batches.
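A sketch of that high-word trick (unsigned __int128 is a GCC/Clang extension on 64-bit targets, standing in for a mulhi instruction; prng() is the question's generator):

#include <cstdint>

uint64_t prng();   // the 64-bit uniform generator assumed by the question

uint64_t sample_index(uint64_t N) {
    // Multiply the full 64-bit random word by N and keep the top 64 bits of
    // the 128-bit product; the result lies in [0, N), with bias comparable to
    // the modulo version.
    unsigned __int128 product = (unsigned __int128)prng() * N;
    return (uint64_t)(product >> 64);
}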
Libdivide, or any other complex way to optimize that modulo, is simply overkill. In a situation like yours, the only sensible approach is to
ensure that your table size is a power of two (add padding if you must!)
replace the modulo operation with a bitmask operation. Like this:
size_t tableSize = 1 << 16;
size_t tableMask = tableSize - 1;
...
histogram[prng() & tableMask]
A bitmask operation is a single cycle on any CPU that is worth its money; you can't beat its speed.
--
Note:
I don't know about the quality of your random number generator, but it may not be a good idea to use its lowest bits. Some RNGs produce poor randomness in the low bits and better randomness in the upper bits. If that is the case with your RNG, use a bit shift to get the most significant bits:
size_t bitCount = 16;
...
histogram[prng() >> (64 - bitCount)]
This is just as fast as the bitmask, but it uses different bits.
You could extend your histogram to a "large" power of two by cycling it, filling in the trailing spaces with some dummy value (guaranteed to never occur in the real data). E.g. given a histogram
[10, 5, 6]
extend it to length 16 like so (assuming -1 is an appropriate sentinel):
[10, 5, 6, 10, 5, 6, 10, 5, 6, 10, 5, 6, 10, 5, 6, -1]
Then sampling can be done via a binary mask histogram[prng() & mask] where mask = new_length - 1 (new_length being a power of two, 16 in the example), with a check for the sentinel value to retry, that is,
int value;
do {
    value = histogram[prng() & mask];
} while (value == SENTINEL);
// use `value` here
The extension is made longer than strictly necessary so that retries are unlikely, by ensuring that the vast majority of the elements are valid (e.g. in the example above only 1/16 of lookups will "fail", and this rate can be reduced further by extending the table to, say, 64). You could even use a "branch prediction" hint (e.g. __builtin_expect in GCC) on the check so that the compiler orders the code to be optimal for the case when value != SENTINEL, which is hopefully the common case.
This is very much a memory vs. speed trade-off.
Just a few ideas to complement the other good answers:
What percent of time is spent in the modulo operation, and how do you know what that percent is? I only ask because sometimes people say something is terribly slow when in fact it is less than 10% of the time and they only think it's big because they're using a silly self-time-only profiler. (I have a hard time envisioning a modulo operation taking a lot of time compared to a random number generator.)
When does the number of buckets become known? If it doesn't change too frequently, you can write a program-generator. When the number of buckets changes, automatically print out a new program, compile, link, and use it for your massive execution.
That way, the compiler will know the number of buckets.
Have you considered using a quasi-random number generator, as opposed to a pseudo-random generator? It can give you higher precision of integration in much fewer samples.
Could the number of buckets be reduced without hurting the accuracy of the integration too much?
The non-uniformity dbaupp cautions about can be side-stepped by rejecting and redrawing values no less than M*(2^64/M) (before taking the modulus).
If M can be represented in no more than 32 bits, you can get more than one value less than M by repeated multiplication (see David Eisenstat's answer) or divmod; alternatively, you can use bit operations to single out bit patterns long enough for M, again rejecting values no less than M.
(I'd be surprised at modulus not being dwarfed in time/cycle/energy consumption by random number generation.)
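A sketch of that rejection step, written in the equivalent form that redraws the short partial block at the bottom of the 64-bit range rather than the top (prng() stands for the question's generator; M must be non-zero):

#include <cstdint>

uint64_t prng();   // the question's 64-bit uniform generator

uint64_t draw_below(uint64_t M) {
    // threshold == 2^64 mod M, computed without overflow: 0 - M wraps to 2^64 - M.
    uint64_t threshold = (0 - M) % M;
    uint64_t r;
    do {
        r = prng();
    } while (r < threshold);   // the surviving range has length M * (2^64 / M)
    return r % M;              // exactly uniform on [0, M)
}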
To fill the buckets, you may use std::binomial_distribution to fill each bucket directly, instead of adding one sample at a time.
The following may help:
#include <cstddef>
#include <ctime>
#include <random>

int main()
{
    int nrolls = 60;                      // number of experiments
    const std::size_t N = 6;
    unsigned int bucket[N] = {};
    std::mt19937 generator(std::time(nullptr));

    for (std::size_t i = 0; i != N; ++i) {
        // probability that a remaining roll lands in bucket i, given earlier buckets are filled
        double proba = 1. / static_cast<double>(N - i);
        std::binomial_distribution<int> distribution(nrolls, proba);
        bucket[i] = distribution(generator);
        nrolls -= bucket[i];
    }
}
Live example
Instead of integer division you can use fixed-point math, i.e. integer multiplication and a bit shift. Say your prng() returns values in the range 0-65535 and you want this quantized to the range 0-99; then you do (prng()*100)>>16. Just make sure that the multiplication doesn't overflow your integer type, so you may have to shift the result of prng() right first. Note that this mapping is better than modulo since it retains the uniform distribution.
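A sketch of that 16-bit example, with the operand widened before the multiply so the product cannot overflow (prng16 is a hypothetical generator standing in for the 0-65535 source mentioned above):

#include <cstdint>

uint16_t prng16();   // hypothetical generator returning values in 0..65535

// Fixed-point scaling from [0, 65535] to [0, 99]: widen, multiply, then drop
// the 16 fractional bits instead of taking a modulo.
uint32_t quantize_to_100(uint16_t x) {
    return (static_cast<uint32_t>(x) * 100u) >> 16;
}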
Thanks everyone for your suggestions.
First, I am now thoroughly convinced that modulo is really evil.
It is both very slow and yields incorrect results in most cases.
After implementing and testing quite a few of the suggestions, what seems to be the best speed/quality compromise is the solution proposed by @Gene:
pre-compute the normalizer as:
auto normalizer = histogram.size() / (1.0 + urng.max());
draw samples with:
return histogram[(uint32_t)floor(urng() * normalizer)];
It is the fastest of all methods I've tried so far, and as far as I can tell,
it yields a distribution that's much better, even if it may not be as perfect
as the rejection method.
Edit: I implemented David Eisenstat's method, which is more or less the same as Jarkkol's suggestion: index = (rng() * N) >> 32. It works as well as the floating-point normalization and it is a little faster (9% faster, in fact). So it is my preferred way now.

Sieve of Eratosthenes using precalculated primes

I have all the prime numbers that can be stored in a 32-bit unsigned int, and I want to use them to generate some 64-bit prime numbers. Using trial division is too slow, even with optimizations in logic and compilation.
I'm trying to modify the Sieve of Eratosthenes to work with the predefined list, as follows:
1. in array A: the primes from 2 to 4294967291
2. in array B: the numbers from 2^32 to X, incrementing by 1
3. find C, which is the first multiple of the current prime
4. from C, mark and jump by the current prime till X
5. go to 1
The problem is step 3, which uses a modulus to find the prime multiple; such an operation is the reason I didn't use trial division.
Is there any better way to implement step 3, or the whole algorithm?
Thank you.
Increment by 2, not 1. That's the minimal optimization you should always use - working with odds only. No need to bother with the evens.
In C++, use vector<bool> for the sieve array. It gets automatically bit-packed.
Pre-calculate your core primes with a segmented sieve. Then continue to work in segments big enough to fit in your cache, without adding new primes to the core list. For each prime p maintain an additional long long int value: its current multiple (starting from the prime's square, of course). The step is 2p in value, or p as an offset in the odds-packed sieve array, where the i-th entry stands for the number o + 2i, o being the least odd not below the range start. No need to sort by the multiples' values; the upper bound of the core primes in use rises monotonically.
sqrt(0xFFFFFFFFFF) = 1048576. PrimePi(1048576) = 82025 primes is all you need in your core primes list. That's peanuts.
Integer arithmetic on long long ints should work just fine to find the modulo, and thus the smallest multiple in range, when you first start (or resume your work).
See also a related answer with pseudocode, and another with C code.
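A minimal sketch of one such segment, assuming the precomputed 32-bit primes are available in ascending order (the names are illustrative, not from the answer, and the odds-only packing is omitted for brevity):

#include <cstdint>
#include <vector>

// Sieve the range [lo, lo + len) using the precomputed core primes.
std::vector<bool> sieveSegment(uint64_t lo, uint64_t len,
                               const std::vector<uint32_t>& corePrimes)
{
    std::vector<bool> isPrime(len, true);
    for (uint32_t p : corePrimes) {
        uint64_t p2 = (uint64_t)p * p;
        if (p2 >= lo + len) break;            // only primes up to sqrt of the range end matter
        // Step 3: the first multiple of p that is >= lo (one division per prime per segment).
        uint64_t first = ((lo + p - 1) / p) * p;
        if (first < p2) first = p2;           // start no earlier than p*p
        for (uint64_t m = first; m < lo + len; m += p)
            isPrime[m - lo] = false;
    }
    return isPrime;
}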

Using bitwise & instead of modulus operator to randomly sample integers from a range

I need to randomly sample from a uniform distribution of integers over the interval [LB,UB] in C++. To do so, I start with a "good" RN generator (from Numerical Recipes 3rd ed.) that uniformly randomly samples 64-bit integers; let's call it int64().
Using the mod operator, I can sample from the integers in [LB,UB] by:
LB+int64()%(UB-LB+1);
The only issue with using the mod operator is the slowness of the integer division. So, I then tried the method suggested here, which is:
LB + (int64()&(UB-LB))
The bitwise & method is about 3 times as fast. This is huge for me, because one of my simulations in C++ needs to randomly sample about 20 million integers.
But there's 1 big problem. When I analyze the integers sampled using the bitwise & method, they don't appear uniformly distributed over the interval [LB,UB]. The integers are indeed sampled from [LB,UB], but only from the even integers in that range. For example, here is a histogram of 5000 integers sampled from [20,50] using the bitwise & method:
By comparison, here is what a similar histogram looks like when using the mod operator method, which of course works fine:
What's wrong with my bitwise & method? Is there any way to modify it so that both even and odd numbers are sampled over the defined interval?
The bitwise & operator looks at each pair of corresponding bits of its operands, performs an and using only those two bits, and puts that result in the corresponding bit of the result.
So, if the last bit of UB-LB is 0, then the last bit of the result is 0. That is to say, if UB-LB is even then every output will be even.
The & is inappropriate to the purpose, unless UB-LB+1 is a power of 2. If you want to find a modulus, then there's no general shortcut: the compiler will already implement % the fastest way it knows.
Note that I said no general shortcut. For particular values of UB-LB, known at compile time, there can be faster ways. And if you can somehow arrange for UB and LB to have values that the compiler can compute at compile time then it will use them when you write %.
By the way, using % does not in fact produce uniformly-distributed integers over the range, unless the size of the range is a power of 2. Otherwise there must be a slight bias in favour of certain values, because the range of your int64() function cannot be divided evenly across the desired range. It may be that the bias is too small to affect your simulation in particular, but bad random number generators have broken random simulations in the past, and will do so again.
If you want a uniform random number distribution over an arbitrary range, then use std::uniform_int_distribution from C++11, or the class of the same name in Boost.
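For instance, a small sketch of the standard-library route (the bounds here are just the question's example values):

#include <cstdint>
#include <iostream>
#include <random>

int main() {
    const int64_t LB = 20, UB = 50;
    std::mt19937_64 engine{std::random_device{}()};
    std::uniform_int_distribution<int64_t> dist(LB, UB);  // unbiased over [LB, UB]
    std::cout << dist(engine) << '\n';
}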
This works well if the range difference (UB-LB) is 2^n - 1, but won't work at all well if it is, for example, 2^n.
The two are equivalent only when the size of the interval is a power of two. In general y%x and y&(x-1) are not the same.
For example, x%5 produces numbers from 0 to 4 (or to -4, for negative x), but x&4 produces either 0 or 4, never 1, 2, or 3, because of how bitwise operators work...
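A quick demonstration of that difference (a throwaway sketch):

#include <iostream>

int main() {
    // With a power-of-two divisor the two forms agree (y % 4 vs y & 3);
    // with 5 they do not (y % 5 vs y & 4).
    for (int y = 0; y < 8; ++y)
        std::cout << y % 5 << ' ' << (y & 4) << "    "
                  << y % 4 << ' ' << (y & 3) << '\n';
}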