Mersenne Twister seed has no effect - C++

So I've got a custom randomizer class that uses a Mersenne Twister (the code I use is adapted from this site). All seemed to be working well until I started testing different seeds. (I normally use 42 as a seed so that each time I run my program the results are the same, letting me see how code changes influence things.)
It turns out that, no matter what seed I choose, the code produces the exact same series of numbers each time. Clearly I'm doing something wrong, but I don't know what. Here is my seed function:
void Randomizer::Seed(unsigned long int Seed)
{
    int ii;
    x[0] = Seed & 0xffffffffUL;
    for (ii = 0; ii < N; ii++)
    {
        x[ii] = (1812433253UL * (x[ii - 1] ^ (x[ii - 1] >> 30)) + ii);
        x[ii] &= 0xffffffffUL;
    }
}
And this is my Rand() function:
unsigned long int Randomizer::Rand()
{
    unsigned long int Result;
    unsigned long int a;
    int ii;
    // Refill x if exhausted
    if (Next == N)
    {
        Next = 0;
        for (ii = 0; ii < N - 1; ii++)
        {
            Result = (x[ii] & U) | (x[ii + 1] & L);
            a = (Result & 0x1UL) ? A : 0x0UL;
            x[ii] = x[(ii + M) % N] ^ (Result >> 1) ^ a;
        }
        Result = (x[N - 1] & U) | (x[0] & L);
        a = (Result & 0x1UL) ? A : 0x0UL;
        x[N - 1] = x[M - 1] ^ (Result >> 1) ^ a;
    }
    Result = x[Next++];
    // Improves distribution (tempering)
    Result ^= (Result >> 11);
    Result ^= (Result << 7) & 0x9d2c5680UL;
    Result ^= (Result << 15) & 0xefc60000UL;
    Result ^= (Result >> 18);
    return Result;
}
The various values are:
#define A 0x9908b0dfUL
#define U 0x80000000UL
#define L 0x7fffffffUL
int Randomizer::N = 624;
int Randomizer::M = 397;
int Randomizer::Next = 0;
unsigned long Randomizer::x[624];
Can anyone help me figure out why different seeds don't result in different sequences of numbers?

Your Seed() function assigns to x[0], then starts its loop at ii = 0, which immediately overwrites x[0] with a value computed from x[-1] -- an out-of-bounds read that is undefined behavior. Start your loop at 1, and you'll probably be all set.
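That is, the corrected Seed() would be:
void Randomizer::Seed(unsigned long int Seed)
{
    int ii;
    x[0] = Seed & 0xffffffffUL;
    for (ii = 1; ii < N; ii++) // start at 1 so x[0] keeps the seed
    {
        x[ii] = (1812433253UL * (x[ii - 1] ^ (x[ii - 1] >> 30)) + ii);
        x[ii] &= 0xffffffffUL;
    }
}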
Writing your own randomizer is dangerous. Why? It's hard to get right (see above), it's hard to know if you've done it right, and if it's wrong, things that rely on correctly distributed random numbers will not work quite right. Hopefully that thing is not cryptography or statistical modeling where the tails matter... Think about using the standard <random> header (std::mt19937 is the same Mersenne Twister, already done right), or if you're not on C++11 yet, boost::random.
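For example, a minimal sketch of the standard facilities (C++11 and later):
#include <random>
#include <iostream>

int main()
{
    std::mt19937 gen(42);                            // Mersenne Twister, properly seeded
    std::uniform_int_distribution<int> dist(1, 100); // a well-defined distribution
    for (int i = 0; i < 5; i++)
        std::cout << dist(gen) << '\n';              // same seed gives the same sequence; a different seed gives a different one
    return 0;
}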

How to calculate EXTREMELY big binomial coefficients modulo a prime number?

This problem's answer turns out to involve calculating large binomial coefficients modulo a prime number using Lucas' theorem. Here's the solution to that problem using this technique: here.
Now my questions are:
It seems my code breaks as the inputs grow, due to variable overflow. Are there ways to handle this?
Are there any ways to do this without using this theorem?
EDIT: note that as this is an OI or ACM problem, external libraries other than the standard ones are not permitted.
Code below:
#include <iostream>
#include <string.h>
#include <stdio.h>
using namespace std;
#define N 100010

long long mod_pow(int a, int n, int p)
{
    long long ret = 1;
    long long A = a;
    while (n)
    {
        if (n & 1)
            ret = (ret * A) % p;
        A = (A * A) % p;
        n >>= 1;
    }
    return ret;
}

long long factorial[N];

void init(long long p)
{
    factorial[0] = 1;
    for (int i = 1; i <= p; i++)
        factorial[i] = factorial[i - 1] * i % p;
    //for (int i = 0; i < p; i++)
    //    ni[i] = mod_pow(factorial[i], p - 2, p);
}

long long Lucas(long long a, long long k, long long p)
{
    long long re = 1;
    while (a && k)
    {
        long long aa = a % p;
        long long bb = k % p;
        if (aa < bb) return 0;
        re = re * factorial[aa] * mod_pow(factorial[bb] * factorial[aa - bb] % p, p - 2, p) % p;
        a /= p;
        k /= p;
    }
    return re;
}

int main()
{
    int t;
    cin >> t;
    while (t--)
    {
        long long n, m, p;
        cin >> n >> m >> p;
        init(p);
        cout << Lucas(n + m, m, p) << "\n";
    }
    return 0;
}
This solution assumes that p² fits into an unsigned long long. Since an unsigned long long has at least 64 bits as per the standard, this works at least for p up to 4 billion, much more than the question specifies.
typedef unsigned long long num;
/* x such that a*x = 1 mod p */
/* the original left this as an exercise; one possible implementation, */
/* using the extended Euclidean algorithm, is sketched in here */
num modinv(num a, num p)
{
    long long t = 0, newt = 1;
    long long r = (long long)p, newr = (long long)a;
    while (newr != 0) {
        long long q = r / newr;
        long long tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return (num)(t < 0 ? t + (long long)p : t);
}
/* n choose m mod p */
/* computed with the theorem of Lucas */
num modbinom(num n, num m, num p)
{
    num i, result, divisor, n_, m_;
    if (m == 0)
        return 1;
    /* check for the likely case that the result is zero */
    if (n < m)
        return 0;
    for (n_ = n, m_ = m; m_ > 0; n_ /= p, m_ /= p)
        if (n_ % p < m_ % p)
            return 0;
    for (result = 1; n >= p || m >= p; n /= p, m /= p) {
        result *= modbinom(n % p, m % p, p);
        result %= p;
    }
    /* avoid unnecessary computations */
    if (m > n - m)
        m = n - m;
    divisor = 1;
    for (i = 0; i < m; i++) {
        result *= n - i;
        result %= p;
        divisor *= i + 1;
        divisor %= p;
    }
    result *= modinv(divisor, p);
    result %= p;
    return result;
}
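As a quick sanity check (hypothetical usage of the functions above, assuming the modinv sketch): C(10, 3) = 120 and 120 mod 7 = 1, so
#include <assert.h>

int main()
{
    assert(modbinom(10, 3, 7) == 1); /* C(10,3) = 120 = 17*7 + 1 */
    return 0;
}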
An infinite precision integer seems like the way to go.
If you are in C++, the PicklingTools library has an "infinite precision" integer (similar to Python's long type). Someone else suggested Python; that's a reasonable answer if you know Python. If you want to do it in C++, you can use the int_n type:
#include "ocval.h"
int_n n="012345678910227836478627843";
n = n + 1; // Can combine with other plain ints as well
Take a look at the documentation at:
http://www.picklingtools.com/html/usersguide.html#c-int-n-and-the-python-arbitrary-size-ints-long
and
http://www.picklingtools.com/html/faq.html#c-and-otab-tup-int-un-int-n-new-in-picklingtools-1-2-0
The download for the C++ PicklingTools is here.
You want a bignum (a.k.a. arbitrary precision arithmetic) library.
First, don't write your own bignum (or bigint) library, because efficient algorithms (more efficient than the naive ones you learned at school) are difficult to design and implement.
Then, I would recommend GMPlib. It is free software, well documented, often used, quite efficient, and well designed (with perhaps some imperfections, in particular the inability to plug in your own memory allocator as a replacement for the system malloc; but you probably don't care unless you want to catch the rare out-of-memory condition...). It has an easy C++ interface. It is packaged in most Linux distributions.
If it is a homework assignment, perhaps your teacher is expecting you to think more on the math, and find, with some proof, a way of solving the problem without any bignums.
Let's suppose that we need to compute a value of (a / b) mod p where p is a prime number. Since p is prime, every number b not divisible by p has a unique inverse mod p, so (a / b) mod p = ((a mod p) * (b mod p)^-1) mod p. The inverse can be computed with the extended Euclidean algorithm or, because p is prime, via Fermat's little theorem (as the code below does).
To get (n choose k) we need to compute n! mod p, (k!)^-1, and ((n - k)!)^-1. The total time complexity is O(n).
UPDATE: Here is the code in C++. I didn't test it extensively, though.
#include <cstdint>
#include <cassert>
#include <vector>

int64_t fastPow(int64_t a, int64_t exp, int64_t mod)
{
    int64_t res = 1;
    while (exp)
    {
        if (exp % 2 == 1)
        {
            res *= a;
            res %= mod;
        }
        a *= a;
        a %= mod;
        exp >>= 1;
    }
    return res;
}

// This inverse works only for primes p; it uses Fermat's little theorem
int64_t inverse(int64_t a, int64_t p)
{
    assert(p >= 2);
    return fastPow(a, p - 2, p);
}

int64_t binomial(int64_t n, int64_t k, int64_t p)
{
    std::vector<int64_t> fact(n + 1);
    fact[0] = 1;
    for (int64_t i = 1; i <= n; ++i)
        fact[i] = (fact[i - 1] * i) % p;
    return ((((fact[n] * inverse(fact[k], p)) % p) * inverse(fact[n - k], p)) % p);
}
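A minimal usage sketch (my addition, not part of the original answer):
#include <iostream>

int main()
{
    // C(10, 3) = 120; the large prime modulus leaves this small value unchanged
    std::cout << binomial(10, 3, 1000000007) << "\n"; // prints 120
    return 0;
}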

Generate all combinations in bit version

I'd like to generate all possible combinations (without repetition) in bit representation. I can't use any library like boost or stl::next_combination - it has to be my own code (computation time is very important).
Here's my code (modified from another StackOverflow user's):
int combination = (1 << k) - 1;
int new_combination = 0;
int change = 0;
while (true)
{
    // return next combination
    cout << combination << endl;
    // find first index to update
    int indexToUpdate = k;
    while (indexToUpdate > 0 && GetBitPositionByNr(combination, indexToUpdate) >= n - k + indexToUpdate)
        indexToUpdate--;
    if (indexToUpdate == 1) change = 1; // move all bits to the left by one position
    if (indexToUpdate <= 0) break; // done
    // update combination indices
    new_combination = 0;
    for (int combIndex = GetBitPositionByNr(combination, indexToUpdate) - 1; indexToUpdate <= k; indexToUpdate++, combIndex++)
    {
        if (change)
        {
            new_combination |= (1 << (combIndex + 1));
        }
        else
        {
            combination = combination & (~(1 << combIndex));
            combination |= (1 << (combIndex + 1));
        }
    }
    if (change) combination = new_combination;
    change = 0;
}
where n is the total number of elements and k is the number of elements per combination.
GetBitPositionByNr returns the position of the k-th set bit:
GetBitPositionByNr(13, 2) = 3, because 13 is 1101 and its second set bit (counting from the least significant end) is at position 3.
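For reference, since the helper itself isn't shown, a reconstruction consistent with that description would be:
int GetBitPositionByNr(int value, int k)
{
    // return the 1-indexed position of the k-th set bit, counting from
    // the least significant end, or -1 if fewer than k bits are set
    int pos = 0;
    while (value)
    {
        pos++;
        if ((value & 1) && --k == 0)
            return pos;
        value >>= 1;
    }
    return -1;
}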
It gives me correct output for n=4, k=2 which is:
0011 (3 - decimal representation - printed value)
0101 (5)
1001 (9)
0110 (6)
1010 (10)
1100 (12)
Also it gives me the correct output for k=1 and k=4, but the wrong output for k=3, which is:
0111 (7)
1011 (11)
1001 (9) - wrong, should be 1101 (13)
1110 (14)
I guess the problem is in the inner while condition (the second one), but I don't know how to fix it.
Maybe some of you know a better (faster) algorithm to achieve what I want? It can't use additional memory (arrays).
Here is code to run on ideone: IDEONE
When in doubt, use brute force: generate all n-bit values, then filter out the patterns with the wrong number of set bits:
#include <vector>

unsigned bit_count(unsigned n)
{
    unsigned i = 0;
    while (n) {
        i += n & 1;
        n >>= 1;
    }
    return i;
}

int main()
{
    std::vector<unsigned> combs;
    const unsigned N = 4;
    const unsigned K = 3;
    for (unsigned i = 0; i < (1u << N); i++) {
        if (bit_count(i) == K) {
            combs.push_back(i);
        }
    }
    // and print 'combs' here
}
Edit: Someone else already pointed out a solution without filtering and brute force, but I'm still going to give you a few hints about this algorithm:
most compilers offer an intrinsic population count function. I know of GCC and Clang, which have __builtin_popcount(). Using this intrinsic function, I was able to double the speed of the code.
Since you seem to be working on GPUs, you can parallelize the code. I have done it using C++11's standard threading facilities, and I managed to count all 32-bit patterns for the arbitrarily chosen popcounts 1, 16 and 19 in 7.1 seconds on my 8-core Intel machine.
Here's the final code I've written:
#include <vector>
#include <cstdio>
#include <thread>
#include <utility>
#include <future>

unsigned popcount_range(unsigned popcount, unsigned long min, unsigned long max)
{
    unsigned n = 0;
    for (unsigned long i = min; i < max; i++) {
        n += __builtin_popcount(i) == popcount;
    }
    return n;
}

int main()
{
    const unsigned N = 32;
    const unsigned K = 16;
    const unsigned N_cores = 8;
    const unsigned long Max = 1ul << N;
    const unsigned long N_per_core = Max / N_cores;
    std::vector<std::future<unsigned>> v;
    for (unsigned core = 0; core < N_cores; core++) {
        unsigned long core_min = N_per_core * core;
        unsigned long core_max = core_min + N_per_core;
        auto fut = std::async(
            std::launch::async,
            popcount_range,
            K,
            core_min,
            core_max
        );
        v.push_back(std::move(fut));
    }
    unsigned final_count = 0;
    for (auto &fut : v) {
        final_count += fut.get();
    }
    printf("%u\n", final_count);
    return 0;
}
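For completeness: there is also a well-known constant-memory way to step directly to the next k-bit combination, usually called Gosper's hack. It is not from either answer above, but it matches the asker's no-extra-memory constraint (the sketch assumes k >= 1):
#include <cstdio>

int main()
{
    const unsigned n = 4, k = 3;
    unsigned c = (1u << k) - 1;           // smallest k-bit combination (0111)
    while (c < (1u << n)) {
        printf("%u\n", c);                // visit the current combination
        unsigned u = c & -c;              // lowest set bit of c
        unsigned v = c + u;               // flip the lowest run of ones
        c = v + (((v ^ c) / u) >> 2);     // move the leftover ones back to the bottom
    }
    return 0;
}
For n = 4, k = 3 this prints 7, 11, 13, 14, matching the expected sequence.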

To find combination value of large numbers

I want to find (n choose r) for large integers, and I also have to reduce that number modulo 10000007.
long long int choose(int a, int b)
{
    if (b > a)
        return (-1);
    if (b == 0 || a == 1 || b == a)
        return (1);
    else
    {
        long long int r = ((choose(a - 1, b)) % 10000007 + (choose(a - 1, b - 1)) % 10000007) % 10000007;
        return r;
    }
}
I am using this piece of code, but I am getting TLE (Time Limit Exceeded). If there is some other method to do this, please tell me.
I don't have the reputation to comment yet, but I wanted to point out that the answer by rock321987 works pretty well:
It is fast and correct up to and including C(62, 31)
but cannot handle all inputs that have an output that fits in a uint64_t. As proof, try:
C(67, 33) = 14,226,520,737,620,288,370 (verify correctness and size)
Unfortunately, the other implementation spits out 8,829,174,638,479,413, which is incorrect. There are other ways to calculate nCr which won't break like this; however, the real problem here is that no attempt is made to take advantage of the modulus.
Note that the question's modulus p = 10000007 is presumably a typo for the prime 1000000007 (10000007 itself is composite: 941 × 10627). Primality is what lets us leverage the fact that every integer not divisible by p has a unique inverse mod p. Furthermore, we can find that inverse quite quickly. Another question has an answer on how to do that here, which I've replicated below.
This is handy since:
x/y mod p == x * (y inverse) mod p; and
x*y mod p == ((x mod p) * (y mod p)) mod p
Modifying the other code a bit and generalizing the problem, we have the following:
#include <iostream>
#include <cstdint>
#include <assert.h>

// p MUST be prime and less than 2^63
uint64_t inverseModp(uint64_t a, uint64_t p) {
    assert(p < (1ull << 63));
    assert(a < p);
    assert(a != 0);
    uint64_t ex = p - 2, result = 1;
    while (ex > 0) {
        if (ex % 2 == 1) {
            result = (result * a) % p;
        }
        a = (a * a) % p;
        ex /= 2;
    }
    return result;
}

// p MUST be prime
uint32_t nCrModp(uint32_t n, uint32_t r, uint32_t p)
{
    assert(r <= n);
    if (r > n - r) r = n - r;
    if (r == 0) return 1;
    if (n / p - (n - r) / p > r / p) return 0;
    uint64_t result = 1; // intermediary results may overflow 32 bits
    for (uint32_t i = n, x = 1; i > r; --i, ++x) {
        if (i % p != 0) {
            result *= i % p;
            result %= p;
        }
        if (x % p != 0) {
            result *= inverseModp(x % p, p);
            result %= p;
        }
    }
    return result;
}

int main() {
    uint32_t smallPrime = 17;
    uint32_t medNum = 3001;
    uint32_t halfMedNum = medNum >> 1;
    std::cout << nCrModp(medNum, halfMedNum, smallPrime) << std::endl;
    uint32_t bigPrime = 4294967291ul; // 2^32 - 5 is the largest prime < 2^32
    uint32_t bigNum = 1ul << 24;
    uint32_t halfBigNum = bigNum >> 1;
    std::cout << nCrModp(bigNum, halfBigNum, bigPrime) << std::endl;
    return 0;
}
This should produce results for any set of 32-bit inputs if you are willing to wait. To prove a point, I've included the calculation for a 24-bit n and the maximum 32-bit prime. My modest PC took ~13 seconds to calculate this. Check the answer against Wolfram Alpha, but beware that it may exceed the 'standard computation time' there.
There is still room for improvement if p is much smaller than (n - r), where r <= n - r. For example, we could precalculate all the inverses mod p instead of computing them on demand several times over; a sketch follows.
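Here is what that precalculation might look like (a sketch; the function name is mine, and it assumes p is prime and small enough both to tabulate and to keep the product below 2^64):
#include <cstdint>
#include <vector>

// inv[i] = i^-1 mod p for i = 1..p-1, built in O(p) total time using the
// classic recurrence inv[i] = -(p / i) * inv[p % i] mod p (valid for prime p)
std::vector<uint64_t> allInversesModp(uint64_t p)
{
    std::vector<uint64_t> inv(p);
    inv[1] = 1;
    for (uint64_t i = 2; i < p; i++)
        inv[i] = (p - (p / i) * inv[p % i] % p) % p;
    return inv;
}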
nCr = n! / (r! * (n-r)!), where ! denotes factorial.
Now choose whichever of r and n - r is smaller:
#include <cstdio>
#include <cmath>
#define MOD 10000007

int main()
{
    int n, r, i, x = 1;
    long long int res = 1;
    scanf("%d%d", &n, &r);
    int mini = fmin(r, (n - r)); // minimum of r, n-r
    for (i = n; i > mini; i--) {
        res = (res * i) / x;
        x++;
    }
    printf("%lld\n", res % MOD);
    return 0;
}
It will work for most cases required by programming competitions, as long as the values of n and r are not too high.
Time complexity :- O(min(r, n - r))
Limitation :- for languages like C/C++ there will be overflow if
n > 60 (approximately)
as no built-in integer type can store the final value.
The expansion of nCr can always be reduced to a product of integers, by canceling out terms in the denominator. This approach is applied in the function given below.
This function has a time complexity of O(n^2 * log(n)) and will calculate nCr % m for n <= 10000 in under 1 sec.
#include <numeric>
#include <algorithm>
#include <vector>

const int M = 1e7 + 7;

int ncr(int n, int r)
{
    r = std::min(r, n - r);
    std::vector<int> A(r), B(r);
    std::iota(A.begin(), A.end(), n - r + 1); // A holds n-r+1 .. n (the numerator)
    std::iota(B.begin(), B.end(), 1);         // B holds 1 .. r (the denominator)
    for (int i = 0; i < r; i++)
        for (int j = 0; j < r; j++)
        {
            if (B[i] == 1)
                break;
            int g = std::gcd(B[i], A[j]); // cancel common factors (C++17)
            A[j] /= g;
            B[i] /= g;
        }
    long long ans = 1;
    for (int i = 0; i < r; i++)
        ans = (ans * A[i]) % M;
    return ans;
}
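For example, ncr(10, 3) returns 120: A starts as {8, 9, 10} and B as {1, 2, 3}, the gcd pass cancels the denominator completely, and the remaining product 4 * 3 * 10 = 120 is below M, so the modulus leaves it unchanged.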

Print long long via fast i/o

The following code is used to print an int. How can I modify it to print a long long int? Please explain.
For pc, read putchar_unlocked
inline void writeInt(int n)
{
    int N = n, rev, count = 0;
    rev = N;
    if (N == 0) { pc('0'); pc('\n'); return; }
    while ((rev % 10) == 0) { count++; rev /= 10; } // count trailing zeros, which are lost when reversing
    rev = 0;
    while (N != 0) { rev = (rev << 3) + (rev << 1) + N % 10; N /= 10; } // rev = digits of n, reversed (rev*10 == rev*8 + rev*2)
    while (rev != 0) { pc(rev % 10 + '0'); rev /= 10; } // print the reversed digits back in order
    while (count--) pc('0'); // re-emit the trailing zeros
    pc('\n');
    return;
}
There's nothing specific about int in the code. Just replace both occurrences of "int" by "long long int", and you're done.
(I find the "optimization" of *10 via shift and add quite ridiculous with all the divisions that remain. Any decent C compiler will do that (and much more) automatically. And don't forget to profile this "fast" version against the stdlib routine, to be sure it really was worth the effort).
This code is a bit more complex than it needs to be:
#include <cstdio>

inline void writeLongLong(long long n)
{
    char buffer[sizeof(n) * 8 * 3 / 10 + 3]; // 3 digits per 10 bits + two extra and space for the terminating zero
    int index = sizeof(buffer) - 1;
    buffer[index--] = 0;
    do {
        buffer[index--] = (n % 10) + '0';
        n /= 10;
    } while (n);
    puts(&buffer[index + 1]);
}
This does the same job with about half as many divide/modulo operations, and at least I can follow it better. Note that stdio/stdlib functions are probably better than this, and this function does not cope with negative numbers (neither does the one posted above); a sketch that does is below.
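A variant that also handles negatives (my addition, using the same buffer approach; the magnitude is taken in unsigned arithmetic so even LLONG_MIN is safe):
#include <cstdio>

inline void writeLongLongSigned(long long n)
{
    char buffer[sizeof(n) * 8 * 3 / 10 + 4]; // one extra slot for a possible '-'
    int index = sizeof(buffer) - 1;
    buffer[index--] = 0;
    unsigned long long u = (n < 0) ? 0ULL - (unsigned long long)n : (unsigned long long)n;
    do {
        buffer[index--] = (char)(u % 10 + '0');
        u /= 10;
    } while (u);
    if (n < 0)
        buffer[index--] = '-';
    puts(&buffer[index + 1]);
}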

How to reduce time complexity for large data set inputs in this program?

I have written this code to calculate the number of set bits in a range of numbers. My program compiles fine and gives the proper output, but it takes too much time for large inputs: "Time Limit Exceeded".
#define forn(i, n) for(long int i = 0; i < (long int)(n); i++)
#define ford(i, n) for(long int i = (long int)(n) - 1; i >= 0; i--)
#define fore(i, a, n) for(long int i = (int)(a); i < (long int)(n); i++)

long int solve(long int i) {
    // 32-bit parallel popcount
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}

int main() {
    freopen("C:/Projects/CodeChef/SetBits/input.txt", "rt", stdin);
    freopen("C:/Projects/CodeChef/SetBits/output.txt", "wt", stdout);
    int tt;
    long long int num1;
    long long int num2;
    scanf("%d", &tt);
    forn(ii, tt) {
        unsigned long int bits = 0;
        unsigned long long int total_bits = 0;
        scanf("%lld", &num1);
        scanf("%lld", &num2);
        fore(jj, num1, num2 + 1) {
            bits = solve(jj); // popcount of jj
            total_bits += bits;
        }
        printf("%lld\n", total_bits);
    }
    return 0;
}
Example test case:-
Sample Input:
3
-2 0
-3 4
-1 4
Sample Output:
63
99
37
For the first case, -2 contains 31 1's followed by a 0, -1 contains 32 1's and 0 contains 0 1's. Thus the total is 63.
For the second case, the answer is 31 + 31 + 32 + 0 + 1 + 1 + 2 + 1 = 99
Test case having large values:-
10
-1548535525 662630637
-1677484556 -399596060
-2111785037 1953091095
643110128 1917824721
-1807916951 491608908
-1536297104 1976838237
-1891897587 -736733635
-2088577104 353890389
-2081420990 819160807
-1585188028 2053582020
Any suggestions on how to optimize the code so that it takes less time? All helpful suggestions and answers will be appreciated with a vote up. :)
I don't really have a clue what you are doing, but I do know you can clean up your code considerably, and you can inline your function.
I have also taken the liberty of 'rephrasing' your code: you are using C++ like C, those defines are just grim, and mapping the files onto stdio is even worse. I haven't tested or compiled the code, but it is all there.
#include <fstream>

inline long int solve(long int i) {
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}

int main() {
    long first, last;
    unsigned count;
    std::ifstream inf("C:/Projects/CodeChef/SetBits/input.txt");
    std::ofstream off("C:/Projects/CodeChef/SetBits/output.txt");
    inf >> count;
    for (unsigned i = 0u; i != count; ++i) {
        inf >> first >> last;
        long total = 0;
        ++last;
        for (long t = first; t != last; ++t) {
            total += solve(t);
        }
        off << total << '\n';
    }
    return 0;
}
A few ideas as to how you could speed this up:
you could build a std::map of the computed values and, if a value has been processed before, reuse the stored result rather than recomputing it.
do the same but store ranges rather than single values, though that will be tricky.
you could check whether a value exists in the map and walk through the map while preprocessed values remain, only computing the gaps.
check if there is a trivial relationship between one number and the next; maybe you could work out the first value and then just increment it.
maybe there is an O(1) formula for such a sequence (see the sketch below).
look at Intel TBB and use something like tbb::parallel_for to distribute the work over the cores; because you are dealing with such a small amount of memory, you should get a really good return with a large chunk size.
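On the O(1) idea: the total number of set bits in 0..n has a closed form per bit position, so each query can be answered without iterating over the range at all. A sketch (the function name is mine; it follows the question's convention of treating values as unsigned 32-bit, so a range [a, b] with a > 0 becomes bitsUpTo(b) - bitsUpTo(a - 1) on the reinterpreted bounds):
unsigned long long bitsUpTo(unsigned long long n) // total set bits in 0..n, for n < 2^32
{
    unsigned long long total = 0;
    for (int b = 0; b < 32; b++) {
        unsigned long long period = 1ULL << (b + 1); // bit b cycles with this period
        unsigned long long ones = 1ULL << b;         // it is set for this many values per period
        total += ((n + 1) / period) * ones;          // complete periods
        unsigned long long rem = (n + 1) % period;   // plus the partial period at the end
        if (rem > ones)
            total += rem - ones;
    }
    return total;
}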