Get 2 unique numbers in a row? - c++

I want to make sure 'grid' can't get assigned the same value twice in a row, but I'm not sure how. Here's my code:
grid[rnd(2,x-2) * y + rnd(2,y-2)].height = rnd(25,40);
int rnd(int min, int max) {
    return min + rand() % (max - min + 1);
}
I also seeded rand() with srand(time(NULL));
I wish I could provide more details or what I tried, but I couldn't quite find anything related to this topic.
EDIT: I could of course do re-randoming, but I feel like it's bad practice :/

If you really need to avoid consecutive repeats,¹ all you need to do is pass the previous value into your function, and then generate random numbers in a loop until one is distinct.
Something like this:
int rnd(int min, int max, int prev) {
    int y;
    do {
        y = min + rand() % (max - min + 1);
    } while (y == prev);
    return y;
}
Note that you could alternatively maintain prev as a static variable inside the function. But this would render it incapable of generating multiple independent streams simultaneously.
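For illustration, a minimal sketch of that static-variable variant (the initial sentinel of -1 is an assumption and must lie outside [min, max]):
#include <cstdlib>  // rand()

// Sketch only: the last value is remembered across calls, so every
// caller shares a single stream.
int rnd_norep(int min, int max) {
    static int prev = -1;  // assumed sentinel outside [min, max]
    int y;
    do {
        y = min + rand() % (max - min + 1);
    } while (y == prev);
    prev = y;
    return y;
}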
¹ Which actually makes things less "random", in the sense of becoming more predictable.

Return non-duplicate random values from a very large range

I would like a function that will produce k pseudo-random values from a set of n integers, zero to n-1, without repeating any previous result. k is less than or equal to n. O(n) memory is unacceptable because of the large size of n and the frequency with which I'll need to re-shuffle.
These are the methods I've considered so far:
Array:
Normally if I wanted duplicate-free random values I'd shuffle an array, but that's O(n) memory. n is likely to be too large for that to work.
long nextvalue(void) {
    static long array[4000000000];  // illustrative only: this is ~32 GB
    static long s = 0;              // long, not int: 4e9 overflows an int
    if (s == 0) {
        for (long i = 0; i < 4000000000; i++) array[i] = i;
        shuffle(array, 4000000000);  // assumed shuffle helper
    }
    return array[s++];
}
n-state PRNG:
There are a variety of random number generators that can be designed so as to have a period of n and to visit n unique states over that period. The simplest example would be:
long nextvalue(void) {
    static long s = 0;
    static const long i = 1009; // assumed co-prime to n
    s = (s + i) % n;
    return s;
}
The problem with this is that it's not necessarily easy to design a good PRNG on the fly for a given n, and it's unlikely that that PRNG will approximate a fair shuffle if it doesn't have a lot of variable parameters (even harder to design). But maybe there's a good one I don't know about.
m-bit hash:
If the size of the set is a power of two, then it's possible to devise a perfect hash function f() which performs a 1:1 mapping from any value in the range to some other value in the range, where every input produces a unique output. Using this function I could simply maintain a static counter s, and implement a generator as:
long nextvalue(void) {
    static long s = 0;
    return f(s++);
}
This isn't ideal because the order of the results is determined by f(), rather than random values, so it's subject to all the same problems as above.
NPOT hash:
In principle I can use the same design principles as above to define a version of f() which works in an arbitrary base, or even a composite, that is compatible with the range needed; but that's potentially difficult, and I'm likely to get it wrong. Instead a function can be defined for the next power of two greater than or equal to n, and used in this construction:
long nextvalue(void) {
    static long s = 0;
    long x = s++;
    do { x = f(x); } while (x >= n);
    return x;
}
But this still has the same problem (it's unlikely to give a good approximation of a fair shuffle).
Is there a better way to handle this situation? Or perhaps I just need a good function for f() that is highly parameterisable and easy to design to visit exactly n discrete states.
One thing I'm thinking of is a hash-like operation where I contrive to have the first j results perfectly random through carefully designed mapping, and then any results between j and k would simply extrapolate on that pattern (albeit in a predictable way). The value j could then be chosen to find a compromise between a fair shuffle and a tolerable memory footprint.
First of all, it seems unreasonable to discount anything that uses O(n) memory and then discuss a solution that refers to an underlying array. You have an array. Shuffle it. If that doesn't work or isn't fast enough, come back to us with a question about it.
You only need to perform a complete shuffle once. After that, draw from index n, swap that element with an element located randomly before it, and increase n, modulo the element count. With such a large dataset I'd use something like the sketch below.
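My reading of that scheme as code (a sketch, not the answerer's exact implementation; it still needs the O(n) array, as acknowledged above):
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Draw the current element, then swap it with a random element at or
// before its index. Indices past the draw position are untouched until
// drawn, so each pass emits every value exactly once, and the swaps
// amount to a fresh Fisher-Yates shuffle for the next pass.
long nextvalue(std::vector<long>& a) {
    static std::mt19937_64 gen{std::random_device{}()};
    static std::size_t s = 0;
    long result = a[s];
    std::uniform_int_distribution<std::size_t> pick(0, s);
    std::swap(a[s], a[pick(gen)]);
    s = (s + 1) % a.size();  // wrap around into the re-shuffled array
    return result;
}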
Prime numbers are an option for hashes, but probably not the same way you think. Using two Mersenne primes (low and high, perhaps 0x1FFF and 0x7FFFFFFF, i.e. 2^13-1 and 2^31-1) you should be able to come up with a much more general-purpose hashing algorithm.
size_t hash(unsigned char *value, size_t value_size, size_t low, size_t high) {
    size_t x = 0;
    while (value_size--) {
        x += *value++;   // fold each byte in
        x *= low;        // and mix with the small prime
    }
    return x % high;     // reduce by the large prime
}
// convenience wrapper so any pointer type can be passed
#define hash(value, value_size, low, high) (hash((unsigned char *) (value), value_size, low, high))
This should produce something fairly well distributed for all inputs larger than about two octets, with the minor but troublesome exception of zero-byte prefixes. You might want to treat those differently.
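A hypothetical call, just to show the intended shape (the parameters are the suggested constants above, not tested values):
unsigned long long counter = 12345;
// maps the raw bytes of 'counter' into [0, 0x7FFFFFFF)
size_t h = hash(&counter, sizeof counter, 0x1FFF, 0x7FFFFFFF);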
So... what I've ended up doing is digging deeper into pre-existing methods to try to confirm their ability to approximate a fair shuffle.
I take a simple counter, which itself is guaranteed to visit every in-range value exactly once, and then 'encrypt' it with an n-bit block cypher. More precisely, I round the range up to a power of two and apply some 1:1 function; then if the result is out of range I repeat the permutation until the result is in range.
This can be guaranteed to complete eventually because there are only a finite
number of out-of-range values within the power-of-two range, and they cannot
enter into an always-out-of-range cycle, because that would imply that something
in the cycle was mapped from two different previous states (one from the
in-range set, and another from the out-of-range set), which would make the
function not bijective.
So all I need to do is devise a parameterisable function which I can tune to an
arbitrary number of bits. Like this one:
// BITS (the bit width of the rounded-up range) and BITMASK
// ((1 << BITS) - 1) are assumed to be defined elsewhere.
uint64_t mix(uint64_t x, uint64_t k) {
    const int s0 = BITS * 4 / 5;
    const int s1 = BITS / 5 + (k & 1);
    const int s2 = BITS * 2 / 5;
    k |= 1;                     // the multiplier must be odd to be invertible
    x *= k;
    x ^= (x & BITMASK) >> s0;
    x ^= (x << s1) & BITMASK;
    x ^= (x & BITMASK) >> s2;
    x += 0x9e3779b97f4a7c15;    // golden-ratio increment
    return x & BITMASK;
}
I know it's bijective because I happen to have its inverse function handy:
uint64_t unmix(uint64_t x, uint64_t k) {
    const int s0 = BITS * 4 / 5;
    const int s1 = BITS / 5 + (k & 1);
    const int s2 = BITS * 2 / 5;
    k |= 1;
    uint64_t kp = k * k;
    while ((kp & BITMASK) > 1) {  // turn k into its multiplicative inverse mod 2^BITS
        k *= kp;
        kp *= kp;
    }
    x -= 0x9e3779b97f4a7c15;
    x ^= ((x & BITMASK) >> s2) ^ ((x & BITMASK) >> s2 * 2);
    x ^= (x << s1) ^ (x << s1 * 2) ^ (x << s1 * 3) ^ (x << s1 * 4) ^ (x << s1 * 5);
    x ^= (x & BITMASK) >> s0;
    x *= k;
    return x & BITMASK;
}
This allows me to define a simple parameterisable PRNG like this:
// ROUNDS and RANGE (the size n of the set) are assumed defined elsewhere.
uint64_t key[ROUNDS];
uint64_t seed = 0;

uint64_t rand_no_rep(void) {
    uint64_t x = seed++;
    do {  // cycle-walk: re-permute until the value lands inside the range
        for (int i = 0; i < ROUNDS; i++) x = mix(x, key[i]);
    } while (x >= RANGE);
    return x;
}
Initialise seed and key to random values and you're good to go.
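For completeness, one way that initialisation might look (a sketch; std::random_device is just an illustrative seed source, and ROUNDS/BITMASK are the assumed macros from above):
#include <random>

void init_rand_no_rep(void) {
    std::random_device rd;
    for (int i = 0; i < ROUNDS; i++)
        key[i] = ((uint64_t) rd() << 32) | rd();        // 64 random bits per round key
    seed = (((uint64_t) rd() << 32) | rd()) & BITMASK;  // any in-domain start
}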
Using the inverse function lets me determine what seed must be to force rand_no_rep() to produce a given output, which makes it much easier to test.
So far I've checked the cases where a constant a is followed by a constant b. For ROUNDS==1, pairs collide on exactly 50% of the keys (and each pair of collisions is with a different pair of a and b; they don't all converge on 0, 1 or whatever). That is, across the various k, a specific a-followed-by-b case occurs for more than one k (this must happen at least once). Subsequent values do not collide in that case, so different keys aren't falling into the same cycle at different positions. Every k gives a unique cycle.
The 50% collision figure comes from 25% of entries being non-unique as they're added to the list (count the entry itself, and count the one it ran into). That might sound bad, but it's actually lower than birthday-paradox logic would suggest: selecting truly at random, the proportion of new entries that fail to be unique looks to converge to between 36% and 37%. Being "better than random" is obviously worse than random, as far as randomness goes, but that's why they're called pseudo-random numbers.
Extending that to ROUNDS==2, we want to make sure that a second round doesn't
cancel out or simply repeat the effects of the first.
This is important because it would mean that multiple rounds are a waste of
time and memory, and that the function cannot be parameterised to any
substantial degree. It could happen trivially if mix() contained all linear
operations (say, multiply and add, mod RANGE). In that case all of the
parameters could be multiplied/added together to produce a single parameter for
a single round that would have the same effect. That would be disappointing,
as it would reduce the number of attainable permutations to the size of just
that one parameter, and if the set is as small as that then more work would be
needed to ensure that it's a good, representative set.
So what we want to see from two rounds is a large set of outcomes that could
never be achieved by one round. One way to demonstrate this is to look for the
original b-follows-a cases with an additional parameter c, where we want
to see every possible c following a and b.
We know from the one-round testing that in 50% of cases there is only one c
that can follow a and b because there is only one k that places b
immediately after a. We also know that 25% of the pairs of a and b were
unreachable (being the gap left behind by half the pairs that went into
collisions rather than new unique values), and the last 25% appear for two
different k.
The result that I get is that given a free choice of both keys, it's possible to find about five eighths of the values of c following a given a and b. About a quarter of the a/b pairs are unreachable (it's less predictable now, because of the potential intermediate mappings into or out of the duplicate or unreachable cases), and a quarter have a, b, and c appear together in two sequences (which diverge afterwards).
I think there's a lot to be inferred from the difference between one round and
two, but I could be wrong about that and I need to double-check. Further
testing gets harder; or at least slower unless I think more carefully about how
I'm going to do it.
I haven't yet demonstrated that, amongst the set of permutations it can produce, they're all equally likely; but this is normally not guaranteed for any other PRNG either.
It's fairly slow for a PRNG, but it would fit SIMD trivially.

Sum the odd positioned and the even positioned integers in an array

What is the most elegant way to sum 'each number on an odd position' with 'each number on an even position multiplied by 3'? I must abide by this prototype:
int computeCheckSum(const int* d)
My first try was to use this, but my idea was flawed: I can't find a way to tell which element is at an even position this way.
int sum = 0;
for_each(d,
         d + 11,
         [&sum](const int& i){ sum += (i % 2 == 1) ? 3 * i : i; }
);
Example:
1 2 3 4 5
1 + 2*3 + 3 + 4*3 + 5 = 27
"I can't find a way to tell which element is at an even position this way."
If you insist on using for_each (there's no reason to do that here), then you track the index separately:
int computeCheckSum(const int* d, int count)
{
    int sum = 0;
    int pos = 1;
    std::for_each(d, d + count,
        [&sum, &pos](const int& value) { sum += pos++ % 2 ? value : value * 3; });
    return sum;
}
Note I added a count parameter, so the function can work on arrays of any length. If you're feeling really perverse, you can remove that parameter and go back to hardcoding the length, so the function only works on arrays of exactly 11 elements. But if you hope to be good at this some day, doing that should make you feel gross.
These things rarely become very "elegant" in C++ (it seems C++ is asymptotically approaching Perl on the "line noise" index) but since accumulate is a left fold, you can pass the index "along the fold":
int sum = std::accumulate(d,
                          d + 11,
                          std::make_pair(0, 0), // (index, result)
                          [](std::pair<int, int> r, int x) {
                              // even 0-based index = odd 1-based position
                              r.second += r.first % 2 ? 3 * x : x;
                              r.first++;
                              return r;
                          }).second;
You were right. As Mud said, it was just a terrible function design. This is what I needed.
int computeCheckSum(){
    int sum = 0;
    bool multiplyBy3 = false;
    for (auto i : m_digits){
        sum += multiplyBy3 ? 3*i : i;
        multiplyBy3 = !multiplyBy3;
    }
    return sum;
}
Mud's solution is correct using my flawed design. A simple for loop would probably be an even better solution, as everyone said.
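For reference, a sketch of that plain loop (assuming the same m_digits member as the accepted version above):
int computeCheckSum() {
    int sum = 0;
    for (std::size_t i = 0; i < m_digits.size(); ++i)
        sum += (i % 2 == 0) ? m_digits[i] : 3 * m_digits[i];  // even index = odd position
    return sum;
}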

How to trace error with counter in do while loop in C++?

I am trying to use i to step through an array of numbers, take the smaller of a pair of them, store it in a variable, and then compare it with another variable that likewise comes from two other numbers (like 2, -3).
There is something wrong in the way I implement the do-while loop. I need the counter 'i' to be updated twice per iteration, so that I end up with 2 new variables from 4 compared numbers. When I hard-code it as n-1, n-2 it works, but with the loop it gets stuck at one value.
int i=0;
int closestDistance=0;
int distance=0;
int nextDistance=0;
do
{
    distance = std::min(values[n],values[n-i]); //returns the largest
    distance=abs(distance);
    i++;
    nextDistance=std::min(values[n],values[n-i]);
    nextDistance=abs(closestDistance); //make it positive then comp
    if(distance<nextDistance)
        closestDistance=distance;//+temp;
    else
        closestDistance=nextDistance;
    i++;
}
while(i<n);
return closestDistance;
Maybe this:
int i = 0;
int m = std::numeric_limits<int>::max();  // needs <limits>
do {
    int lMin = std::min(values[i], values[i + 1]);
    i += 2;
    int rMin = std::min(values[i], values[i + 1]);
    m = std::min(m, std::min(lMin, rMin));  // keep the smallest seen so far
    i += 2;
} while (i < n);
return m;
I didn't understand exactly what you meant, but this compares the values in values four at a time to find the minimum. Is that all you needed?
Note that if n is the size of values, this would go out of bounds. n would have to be the size minus 4, leading to odd exceptional cases.
The issue with your code may be in the call to abs. Are all the values positive? Are you trying to find the smallest absolute value?
Also, note that using i += 2 twice ensures that you do not repeat any values. This means that you will go over 4 unique values. Your code goes through 3 in each iteration of the loop.
I hope this clarifies things.
What are you trying to do in the following lines?
nextDistance=std::min(values[n],values[n-i]);
nextDistance=abs(closestDistance); //make it positive , then computed

C++ Adding big numbers together with operator overload

I am new to C++ and attempting to create a "BigInt" class. I decided to base most of the implementation on reading the numbers into vectors.
So far I have only written the constructor that builds the number from an input string.
Largenum::Largenum(std::string input)
{
    for (std::string::const_iterator it = input.begin(); it != input.end(); ++it)
    {
        number.push_back(*it - '0');  // store each decimal digit as an int
    }
}
The problem I am having is with the addition function. I have created a function which seems to work after I tested it a few times, but as you can see it's highly inefficient. I have 2 different vectors, such as:
std::vector<int> x = {1,3,4,5,9,1};
std::vector<int> y = {2,4,5,6};
The way I thought to solve this problem was to add 0s before the shorter vector (in this case y) to make both vectors the same size, such as:
x = {1,3,4,5,9,1};
y = {0,0,2,4,5,6};
Then to add them using elementary style addition.
I don't want to insert 0s one at a time at the front of vector y, as that would be slow for a large number. My current solution is to reverse the vector, then push_back the appropriate number of 0s, then reverse it back. This may be slower than simply inserting at the front, it seems; I have not tested it yet.
The problem is that after I do all of the addition on the vectors and push_back the result, I am left with a backward vector and I need to use reverse yet again! There has got to be a much better way than my method, but I am stuck on finding it. Ideally I would make A const as well. Here is the code of the function:
Largenum Largenum::operator+(Largenum &A)
{
    bool carry = 0;
    Largenum sum;
    std::vector<int>::size_type max = std::max(A.number.size(), this->number.size());
    // compute the length difference directly; subtracting unsigned sizes
    // and calling std::abs on the result is not safe
    std::vector<int>::size_type diff = max - std::min(A.number.size(), this->number.size());
    if (A.number.size() > this->number.size())
    {
        std::reverse(this->number.begin(), this->number.end());
        for (std::vector<int>::size_type i = 0; i < diff; ++i) this->number.push_back(0);
        std::reverse(this->number.begin(), this->number.end());
    }
    else if (this->number.size() > A.number.size())
    {
        std::reverse(A.number.begin(), A.number.end());
        for (std::vector<int>::size_type i = 0; i < diff; ++i) A.number.push_back(0);
        std::reverse(A.number.begin(), A.number.end());
    }
    for (std::vector<int>::size_type i = max; i != 0; --i)
    {
        int num = (A.number[i-1] + this->number[i-1] + carry) % 10;
        sum.number.push_back(num);
        carry = (A.number[i-1] + this->number[i-1] + carry >= 10) ? 1 : 0;
    }
    if (carry) sum.number.push_back(1);
    std::reverse(sum.number.begin(), sum.number.end());
    return sum;
}
If anyone has any input that would be great, this is my first program using classes in C++ and its fairly overwhelming.
I think your function is quite close to the most optimal one I have seen. Still, here are a few suggestions on how to improve it:
The decimal numeral system is quite inefficient: you have a lot of digits for big numbers. Better to use a higher base to reduce the number of digits you have to add. Reading and writing such numbers in human-readable form will be a bit harder, but you will speed up the operations severalfold, because you will have far fewer digits.
When implementing big integers I represent them in reverse order, so that the least significant digit sits at index 0 and the most significant one at the end of the array. This way, when a carry forces you to add a new digit, you only perform a push_back, not a whole reverse; a sketch follows.
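A minimal sketch of that reversed layout (reusing the Largenum/number names from the question, made const-correct as the asker wanted; base 10 is kept for clarity, though the higher-base advice above still applies):
// Assumes number[0] is the least significant digit in both operands.
Largenum Largenum::operator+(const Largenum& A) const
{
    Largenum sum;
    int carry = 0;
    std::size_t n = std::max(number.size(), A.number.size());
    for (std::size_t i = 0; i < n; ++i) {
        int d = carry;
        if (i < number.size())   d += number[i];
        if (i < A.number.size()) d += A.number[i];
        carry = d / 10;
        sum.number.push_back(d % 10);  // digits come out already in order
    }
    if (carry) sum.number.push_back(carry);  // no final reverse needed
    return sum;
}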
One issue: integer modulus is pretty slow on modern processors, even compared to branch misprediction. Rather than doing an explicit %10, try this for your third for-loop:
int num = A.number[i-1] + this->number[i-1] + carry;
if (num >= 10)  // num is at most 9 + 9 + 1 = 19, so one subtraction suffices
{
    carry = 1;
    num -= 10;
}
else
{
    carry = 0;
}
sum.number.push_back(num);

Random Number Generator (rand) isn't Random?

I'm generating mazes. For the starting position, I can only pick the first or last column, and I can only choose an even row (if you start the row index at 1). I have the correct logic, except that the maze start position isn't random: if I generate 50 mazes that are all 20x20, all the starting positions will be the same. Any help will be appreciated. Thanks.
void Maze::generatePath()
{
    int startingColumn = -1;
    int startingRow = -1;
    srand (time(NULL));
    int side = rand() % 2;
    startingColumn = (width - 1) * side; // will get first or last column
    int row = rand() % height; // 0 -> height - 1
    if(row % 2 == 0) // even, add one, or subtract if last row
    {
        if(row + 1 >= height)
            startingRow = row - 1;
        else
            startingRow = row + 1;
    }
    else
    {
        startingRow = row; // odd, keep it
    }
    grid[startingRow][startingColumn] = ' '; // starting character is blank
}
I call this method every time I generate a new maze. This code is to get the starting position. I haven't written the rest.
Only call srand once when your program starts. By calling it over and over in the same second, you keep resetting the random number generator to the same state.
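A minimal sketch of the fix; with C++11 you can also drop rand()/srand() entirely for <random>, seeding exactly once (the names here are illustrative):
#include <random>

// Seeded exactly once, on first use, then reused everywhere.
std::mt19937& rng() {
    static std::mt19937 gen{std::random_device{}()};
    return gen;
}

int randomInt(int min, int max) {
    std::uniform_int_distribution<int> dist(min, max);
    return dist(rng());
}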
If you need better randomness, you might use something better than time(NULL) as the random seed.
It could be, for example, the /dev/random (or, more practically, /dev/urandom) device on Unix-like systems.
For really hardcore cases, some real randomness based on physical phenomena might be desired.
For instance this one: http://photonics.anu.edu.au/qoptics/Research/qrng.php