I was wondering if there is an algorithm to generate all the possible combinations of a binary number where only bits in certain positions can change. For example, take the following bit pattern, where only the bits marked x can change (this example has 8 positions that can change, for a total of 2^8 combinations):
x00x0000x000000x00x00x000x000x
One solution is to look at the number as an 8-bit number in the first place and just calculate all the combinations for xxxxxxxx.
However, this doesn't quite satisfy my needs, as I want to use the number in a linear feedback shift register (LFSR) later on. Currently, I'm looking for an answer that utilizes std::bitset.
So, there is already an answer, but it is just dumped code without any explanation. Not good. Anyway...
I would like to add an answer with a different approach, and explain the steps.
Basically, if you want to have all combinations of a binary number, then you can simply "count" or "increment by one". Example for a 3-bit value: decimal 0, 1, 2, 3, 4, 5, 6, 7 is binary 000, 001, 010, 011, 100, 101, 110, 111. You can see that it is simple counting.
If we think far back to school days, where we learned Boolean algebra and a little bit of automata theory, then we remember how this counting operation is done at a low level. We always flip the least significant bit and, if there is a transition from 1 to 0, then we basically had an overflow and must also flip the next bit. That's the principle of a binary adder. In our example we always want to add 1. So, add 1 to 0, the result is 1, and there is no overflow. But add 1 to 1, the result is 0, and we have an overflow and must add 1 to the next bit. This effectively flips the next bit, and so on and so on.
The advantage of this method is that we do not usually need to operate on all bits. On average fewer than two bits are flipped per increment, so the amortized cost per step is constant rather than O(n).
Additional advantage: it fits very well with your request to use a std::bitset.
A third advantage, and maybe not that obvious: you can decouple the task of calculating the next combination from the rest of your program. There is no need to entangle your real task with such a function. That is also the reason why std::next_permutation is implemented this way.
And the algorithm described above works for all values; no sorting or anything else is necessary.
That part covers the algorithm you asked for.
The next part addresses your requirement that only certain bits may change. Of course, we need to specify these bits, and because you are working with a std::bitset, masking is not the right tool here. The better approach is to use indices, i.e., the bit positions of the bits that are allowed to change.
Then we can use the algorithm described above with just one additional indirection: instead of bits[pos], we use bits[index[pos]].
The indices can easily be stored in a std::vector using an initializer list. We can also derive the indices vector from a string or whatever; I used a std::string as an example.
All the above will result in some short / compact code, with just a few lines and is easy to understand. I also added some driver code that makes use of this function.
Please see:
#include <iostream>
#include <vector>
#include <string>
#include <bitset>
#include <algorithm>
#include <cassert>
constexpr size_t BitSetSize = 32U;
void nextCombination(std::bitset<BitSetSize>& bits, const std::vector<size_t>& indices) {
for (size_t i{}; i < indices.size(); ++i) {
// Get the current index and check that it is valid
if (const size_t pos = indices[i]; pos < BitSetSize) {
// Flip the bit at this position
bits[pos].flip();
// If the bit is now 1 (a 0 -> 1 transition), there was no carry: stop
// If it went 1 -> 0, we had an overflow and must also flip the next bit
if (bits.test(pos))
break;
}
}
}
// Some driver code
int main() {
// Use any kind of mechanism to indicate which index should be changed or not
std::string mask{ "x00x0000x000000x00x00x000x000x" };
// Here, we will store the indices
std::vector<size_t> index{};
// Populate the indices vector from the string
std::for_each(mask.crbegin(), mask.crend(), [&, i = 0U](const char c) mutable {if ('x' == c) index.push_back(i); ++i; });
// The bitset, for which we want to calculate the combinations
std::bitset<BitSetSize> bits(0);
// Play around
for (size_t combination{}; combination < (1ULL << index.size()); ++combination) {
// This is the do something
std::cout << bits.to_string() << '\n';
// Calculate the next combination
nextCombination(bits, index);
}
return 0;
}
This software has been compiled with MSVC 19 Community Edition using C++17.
If you should have additional questions or need more clarification, then I am happy to answer.
The integers that satisfy the pattern can be enumerated by iterating with a "masked increment", which increments the variable bits but leaves the fixed bits the same. For convenience I will assume that the "fixed bits" are zero, but if they weren't it would still work with minor changes. mask is 1 for the fixed bits and 0 for the variable bits.
uint32_t x = 0;
do {
// use x
...
// masked increment
x = ((x | mask) + 1) & ~mask;
} while (x != 0);
x | mask sets the fixed bits, so that the carry will "go through" the fixed bits. +1 increments the variable bits. &~mask cleans up the extra bits that were set, turning the fixed bits back into zeroes.
std::bitset cannot be incremented so it is difficult to use directly, but integers can be converted to std::bitset if necessary.
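To illustrate, here is a minimal sketch combining the masked increment with std::bitset for output, deriving mask from the question's pattern string (everything outside the pattern is treated as fixed, so the loop terminates after exactly 2^8 steps):

#include <bitset>
#include <cstdint>
#include <iostream>
#include <string>

int main() {
    const std::string pattern = "x00x0000x000000x00x00x000x000x";
    // Build the mask: 1 for fixed bits, 0 for the variable 'x' bits.
    // Bits above the pattern length are also treated as fixed.
    uint32_t mask = ~0u;
    for (std::size_t i = 0; i < pattern.size(); ++i)
        if (pattern[pattern.size() - 1 - i] == 'x') mask &= ~(1u << i);
    uint32_t x = 0;
    do {
        std::cout << std::bitset<32>(x).to_string() << '\n'; // use x
        x = ((x | mask) + 1) & ~mask;                        // masked increment
    } while (x != 0);
}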
Something like this
// sample indexes
static const int indexes[8] = { 0, 4, 8, 11, 13, 16, 22, 25 };
std::bitset<32> clear_bit_n(std::bitset<32> number, int n)
{
return number.reset(indexes[n]);
}
std::bitset<32> set_bit_n(std::bitset<32> number, int n)
{
return number.set(indexes[n]);
}
void all_combinations(std::bitset<32> number, int n)
{
if (n == 8)
{
// do something with number
}
else
{
all_combinations(clear_bit_n(number, n), n + 1);
all_combinations(set_bit_n(number, n), n + 1);
}
}
all_combinations(std::bitset<32>(), 0);
Ok, I really don't know how to frame the question properly because I barely have any idea how to describe what I want in one sentence, and I apologize.
Let me get straight to the point, and you can just skip the rest; I only want to show that I've tried something and am not coming here to ask a question on a whim.
I need an algorithm that produces 6 random numbers, where the same number may not appear more than 2 times in a row in the sequence.
example: 3 3 4 4 2 1
^FINE.
example: 3 3 3 4 4 2
^NO! NO! WRONG!
Obviously, I have no idea how to do this without tripping over myself constantly.
Is there an STL or Boost feature that can do this? Or maybe someone here knows how to concoct an algorithm for it. That would be awesome.
What I'm trying to do and what I've tried (the part you can skip)
This is in C++. I'm trying to make a Panel de Pon/Tetris Attack/Puzzle League whatever clone for practice. The game has a 6 block row and 3 or more matching blocks will destroy the blocks. Here's a video in case you're not familiar.
When a new row comes from the bottom, it must not come out with 3 horizontal matching blocks, or else they will automatically disappear; that is something I do not want horizontally. Vertical is fine, though.
I've tried to accomplish just that, and it appears I can't get it right. When I start the game, chunks of blocks are missing because it detects a match when it shouldn't. My method is more than likely heavy-handed and too convoluted, as you'll see.
enum BlockType {EMPTY, STAR, UP_TRIANGLE, DOWN_TRIANGLE, CIRCLE, HEART, DIAMOND};
vector<Block> BlockField::ConstructRow()
{
vector<Block> row;
int type = (rand() % 6)+1;
for (int i=0;i<6;i++)
{
row.push_back(Block(type));
type = (rand() % 6) +1;
}
// must be in order from last to first of the enumeration
RowCheck(row, diamond_match);
RowCheck(row, heart_match);
RowCheck(row, circle_match);
RowCheck(row, downtriangle_match);
RowCheck(row, uptriangle_match);
RowCheck(row, star_match);
return row;
}
void BlockField::RowCheck(vector<Block> &row, Block blockCheckArray[3])
{
vector<Block>::iterator block1 = row.begin();
vector<Block>::iterator block2 = row.begin()+1;
vector<Block>::iterator block3 = row.begin()+2;
vector<Block>::iterator block4 = row.begin()+3;
vector<Block>::iterator block5 = row.begin()+4;
vector<Block>::iterator block6 = row.begin()+5;
int bt1 = (*block1).BlockType();
int bt2 = (*block2).BlockType();
int bt3 = (*block3).BlockType();
int bt4 = (*block4).BlockType();
int type = 0;
if (equal(block1, block4, blockCheckArray))
{
type = bt1 - 1;
if (type <= 0) type = 6;
(*block1).AssignBlockType(type);
}
else if (equal(block2, block5, blockCheckArray))
{
type = bt2 - 1;
if (type <= 0) type = 6;
(*block2).AssignBlockType(type);
}
else if (equal(block3, block6, blockCheckArray))
{
type = bt3 - 1;
if (type == bt3) type--;
if (type <= 0) type = 6;
(*block3).AssignBlockType(type);
}
else if (equal(block4, row.end(), blockCheckArray))
{
type = bt4 - 1;
if (type == bt3) type--;
if (type <= 0) type = 6;
(*block4).AssignBlockType(type);
}
}
Sigh, I'm not sure if it helps to show this... At least it shows that I've tried something.
Basically, I construct the row by assigning random block types, described by the BlockType enum, to a Block object's constructor(a Block object has blockType and a position).
Then I use a RowCheck function to see if there are 3 consecutive blockTypes in one row, and I have to do this for all block types. The *_match variables are arrays of 3 Block objects with the same block type. If I do find 3 consecutive block types, I simply subtract one from the first value. However, doing that might inadvertently produce another 3-match, so I make sure the block types go in order from greatest to least.
Ok, it's crappy, it's convoluted and it doesn't work! That's why I need your help.
It should suffice to keep a record of the previous two values, and loop when the newly generated one matches both of them.
For an arbitrary run length, it would make sense to size a history buffer on the fly and do the comparisons in a loop as well. But this should be close to matching your requirements.
int type, type_old, type_older;
type_older = (rand() % 6)+1;
row.push_back(Block(type_older));
type_old = (rand() % 6)+1;
row.push_back(Block(type_old));
for (int i=2; i<6; i++)
{
type = (rand() % 6) +1;
while ((type == type_old) && (type == type_older)) {
type = (rand() % 6) +1;
}
row.push_back(Block(type));
type_older = type_old;
type_old = type;
}
Idea no 1.
while(sequence doesn't satisfy you)
generate a new sequence
Idea no 2.
Precalculate all allowable sequences (there are 42,150 of them)
randomly choose an index and take that element.
The second idea requires much memory, but it is fast. The first one isn't slow either, because there is a veeery small probability that your while loop will iterate more than once or twice. HTH
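A minimal sketch of idea no 1 (the function name is mine, and rand() is assumed to be seeded elsewhere):

#include <cstdlib>
#include <vector>

// Regenerate the whole row until no value occurs three times in a row.
std::vector<int> makeRow() {
    std::vector<int> row(6);
    bool ok;
    do {
        ok = true;
        for (int &v : row) v = rand() % 6 + 1;
        for (int i = 2; i < 6; ++i)
            if (row[i] == row[i - 1] && row[i] == row[i - 2]) { ok = false; break; }
    } while (!ok);
    return row;
}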
Most solutions seen so far involve a potentially infinite loop. May I suggest a different approach?
// generates a random number between 1 and 6
// but never the same number three times in a row
int dice()
{
static int a = -2;
static int b = -1;
int c;
if (a != b)
{
// last two were different, pick any of the 6 numbers
c = rand() % 6 + 1;
}
else
{
// last two were equal, so we need to choose from 5 numbers only
c = rand() % 5 + 1;
// prevent the same number from being generated again
if (c == b) c = 6;
}
a = b;
b = c;
return c;
}
The interesting part is the else block. If the last two numbers were equal, there are only 5 different numbers to choose from, so I use rand() % 5 + 1 instead of rand() % 6 + 1. This call could still produce the same number as before, and it can never produce a 6, so I simply map that colliding value to 6.
Solution with a simple do-while loop (good enough for most cases):
vector<Block> row;
int type = (rand() % 6) + 1, new_type;
int repetition = 0;
for (int i = 0; i < 6; i++)
{
row.push_back(Block(type));
do {
new_type = (rand() % 6) + 1;
} while (repetition == MAX_REPETITION && new_type == type);
repetition = new_type == type ? repetition + 1 : 0;
type = new_type;
}
Solution without a loop (for those who dislike the non-deterministic nature of the previous solution):
vector<Block> row;
int type = (rand() % 6) + 1, new_type;
int repetition = 0;
for (int i = 0; i < 6; i++)
{
row.push_back(Block(type));
if (repetition != MAX_REPETITION)
new_type = (rand() % 6) + 1;
else
{
new_type = (rand() % 5) + 1;
if (new_type >= type)
new_type++;
}
repetition = new_type == type ? repetition + 1 : 0;
type = new_type;
}
In both solutions MAX_REPETITION is equal to 1 for your case.
How about initializing a six-element array to [1, 2, 3, 4, 5, 6] and randomly interchanging the elements for a while? That is guaranteed to have no duplicates.
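A minimal sketch of that idea, using std::shuffle (std::random_shuffle is deprecated in newer C++):

#include <algorithm>
#include <iterator>
#include <random>

int main() {
    int row[6] = {1, 2, 3, 4, 5, 6};
    std::mt19937 gen{std::random_device{}()};
    std::shuffle(std::begin(row), std::end(row), gen);
    // row is now a random permutation: every value appears exactly once,
    // so no value can repeat at all, let alone three times.
}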
Lots of answers say "once you detect Xs in a row, recalculate the last one until you don't get an X".... In practice for a game like this, that approach is millions of times faster than you need for "real-time" human interaction, so just do it!
But, you're obviously uncomfortable with it and looking for something more inherently "bounded" and elegant. So, given you're generating numbers from 1..6, when you detect 2 Xs you already know the next one could be a duplicate, so there are only 5 valid values: generate a random number from 1 to 5, and if it's >= X, increment it by one more.
That works a bit like this:
1..6 -> 3
1..6 -> 3
"oh no, we've got two 3s in a row"
1..5 -> ?
< "X"/3 i.e. 1, 2 use as is
>= "X" 3, 4, 5, add 1 to produce 4, 5 or 6.
Then you know the last two elements differ... the latter would take up the first spot when you resume checking for 2 elements in a row....
vector<BlockType> constructRow()
{
vector<BlockType> row;
row.push_back(STAR); row.push_back(STAR);
row.push_back(UP_TRIANGLE); row.push_back(UP_TRIANGLE);
row.push_back(DOWN_TRIANGLE); row.push_back(DOWN_TRIANGLE);
row.push_back(CIRCLE); row.push_back(CIRCLE);
row.push_back(HEART); row.push_back(HEART);
row.push_back(DIAMOND); row.push_back(DIAMOND);
do
{
random_shuffle(row.begin(), row.end());
}while(rowCheckFails(row));
return row;
}
The idea is to use random_shuffle() here. You need to implement rowCheckFails() to test that the requirement is satisfied.
EDIT
I may not understand your requirement properly. That's why I've put 2 of each block type in the row. You may need to put more.
I think you would be better served to hide your random number generation behind a method or function. It could be a method or function that returns three random numbers at once, making sure that there are at least two distinct numbers in your output. It could also be a stream generator that makes sure that it never outputs three identical numbers in a row.
#include <array>
#include <cstdlib>

// the original was pseudocode; here it is as C++
std::array<int, 3> get_random() {
    std::array<int, 3> ret;
    ret[0] = rand() % 6 + 1;
    ret[1] = rand() % 6 + 1;
    ret[2] = rand() % 6 + 1;
    if (ret[0] == ret[1] && ret[1] == ret[2]) {
        int replacement;
        do {
            replacement = rand() % 6 + 1;
        } while (replacement == ret[0]);
        ret[rand() % 3] = replacement;
    }
    return ret;
}
If you wanted six random numbers (it's a little difficult for me to tell, and the video was just baffling :) then it'll be a little more effort to generate the if condition:
for (int i=0; i<4; i++) {
if (ret[i] == ret[i+1] && ret[i+1] == ret[i+2])
/* three in a row */
If you always change ret[1] (the middle of the three) you'll never have three-in-a-row as a result of the change, but the output won't be random either: X Y X will happen more often than X X Y because it can happen by random chance and by being forced in the event of X X X.
First some comments on the above solutions.
There is nothing wrong with the techniques that involve rejecting a random value if it isn't satisfactory. This is an example of rejection sampling, a widely used technique. For example, several algorithms for generating a random Gaussian involve rejection sampling. One, the polar rejection method, involves repeatedly drawing a pair of numbers from U(-1,1) until the pair is non-zero and lies inside the unit circle. This throws out over 21% of the pairs. After finding a satisfactory pair, a simple transformation yields a pair of Gaussian deviates. (The polar rejection method is now falling out of favor, being replaced by the ziggurat algorithm. That too uses rejection sampling.)
There is something very much wrong with rand() % 6. Don't do this. Ever. The low order bits from a random number generator, even a good random number generator, are not quite as "random" as are the high order bits.
There is something very much wrong with rand(), period. Most compiler writers apparently don't know beans about producing random numbers. Don't use rand().
Now a solution that uses the Boost random number library:
vector<Block> BlockField::ConstructRow(
unsigned int max_run) // Maximum number of consecutive duplicates allowed
{
// The Mersenne Twister produces high quality random numbers ...
// (static, so repeated calls continue the sequence instead of restarting it)
static boost::mt19937 rng;
// ... but we want numbers between 1 and 6 ...
boost::uniform_int<> six(1,6);
// ... so we need to glue the rng to our desired output.
boost::variate_generator<boost::mt19937&, boost::uniform_int<> >
roll_die(rng, six);
vector<Block> row;
int prev = 0;
int run_length = 0;
for (int ii=0; ii<6; ++ii) {
int next;
do {
next = roll_die();
run_length = (next == prev) ? run_length+1 : 0;
} while (run_length > max_run);
row.push_back(Block(next));
prev = next;
}
return row;
}
I know that this already has many answers, but a thought just occurred to me. You could have 7 arrays, one with all 6 digits, and one for each missing a given digit. Like this:
int v[7][6] = {
{1, 2, 3, 4, 5, 6 },
{2, 3, 4, 5, 6, 0 }, // zeros in here to make the code simpler,
{1, 3, 4, 5, 6, 0 }, // they are never used
{1, 2, 4, 5, 6, 0 },
{1, 2, 3, 5, 6, 0 },
{1, 2, 3, 4, 6, 0 },
{1, 2, 3, 4, 5, 0 }
};
Then you can have a 2 level history. Finally to generate a number, if your match history is less than the max, shuffle v[0] and take v[0][0]. Otherwise, shuffle the first 5 values from v[n] and take v[n][0]. Something like this:
#include <algorithm>
int generate() {
static int prev = -1;
static int repeat_count = 1;
static int v[7][6] = {
{1, 2, 3, 4, 5, 6 },
{2, 3, 4, 5, 6, 0 }, // zeros in here to make the code simpler,
{1, 3, 4, 5, 6, 0 }, // they are never used
{1, 2, 4, 5, 6, 0 },
{1, 2, 3, 5, 6, 0 },
{1, 2, 3, 4, 6, 0 },
{1, 2, 3, 4, 5, 0 }
};
int r;
if(repeat_count < 2) {
std::random_shuffle(v[0], v[0] + 6);
r = v[0][0];
} else {
std::random_shuffle(v[prev], v[prev] + 5);
r = v[prev][0];
}
if(r == prev) {
++repeat_count;
} else {
repeat_count = 1;
}
prev = r;
return r;
}
This should result in good randomness (not reliant on rand() % N), no infinite loops, and it should be fairly efficient given the small number of values that we shuffle each time.
Note that, due to the use of statics, this is not thread safe. That may be fine for your usage; if it is not, then you probably want to wrap this up in an object, each with its own state.
Duplicate:
Unique random numbers in O(1)?
I want a pseudo-random number generator that can generate numbers with no repeats in a random order.
For example:
random(10)
might return
5, 9, 1, 4, 2, 8, 3, 7, 6, 10
Is there a better way to do it other than making the range of numbers and shuffling them about, or checking the generated list for repeats?
Edit:
Also, I want it to be efficient at generating big numbers, without materializing the entire range.
Edit:
I see everyone suggesting shuffle algorithms. But if I want to generate large random numbers (1024 bytes+), then that method would take a lot more memory than if I just used a regular RNG and inserted numbers into a set until I had a specified amount, right? Is there no better mathematical algorithm for this?
You may be interested in a linear feedback shift register.
We used to build these out of hardware, but I've also done them in software. It uses a shift register with some of the bits xor'ed and fed back to the input, and if you pick just the right "taps" you can get a sequence that's as long as the register size. That is, a 16-bit lfsr can produce a sequence 65535 long with no repeats. It's statistically random but of course eminently repeatable. Also, if it's done wrong, you can get some embarrassingly short sequences. If you look up the lfsr, you will find examples of how to construct them properly (which is to say, "maximal length").
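To make that concrete, here is a small sketch of a 16-bit maximal-length Fibonacci LFSR in C++; the taps (16, 14, 13, 11) are one of the published maximal-length choices, so from any non-zero seed it visits all 65535 non-zero states before repeating:

#include <cstdint>

// One step of a 16-bit maximal-length Fibonacci LFSR
// (polynomial x^16 + x^14 + x^13 + x^11 + 1).
uint16_t lfsr_next(uint16_t s) {
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

// Usage: visit all 65535 non-zero states exactly once.
// uint16_t s = 0xACE1u;                       // any non-zero seed
// do { /* use s */ s = lfsr_next(s); } while (s != 0xACE1u);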
A shuffle is a perfectly good way to do this (provided you do not introduce a bias using the naive algorithm). See Fisher-Yates shuffle.
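For instance, a minimal sketch of random(n) done as a shuffle (std::shuffle performs an unbiased Fisher-Yates internally):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Returns 1..n in a random order, each exactly once.
std::vector<int> random_permutation(int n) {
    std::vector<int> v(n);
    std::iota(v.begin(), v.end(), 1); // 1, 2, ..., n
    static std::mt19937 gen{std::random_device{}()};
    std::shuffle(v.begin(), v.end(), gen);
    return v;
}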
If a random number is guaranteed never to repeat, it is no longer random, and the amount of randomness decreases as the numbers are generated (after nine numbers, random(10) is rather predictable, and even after only eight you have a 50-50 chance).
I understand you don't want a shuffle for large ranges, since you'd have to store the whole list to do so.
Instead, use a reversible pseudo-random hash. Then feed in the values 0 1 2 3 4 5 6 etc in turn.
There are infinite numbers of hashes like this. They're not too hard to generate if they're restricted to a power of 2, but any base can be used.
Here's one that would work for example if you wanted to go through all 2^32 32 bit values. It's easiest to write because the implicit mod 2^32 of integer math works to your advantage in this case.
unsigned int reversableHash(unsigned int x)
{
x*=0xDEADBEEF;
x=x^(x>>17);
x*=0x01234567;
x+=0x88776655;
x=x^(x>>4);
x=x^(x>>9);
x*=0x91827363;
x=x^(x>>7);
x=x^(x>>11);
x=x^(x>>20);
x*=0x77773333;
return x;
}
If you don't mind mediocre randomness properties, and if the number of elements allows it, then you could use a linear congruential random number generator.
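For example, under the Hull-Dobell conditions (c coprime to m; a - 1 divisible by every prime factor of m, and by 4 if m is divisible by 4), an LCG has full period and therefore visits every value in [0, m) exactly once before repeating. A toy sketch with well-known constants:

#include <cstdint>

// Full-period LCG: m = 2^32, a = 1664525, c = 1013904223
// (the Numerical Recipes constants). Starting from any seed,
// x takes every 32-bit value exactly once before repeating.
uint32_t lcg_next(uint32_t x) {
    return 1664525u * x + 1013904223u; // mod 2^32 is implicit
}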
A shuffle is the best you can do for random numbers in a specific range with no repeats. The reason that the method you describe (randomly generate numbers and put them in a Set until you reach a specified length) is less efficient is because of duplicates. Theoretically, that algorithm might never finish. At best it will finish in an indeterminable amount of time, as compared to a shuffle, which will always run in a highly predictable amount of time.
Response to edits and comments:
If, as you indicate in the comments, the range of numbers is very large and you want to select relatively few of them at random with no repeats, then the likelihood of repeats diminishes rapidly. The bigger the difference in size between the range and the number of selections, the smaller the likelihood of repeat selections, and the better the performance will be for the select-and-check algorithm you describe in the question.
What about using a GUID generator (like the one in .NET)? Granted, it is not guaranteed that there will be no duplicates, but the chance of getting one is pretty low.
This has been asked before - see my answer to the previous question. In a nutshell: You can use a block cipher to generate a secure (random) permutation over any range you want, without having to store the entire permutation at any point.
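As a sketch of that idea (not the linked answer's exact construction): any Feistel network is a bijection regardless of the round function, and "cycle walking" restricts it to an arbitrary range. The constants here are arbitrary, and this toy version is in no way secure:

#include <cstdint>

// Toy 4-round Feistel permutation on 32-bit values (illustrative only,
// NOT cryptographically secure). Any round function yields a bijection.
uint32_t feistel32(uint32_t x) {
    uint16_t l = (uint16_t)(x >> 16), r = (uint16_t)x;
    const uint16_t keys[4] = {0xA511, 0x9F3B, 0x70E1, 0xC28D}; // arbitrary
    for (int i = 0; i < 4; ++i) {
        uint16_t f = (uint16_t)((r * 0x9E37u + keys[i]) ^ (r >> 7));
        uint16_t t = r;
        r = l ^ f;
        l = t;
    }
    return ((uint32_t)l << 16) | r;
}

// Cycle-walking: starting from x < n, iterate until the output falls
// back inside [0, n). The result is still a bijection on [0, n),
// so no value ever repeats, and nothing is stored.
uint32_t permute_below(uint32_t x, uint32_t n) {
    do { x = feistel32(x); } while (x >= n);
    return x;
}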
If you want to create large (say, 64 bits or greater) random numbers with no repeats, then just create them. If you're using a good random number generator that actually has enough entropy, then the odds of generating repeats are so minuscule as to not be worth worrying about.
For instance, when generating cryptographic keys, no one actually bothers checking to see if they've generated the same key before; since you're trusting your random number generator that a dedicated attacker won't be able to get the same key out, then why would you expect that you would come up with the same key accidentally?
Of course, if you have a bad random number generator (like the Debian SSL random number generator vulnerability), or are generating small enough numbers that the birthday paradox gives you a high chance of collision, then you will need to actually do something to ensure you don't get repeats. But for large random numbers with a good generator, just trust probability not to give you any repeats.
As you generate your numbers, use a Bloom filter to detect duplicates. This would use a minimal amount of memory. There would be no need to store earlier numbers in the series at all.
The trade off is that your list could not be exhaustive in your range. If your numbers are truly on the order of 256^1024, that's hardly any trade off at all.
(Of course if they are actually random on that scale, even bothering to detect duplicates is a waste of time. If every computer on earth generated a trillion random numbers that size every second for trillions of years, the chance of a collision is still absolutely negligible.)
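For illustration, a minimal Bloom filter sketch, assuming two multiplicative hashes are enough for the purpose (real filters tune the number of hashes and the size; the mixing constants are common ones). It can return false positives but never false negatives, which is exactly the trade-off described above: a duplicate is never missed, but a fresh number is occasionally rejected for no reason.

#include <cstdint>
#include <vector>

class Bloom {
public:
    explicit Bloom(std::size_t nbits) : bits_((nbits + 63) / 64), nbits_(nbits) {}
    void add(uint64_t x) { set(h1(x)); set(h2(x)); }
    bool maybeContains(uint64_t x) const { return get(h1(x)) && get(h2(x)); }
private:
    std::vector<uint64_t> bits_;
    std::size_t nbits_;
    std::size_t h1(uint64_t x) const { return (x * 0x9E3779B97F4A7C15ull) % nbits_; }
    std::size_t h2(uint64_t x) const { return ((x ^ (x >> 31)) * 0xC2B2AE3D27D4EB4Full) % nbits_; }
    void set(std::size_t i) { bits_[i / 64] |= (1ull << (i % 64)); }
    bool get(std::size_t i) const { return (bits_[i / 64] >> (i % 64)) & 1u; }
};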
I second gbarry's answer about using an LFSR. They are very efficient and simple to implement even in software and are guaranteed not to repeat in (2^N - 1) uses for an LFSR with an N-bit shift-register.
There are some drawbacks, however: by observing a small number of outputs from the RNG, one can reconstruct the LFSR and predict all values it will generate, making it unusable for cryptography and anywhere a good RNG is needed. The second problem is that either the all-zero word or the all-one word (in terms of bits) is invalid, depending on the LFSR implementation. The third issue, which is relevant to your question, is that the maximum number generated by the LFSR is always a power of 2 minus 1 (or a power of 2 minus 2).
The first drawback might not be an issue depending on your application. From the example you gave, it seems that you are not expecting zero to be among the answers; so, the second issue does not seem relevant to your case.
The maximum value (and thus range) problem can be solved by reusing the LFSR until you get a number within your range. Here's an example:
Say you want numbers between 1 and 10 (as in your example). You would use a 4-bit LFSR, which has a range [1, 15] inclusive. Here is pseudocode for getting a number in the range [1, 10]:
x = LFSR.getRandomNumber();
while (x > 10) {
x = LFSR.getRandomNumber();
}
You should embed the previous code in your RNG, so that the caller doesn't have to care about the implementation.
Note that this will slow down your RNG if you use a large shift register and the maximum number you want is not of the form (power of 2) - 1.
This answer suggests some strategies for getting what you want and ensuring they are in a random order using some already well-known algorithms.
There is an inside out version of the Fisher-Yates shuffle algorithm, called the Durstenfeld version, that randomly distributes sequentially acquired items into arrays and collections while loading the array or collection.
One thing to remember is that the Fisher-Yates (AKA Knuth) shuffle or the Durstenfeld version used at load time is highly efficient with arrays of objects because only the reference pointer to the object is being moved and the object itself doesn't have to be examined or compared with any other object as part of the algorithm.
I will give both algorithms further below.
If you want really huge random numbers, on the order of 1024 bytes or more, a really good random generator that can generate unsigned bytes or words at a time will suffice. Randomly generate as many bytes or words as you need to construct the number, make it into an object with a reference pointer to it and, hey presto, you have a really huge random integer. If you need a specific really huge range, you can add a base value of zero bytes to the low-order end of the byte sequence to shift the value up. This may be your best option.
If you need to eliminate duplicates of really huge random numbers, then that is trickier. Even with really huge random numbers, removing duplicates also makes them significantly biased and not random at all. If you have a really large set of unduplicated really huge random numbers and you randomly select from the ones not yet selected, then the bias is only the bias in creating the huge values for the really huge set of numbers from which to choose. A reverse version of Durstenfeld's version of the Fisher-Yates shuffle could be used to randomly choose values from a really huge set of them, remove them from the remaining values from which to choose, and insert them into a new array that is a subset, and it could do this with just the source and target arrays in situ. This would be very efficient.
This may be a good strategy for getting a small number of random numbers with enormous values from a really large set of them in which they are not duplicated. Just pick a random location in the source set, obtain its value, swap its value with the top element in the source set, reduce the size of the source set by one, and repeat with the reduced-size source set until you have chosen enough values. This is essentially the Durstenfeld version of Fisher-Yates in reverse. You can then use the Durstenfeld version of the Fisher-Yates algorithm to insert the acquired values into the destination set. However, that is overkill, since they should be randomly chosen and randomly ordered as given here.
Both algorithms assume you have some random number instance method, nextInt(int setSize), that generates a random integer from zero to setSize - 1, meaning there are setSize possible values. In this case, setSize will be the size of the array, since the last index of the array is size - 1.
The first algorithm is the Durstenfeld version of the Fisher-Yates (aka Knuth) shuffle algorithm as applied to an array of arbitrary length, one that simply randomly positions the integers from 0 to size - 1 into the array. The array need not be an array of integers, but can be an array of any objects that are acquired sequentially, which, effectively, makes it an array of reference pointers. It is simple, short and very effective.
int size = someNumber;
int[] array = new int[size]; // here is the array to load
int location; // this will get assigned a value before used
// i will also conveniently be the value to load, but any sequentially acquired
// object will work
for (int i = 0; i < size; i++) { // conveniently, i is also the value to load
// you can instance or acquire any object at this place in the algorithm to load
// by reference, into the array and use a pointer to it in place of j
int j = i; // in this example, j is trivially i
if (i == 0) { // first integer goes into first location
array[i] = j; // this may get swapped from here later
} else { // subsequent integers go into random locations
// the next random location will be somewhere in the locations
// already used or a new one at the end
// here we get the next random location
// to preserve true randomness without a significant bias
// it is REALLY IMPORTANT that the newest value could be
// stored in the newest location, that is,
// location has to be able to randomly have the value i
location = nextInt(i + 1); // a random value between 0 and i
// move the random location's value to the new location
array[i] = array[location];
array[location] = j; // put the new value into the random location
} // end if...else
} // end for
Voila, you now have an already randomized array.
If you want to randomly shuffle an array you already have, here is the standard Fisher-Yates algorithm.
type[] array = new type[size];
// some code that loads array...
// randomly pick an item anywhere in the current array segment,
// swap it with the top element in the current array segment,
// then shorten the array segment by 1
// just as with the Durstenfeld version above,
// it is REALLY IMPORTANT that an element could get
// swapped with itself to avoid any bias in the randomization
type temp; // this will get assigned a value before used
int location; // this will get assigned a value before used
for (int i = size - 1; i > 0; i--) {
location = nextInt(i + 1);
temp = array[i];
array[i] = array[location];
array[location] = temp;
} // end for
For sequenced collections and sets, i.e. some type of list object, you could just use adds/or inserts with an index value that allows you to insert items anywhere, but it has to allow adding or appending after the current last item to avoid creating bias in the randomization.
Shuffling N elements doesn't take up excessive memory...think about it. You only swap one element at a time, so the maximum memory used is that of N+1 elements.
Assuming you have a random or pseudo-random number generator, even if it's not guaranteed to return unique values, you can implement one that returns unique values each time using this code, assuming that the upper limit remains constant (i.e. you always call it with random(10), and don't mix in calls like random(11)).
The code doesn't check for errors. You can add that yourself if you want to.
It also requires a lot of memory if you want a large range of numbers.
#include <stdlib.h>

void swap(int *x, int *y); /* defined below */

/* the function returns a random number between 0 and max - 1,
 * not necessarily unique;
 * I assume it's already written
 */
int random(int max);
/* the function returns a unique random number between 0 and max - 1 */
int unique_random(int max)
{
static int *list = NULL; /* contains a list of numbers we haven't returned */
static int in_progress = 0; /* 0 --> we haven't started randomizing numbers
* 1 --> we have started randomizing numbers
*/
static int count;
static int prev_max = 0;
// initialize the list
if (!in_progress || (prev_max != max)) {
if (list != NULL) {
free(list);
}
list = malloc(sizeof(int) * max);
prev_max = max;
in_progress = 1;
count = max - 1;
int i;
for (i = max - 1; i >= 0; --i) {
list[i] = i;
}
}
/* now choose one from the list */
int index = random(count + 1); /* count + 1 values remain */
int retval = list[index];
/* now we throw away the returned value.
* we do this by shortening the list by 1
* and replacing the element we returned with
* the highest remaining number
*/
swap(&list[index], &list[count]);
/* when the count reaches 0 we start over */
if (count == 0) {
in_progress = 0;
free(list);
list = 0;
} else { /* reduce the counter by 1 */
count--;
}
return retval;
}
/* swap two numbers */
void swap(int *x, int *y)
{
int temp = *x;
*x = *y;
*y = temp;
}
Actually, there's a minor point to make here; a random number generator which is not permitted to repeat is not random.
Suppose you wanted to generate a series of 256 random numbers without repeats.
Create a 256-bit (32-byte) memory block initialized with zeros, let's call it b
Your looping variable will be n, the number of numbers yet to be generated
Loop from n = 256 to n = 1
Generate a random number r in the range [0, n)
Find the r-th zero bit in your memory block b, let's call it p
Put p in your list of results, an array called q
Flip the p-th bit in memory block b to 1
After the n = 1 pass, you are done generating your list of numbers
Here's a short example of what I am talking about, using n = 4 initially:
**Setup**
b = 0000
q = []
**First loop pass, where n = 4**
r = 2
p = 2
b = 0010
q = [2]
**Second loop pass, where n = 3**
r = 2
p = 3
b = 0011
q = [2, 3]
**Third loop pass, where n = 2**
r = 0
p = 0
b = 1011
q = [2, 3, 0]
**Fourth and final loop pass, where n = 1**
r = 0
p = 1
b = 1111
q = [2, 3, 0, 1]
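Here is a direct C++ transcription of those steps, as a sketch; the linear scan for the r-th zero bit keeps it simple, though a tree over the bits would make that step logarithmic:

#include <cstdlib>
#include <vector>

// Generate 0..n-1 in random order with no repeats (rand() seeded elsewhere).
std::vector<int> unique_sequence(int n) {
    std::vector<bool> b(n, false); // the memory block of used flags
    std::vector<int> q;            // the list of results
    for (int remaining = n; remaining >= 1; --remaining) {
        int r = rand() % remaining; // r in [0, remaining)
        int p = 0;
        for (int i = 0, zeros = 0; i < n; ++i) {   // find the r-th zero bit
            if (!b[i] && zeros++ == r) { p = i; break; }
        }
        q.push_back(p);
        b[p] = true;                // flip the p-th bit to 1
    }
    return q;
}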
Please check the answers at
Generate sequence of integers in random order without constructing the whole list upfront
where my answer also lies:
a very simple generator is 1 + ((r^x - 1) mod p), which takes values from 1 to p as x runs from 1 to p, in a random-looking order, where r and p are prime numbers and r <> p.
I asked a similar question before, but mine was for the whole range of an int; see Looking for a Hash Function /Ordered Int/ to /Shuffled Int/
static std::unordered_set<long> s;
long l;
do {
l = generator();
} while (s.find(l) != s.end()); // roll again while we have seen it before
s.insert(l);
generator() being your random number generator. You roll numbers as long as the entry is already in your set, then you add the new one to it. You get the idea.
I did it with long for the example, but you should make that a template if your PRNG is templatized.
An alternative is to use a cryptographically secure PRNG, which will have a very low probability of generating the same number twice.
If you don't mind poor statistical properties of the generated sequence, there is one method:
Let's say you want to generate N numbers, each of 1024 bits. You can sacrifice some bits of each generated number to be a "counter".
So you generate each random number, but into some bits you have chosen you put a binary-encoded counter (from a variable you increase each time the next random number is generated).
You can split that counter into single bits and put them in some of the less significant bits of the generated number.
That way you are sure you get a unique number each time.
I mean for example each generated number looks like that:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyxxxxyxyyyyxxyxx
where the x bits are taken directly from the generator, and the y bits are taken from the counter variable.
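A simplified sketch of that scheme, putting the counter contiguously in the low bits instead of scattering it (the same idea, just easier to show):

#include <cstdint>
#include <random>

// 64-bit outputs whose low 16 bits hold a counter, so up to 2^16
// outputs are guaranteed distinct; the other 48 bits stay random.
uint64_t next_unique(std::mt19937_64 &rng) {
    static uint16_t counter = 0;
    return (rng() & ~0xFFFFull) | counter++;
}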
Mersenne twister
Description of which can be found here on Wikipedia: Mersenne twister
Look at the bottom of the page for implementations in various languages.
The problem is to select a "random" sequence of N unique numbers from the range 1..M where there is no constraint on the relationship between N and M (M could be much bigger, about the same, or even smaller than N; they may not be relatively prime).
Expanding on the linear feedback shift register answer: for a given M, construct a maximal LFSR for the smallest power of two that is larger than M. Then just grab your numbers from the LFSR, throwing out numbers that fall outside your range. On average, you will throw out at most half the generated numbers (since by construction more than half the range of the LFSR is less than M), so the expected running time of getting a number is O(1). You are not storing previously generated numbers, so space consumption is O(1) too. If you cycle before getting N numbers, then M is less than N (or the LFSR is constructed incorrectly).
You can find the parameters for maximum length LFSRs up to 168 bits here (from wikipedia): http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
Here's some java code:
/**
* Generate a sequence of unique "random" numbers in [0,M)
* @author dkoes
*
*/
public class UniqueRandom
{
long lfsr;
long mask;
long max;
private static long seed = 1;
//indexed by number of bits
private static int [][] taps = {
null, // 0
null, // 1
null, // 2
{3,2}, //3
{4,3},
{5,3},
{6,5},
{7,6},
{8,6,5,4},
{9,5},
{10,7},
{11,9},
{12,6,4,1},
{13,4,3,1},
{14,5,3,1},
{15,14},
{16,15,13,4},
{17,14},
{18,11},
{19,6,2,1},
{20,17},
{21,19},
{22,21},
{23,18},
{24,23,22,17},
{25,22},
{26,6,2,1},
{27,5,2,1},
{28,25},
{29,27},
{30,6,4,1},
{31,28},
{32,22,2,1},
{33,20},
{34,27,2,1},
{35,33},
{36,25},
{37,5,4,3,2,1},
{38,6,5,1},
{39,35},
{40,38,21,19},
{41,38},
{42,41,20,19},
{43,42,38,37},
{44,43,18,17},
{45,44,42,41},
{46,45,26,25},
{47,42},
{48,47,21,20},
{49,40},
{50,49,24,23},
{51,50,36,35},
{52,49},
{53,52,38,37},
{54,53,18,17},
{55,31},
{56,55,35,34},
{57,50},
{58,39},
{59,58,38,37},
{60,59},
{61,60,46,45},
{62,61,6,5},
{63,62},
};
//m is upperbound; things break if it isn't positive
UniqueRandom(long m)
{
max = m;
lfsr = seed; //could easily pass a starting point instead
//figure out number of bits
int bits = 0;
long b = m;
while((b >>>= 1) != 0)
{
bits++;
}
bits++;
if(bits < 3)
bits = 3;
mask = 0;
for(int i = 0; i < taps[bits].length; i++)
{
mask |= (1L << (taps[bits][i]-1));
}
}
//return -1 if we've cycled
long next()
{
long ret = -1;
if(lfsr == 0)
return -1;
do {
ret = lfsr;
//update lfsr - from wikipedia
long lsb = lfsr & 1;
lfsr >>>= 1;
if(lsb == 1)
lfsr ^= mask;
if(lfsr == seed)
lfsr = 0; //cycled, stick
ret--; //zero is stuck state, never generated so sub 1 to get it
} while(ret >= max);
return ret;
}
}
Here is a way to generate random results without repeats. It also works for strings. It's in C#, but the logic should work in many places. Put the random results in a list and check if the new random element is in that list. If it is not, you have a new random element. If it is in that list, repeat the random draw until you get an element that is not in that list.
List<string> Erledigte = new List<string>();
private void Form1_Load(object sender, EventArgs e)
{
label1.Text = "";
listBox1.Items.Add("a");
listBox1.Items.Add("b");
listBox1.Items.Add("c");
listBox1.Items.Add("d");
listBox1.Items.Add("e");
}
private void button1_Click(object sender, EventArgs e)
{
Random rand = new Random();
int index=rand.Next(0, listBox1.Items.Count);
string rndString = listBox1.Items[index].ToString();
if (listBox1.Items.Count <= Erledigte.Count)
{
return;
}
else
{
if (Erledigte.Contains(rndString))
{
//MessageBox.Show("vorhanden");
while (Erledigte.Contains(rndString))
{
index = rand.Next(0, listBox1.Items.Count);
rndString = listBox1.Items[index].ToString();
}
}
Erledigte.Add(rndString);
label1.Text += rndString;
}
}
For a sequence to be random, there should not be any autocorrelation. The restriction that the numbers should not repeat means the next number depends on all the previous numbers, which means it is not random anymore...
If you can generate 'small' random numbers, you can generate 'large' random numbers by integrating them: add a small random increment to each 'previous'.
const size_t amount = 100; // a limited amount of random numbers
vector<long int> numbers;
numbers.reserve( amount );
const short int spread = 250; // about 250 between each random number
numbers.push_back( myrandom( spread ) );
for( size_t n = 1; n != amount; ++n ) { // one value was already pushed
const short int increment = myrandom( spread );
numbers.push_back( numbers.back() + increment );
}
myshuffle( numbers );
The myrandom and myshuffle functions I hereby generously delegate to others :)
To get non-repeating random numbers, and to avoid wasting time checking for duplicate numbers and drawing new ones over and over, use the method below, which keeps the calls to rand to a minimum (see the sketch after the steps). For example, if you want to get 100 non-repeating random numbers:
1. Fill an array with the numbers 1 to 100.
2. Get a random number using the rand function in the range (1-100).
3. Use the generated random number as an index to get the value from the array (Numbers[IndexGeneratedFromRandFunction]).
4. Shift the numbers in the array after that index to the left.
5. Repeat from step 2, but now the range should be (1-99), and so on.
Now we have an array of distinct numbers!
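Here is the promised sketch of those steps; the vector erase performs the "shift left", and it assumes rand() has been seeded and count <= poolSize:

#include <cstdlib>
#include <vector>

// Draw `count` distinct values from 1..poolSize.
std::vector<int> drawUnique(int count, int poolSize) {
    std::vector<int> numbers;
    for (int i = 1; i <= poolSize; ++i) numbers.push_back(i); // step 1
    std::vector<int> result;
    for (int k = 0; k < count; ++k) {
        int index = std::rand() % numbers.size();  // steps 2 and 3
        result.push_back(numbers[index]);
        numbers.erase(numbers.begin() + index);    // step 4: shift left
    }
    return result;
}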
int main() {
const int N = 100; // "the number of them"
int b[N];
for (int i = 0; i < N; i++) {
int a = rand() % (N + 1) + 1;
int j = 0;
while (j < i) {
if (a == b[j]) {
a = rand() % (N + 1) + 1;
j = -1; // duplicate found: redraw and rescan from the start
}
j++;
}
b[i] = a;
}
}
I have an application where I have a number of sets. A set might be
{4, 7, 12, 18}
with unique numbers, all less than 50.
I then have several data items:
1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
2 {3, 4, 6, 7, 15, 23, 34, 38}
3 {4, 7, 12, 18}
4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
5 {2, 4, 6, 7, 13, 15}
Data items 1, 3 and 4 match the set because they contain all items in the set.
I need to design a data structure that is super fast at identifying whether a data item includes all the members of a given set (so the data item is a superset of the set). My best estimates at the moment suggest that there will be fewer than 50,000 sets.
My current implementation has my sets and data as unsigned 64 bit integers and the sets stored in a list. Then to check a data item I iterate through the list doing a ((set & data) == set) comparison. It works and it's space efficient but it's slow (O(n)) and I'd be happy to trade some memory for some performance. Does anyone have any better ideas about how to organize this?
Edit:
Thanks very much for all the answers. It looks like I need to provide some more information about the problem. I get the sets first, and I then get the data items one by one. I need to check whether each data item matches one of the sets.
The sets are very likely to be 'clumpy' for example for a given problem 1, 3 and 9 might be contained in 95% of sets; I can predict this to some degree in advance (but not well).
For those suggesting memoization: this is the data structure for a memoized function. The sets represent general solutions that have already been computed, and the data items are new inputs to the function. By matching a data item to a general solution, I can avoid a whole lot of processing.
I see another solution which is dual to yours (i.e., testing a data item against every set), and that is using a binary tree where each node tests whether a specific item is included or not.
For instance if you had the sets A = { 2, 3 } and B = { 4 } and C = { 1, 3 } you'd have the following tree
_NOT_HAVE_[1]___HAVE____
| |
_____[2]_____ _____[2]_____
| | | |
__[3]__ __[3]__ __[3]__ __[3]__
| | | | | | | |
[4] [4] [4] [4] [4] [4] [4] [4]
/ \ / \ / \ / \ / \ / \ / \ / \
. B . B . B . B B C B A A A A
C B C B
C
After making the tree, you'd simply need to make 50 comparisons, or however many items you can have in a set.
For instance, for { 1, 4 }, you branch through the tree: right (the set has 1), left (it doesn't have 2), left (no 3), right (has 4), and you get [ B ], meaning only set B is included in { 1, 4 }.
This is basically called a "Binary Decision Diagram". If you are offended by the redundancy in the nodes (as you should be, because 2^50 is a lot of nodes...) then you should consider the reduced form, which is called a "Reduced, Ordered Binary Decision Diagram" and is a commonly used data-structure. In this version, nodes are merged when they are redundant, and you no longer have a binary tree, but a directed acyclic graph.
The Wikipedia page on ROBDDs can provide you with more information, as well as links to libraries which implement this data structure for various languages.
I can't prove it, but I'm fairly certain that there is no solution that can easily beat the O(n) bound. Your problem is "too general": every set has m = 50 properties (namely, property k is that it contains the number k) and the point is that all these properties are independent of each other. There aren't any clever combinations of properties that can predict the presence of other properties. Sorting doesn't work because the problem is very symmetric, any permutation of your 50 numbers will give the same problem but screw up any kind of ordering. Unless your input has a hidden structure, you're out of luck.
However, there is some room for speed / memory tradeoffs. Namely, you can precompute the answers for small queries. Let Q be a query set, and supersets(Q) be the collection of sets that contain Q, i.e. the solution to your problem. Then, your problem has the following key property
Q ⊆ P => supersets(Q) ⊇ supersets(P)
In other words, the results for P = {1,3,4} are a subcollection of the results for Q = {1,3}.
Now, precompute all answers for small queries. For demonstration, let's take all queries of size <= 3. You'll get a table
supersets({1})
supersets({2})
...
supersets({50})
supersets({1,2})
supersets({2,3})
...
supersets({1,2,3})
supersets({1,2,4})
...
supersets({48,49,50})
with O(m^3) entries. To compute, say, supersets({1,2,3,4}), you look up supersets({1,2,3}) and run your linear algorithm on this collection. The point is that on average, supersets({1,2,3}) will not contain the full n = 50,000 elements, but only a fraction n/2^3 = 6250 of those, giving an 8-fold increase in speed.
(This is a generalization of the "reverse index" method that other answers suggested.)
Depending on your data set, memory use will be rather terrible, though. But you might be able to omit some rows or speed up the algorithm by noting that a query like {1,2,3,4} can be calculated from several different precomputed answers, like supersets({1,2,3}) and supersets({1,2,4}), and you'll use the smallest of these.
If you're going to improve performance, you're going to have to do something fancy to reduce the number of set comparisons you make.
Maybe you can partition the data items so that you have all those where 1 is the smallest element in one group, and all those where 2 is the smallest item in another group, and so on.
When it comes to searching, you find the smallest value in the search set, and look at the group where that value is present.
Or, perhaps, group them into 50 groups by 'this data item contains N' for N = 1..50.
When it comes to searching, you find the size of each group that holds each element of the set, and then search just the smallest group.
The concern with this - especially the latter - is that the overhead of reducing the search time might outweigh the performance benefit from the reduced search space.
You could use inverted index of your data items. For your example
1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
2 {3, 4, 6, 7, 15, 23, 34, 38}
3 {4, 7, 12, 18}
4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
5 {2, 4, 6, 7, 13, 15}
the inverted index will be
1: {1, 4}
2: {1, 5}
3: {2}
4: {1, 2, 3, 4, 5}
5: {}
6: {2, 5}
...
So, for any particular set {x_0, x_1, ..., x_i} you need to intersect the index sets for x_0, x_1 and the others. For example, for the set {2,3,4} you need to intersect {1,5} with {2} and with {1,2,3,4,5}. Because you can keep all the sets in the inverted index sorted, each intersection runs in time proportional to the length of the shorter of the two sets being intersected.
There could be an issue here if you have very 'popular' items (like 4 in our example) with huge index sets.
Some words about intersecting. You can use sorted lists in the inverted index and intersect the sets in pairs (in increasing order of length). Or, as you have no more than 50K items, you can use compressed bit sets (about 6 KB for every number, less for sparse bit sets, about 50 numbers in total, so not too greedy) and intersect the bit sets bitwise. For sparse bit sets that should be efficient, I think.
A possible way to divvy up the list of bitmaps would be to create an array of Compiled Nibble Indicators.
Let's say one of your 64-bit bitmaps has bits 0 through 4 set.
In hex we can look at it as 0x000000000000001F.
Now, let's transform that into a simpler and smaller representation.
Each 4 bit Nibble, either has at least one bit set, or not.
If it does, we represent it as a 1, if not we represent it as a 0.
So the hex value reduces to the bit pattern 0000000000000011, as the right-hand 2 nibbles are the only ones that have bits set in them. Create an array that holds 65536 values, and use them as heads of linked lists, or as a set of large arrays...
Compile each of your bitmaps into its compact CNI. Add it to the correct list, until all of the lists have been compiled.
Then take your needle. Compile it into its CNI form. Use that value as a subscript to the head of a list. All bitmaps in that list have a possibility of being a match.
All bitmaps in the other lists can not match.
That is a way to divvy them up.
Now in practice, I doubt a linked list would meet your performance requirements.
If you write a function to compile a bitmap to its CNI, you can use it as a basis to sort your array by CNI. Then your array of 65536 heads can simply subscript into the original array as the start of a range.
Another technique would be to compile only a part of the 64-bit bitmap, so you have fewer heads. Analysis of your patterns should give you an idea of which nibbles are most effective in partitioning them.
Good luck to you, and please let us know what you finally end up doing.
Evil.
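For concreteness, here is a sketch of the "compile to CNI" step described above (the function name is mine):

#include <cstdint>

// One indicator bit per nibble, set when that nibble of the 64-bit
// bitmap is non-zero. For the example above,
// to_cni(0x000000000000001F) == 0b0000000000000011.
uint16_t to_cni(uint64_t bits) {
    uint16_t cni = 0;
    for (int i = 0; i < 16; ++i)
        if ((bits >> (4 * i)) & 0xF) cni |= (uint16_t)(1u << i);
    return cni;
}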
The indices of the sets that match the search criterion resemble the sets themselves. Instead of having unique indices less than 50, we have unique indices less than 50,000. Since you don't mind using a bit of memory, you can precompute matching sets in a 50-element array of 50,000-bit integers. Then you index into the precomputed matches and basically just do your ((set & data) == set), but on the 50,000-bit numbers which represent the matching sets. Here's what I mean.
#include <cstdint>
#include <cstring>
#include <iostream>
enum
{
max_sets = 50000, // should be >= 64
num_boxes = max_sets / 64 + 1,
max_entry = 50
};
uint64_t sets_containing[max_entry][num_boxes];
#define _(x) (uint64_t(1) << x)
uint64_t sets[] =
{
_(1) | _(2) | _(4) | _(7) | _(8) | _(12) | _(18) | _(23) | _(29),
_(3) | _(4) | _(6) | _(7) | _(15) | _(23) | _(34) | _(38),
_(4) | _(7) | _(12) | _(18),
_(1) | _(4) | _(7) | _(12) | _(13) | _(14) | _(15) | _(16) | _(17) | _(18),
_(2) | _(4) | _(6) | _(7) | _(13) | _(15),
0,
};
void big_and_equals(uint64_t lhs[num_boxes], uint64_t rhs[num_boxes])
{
static int comparison_counter = 0;
for (int i = 0; i < num_boxes; ++i, ++comparison_counter)
{
lhs[i] &= rhs[i];
}
std::cout
<< "performed "
<< comparison_counter
<< " comparisons"
<< std::endl;
}
int main()
{
// Precompute matches
memset(sets_containing, 0, sizeof(uint64_t) * max_entry * num_boxes);
int set_number = 0;
for (uint64_t* p = &sets[0]; *p; ++p, ++set_number)
{
int entry = 0;
for (uint64_t set = *p; set; set >>= 1, ++entry)
{
if (set & 1)
{
std::cout
<< "sets_containing["
<< entry
<< "]["
<< (set_number / 64)
<< "] gets bit "
<< set_number % 64
<< std::endl;
uint64_t& flag_location =
sets_containing[entry][set_number / 64];
flag_location |= _(set_number % 64);
}
}
}
// Perform search for a key
int key[] = {4, 7, 12, 18};
uint64_t answer[num_boxes];
memset(answer, 0xff, sizeof(uint64_t) * num_boxes);
for (int i = 0; i < sizeof(key) / sizeof(key[0]); ++i)
{
big_and_equals(answer, sets_containing[key[i]]);
}
// Display the matches
for (int set_number = 0; set_number < max_sets; ++set_number)
{
if (answer[set_number / 64] & _(set_number % 64))
{
std::cout
<< "set "
<< set_number
<< " matches"
<< std::endl;
}
}
return 0;
}
Running this program yields:
sets_containing[1][0] gets bit 0
sets_containing[2][0] gets bit 0
sets_containing[4][0] gets bit 0
sets_containing[7][0] gets bit 0
sets_containing[8][0] gets bit 0
sets_containing[12][0] gets bit 0
sets_containing[18][0] gets bit 0
sets_containing[23][0] gets bit 0
sets_containing[29][0] gets bit 0
sets_containing[3][0] gets bit 1
sets_containing[4][0] gets bit 1
sets_containing[6][0] gets bit 1
sets_containing[7][0] gets bit 1
sets_containing[15][0] gets bit 1
sets_containing[23][0] gets bit 1
sets_containing[34][0] gets bit 1
sets_containing[38][0] gets bit 1
sets_containing[4][0] gets bit 2
sets_containing[7][0] gets bit 2
sets_containing[12][0] gets bit 2
sets_containing[18][0] gets bit 2
sets_containing[1][0] gets bit 3
sets_containing[4][0] gets bit 3
sets_containing[7][0] gets bit 3
sets_containing[12][0] gets bit 3
sets_containing[13][0] gets bit 3
sets_containing[14][0] gets bit 3
sets_containing[15][0] gets bit 3
sets_containing[16][0] gets bit 3
sets_containing[17][0] gets bit 3
sets_containing[18][0] gets bit 3
sets_containing[2][0] gets bit 4
sets_containing[4][0] gets bit 4
sets_containing[6][0] gets bit 4
sets_containing[7][0] gets bit 4
sets_containing[13][0] gets bit 4
sets_containing[15][0] gets bit 4
performed 782 comparisons
performed 1564 comparisons
performed 2346 comparisons
performed 3128 comparisons
set 0 matches
set 2 matches
set 3 matches
3128 uint64_t comparisons beats 50000 comparisons so you win. Even in the worst case, which would be a key which has all 50 items, you only have to do num_boxes * max_entry comparisons which in this case is 39100. Still better than 50000.
Since the numbers are less than 50, you could build a one-to-one hash using a 64-bit integer and then use bitwise operations to test the sets in O(1) time. The hash creation would also be O(1). I think either an XOR followed by a test for zero or an AND followed by a test for equality would work. (If I understood the problem correctly.)
Put your sets into an array (not a linked list) and SORT THEM. The sorting criteria can be either 1) the number of elements in the set (number of 1-bits in the set representation), or 2) the lowest element in the set. For example, let A={7, 10, 16} and B={11, 17}. Then B<A under criterion 1), and A<B under criterion 2). Sorting is O(n log n), but I assume that you can afford some preprocessing time, i.e., that the search structure is static.
When a new data item arrives, you can use binary search (logarithmic time) to find the starting candidate set in the array. Then you search linearly through the array and test the data item against the set in the array until the data item becomes "greater" than the set.
You should choose your sorting criterion based on the spread of your sets. If all sets have 0 as their lowest element, you shouldn't choose criterion 2). Vice-versa, if the distribution of set cardinalities is not uniform, you shouldn't choose criterion 1).
Yet another, more robust, sorting criterion would be to compute the span of elements in each set, and sort them according to that. For example, the lowest element in set A is 7, and highest is 16, so you would encode its span as 0x1007; similarly the B's span would be 0x110B. Sort the sets according to the "span code" and again use binary search to find all sets with the same "span code" as your data item.
Computing the "span code" is slow in ordinary C, but it can be done fast if you resort to assembly -- most CPUs have instructions that find the most/least significant set bit.
This is not a real answer, more an observation: this problem looks like it could be efficiently parallelized or even distributed, which would at least reduce the running time to O(n / number of cores).
You can build a reverse index of "haystack" lists that contain each element:
std::set<int> needle; // {4, 7, 12, 18}
std::vector<std::set<int>> haystacks;
// A list of each of your data sets:
// 1 {1, 2, 4, 7, 8, 12, 18, 23, 29}
// 2 {3, 4, 6, 7, 15, 23, 34, 38}
// 3 {4, 7, 12, 18}
// 4 {1, 4, 7, 12, 13, 14, 15, 16, 17, 18}
// 5 {2, 4, 6, 7, 13, ...}
std::unordered_map<int, std::set<int>> element_haystacks;
// element_haystacks maps each integer to the sets that contain it
// (the key is the integers from the haystacks sets, and
// the set values are the index into the 'haystacks' vector):
// 1 -> {1, 4} Element 1 is in sets 1 and 4.
// 2 -> {1, 5} Element 2 is in sets 1 and 5.
// 3 -> {2} Element 3 is in set 2.
// 4 -> {1, 2, 3, 4, 5} Element 4 is in sets 1 through 5.
std::set<int> answer_sets; // The list of haystack sets that contain your set.
bool first = true;
for (std::set<int>::const_iterator it = needle.begin(); it != needle.end(); ++it) {
  const std::set<int> &new_answer = element_haystacks[*it];
  if (first) { // the first element seeds the candidate set
    answer_sets = new_answer;
    first = false;
    continue;
  }
  std::set<int> existing_answer;
  std::swap(existing_answer, answer_sets);
  // Remove all answers that don't occur in the new element list.
  std::set_intersection(existing_answer.begin(), existing_answer.end(),
                        new_answer.begin(), new_answer.end(),
                        std::inserter(answer_sets, answer_sets.begin()));
  if (answer_sets.empty()) break; // No matches :(
}
// answer_sets now lists the haystack_ids that include all your needle elements.
for (std::set<int>::const_iterator it = answer_sets.begin(); it != answer_sets.end(); ++it) {
  std::cout << "set: " << *it << "\n";
}
If I'm not mistaken, this will have a max runtime of O(k*m), where k is the avg number of sets that an integer belongs to and m is the avg size of the needle set (<50). Unfortunately, it'll have a significant memory overhead due to building the reverse mapping (element_haystacks).
I'm sure you could improve this a bit if you stored sorted vectors instead of sets and element_haystacks could be a 50 element vector instead of a hash_map.
I'm surprised no one has mentioned that the STL contains an algorithm to handle this sort of thing for you. Hence, you should use includes (note that both ranges must be sorted). As the documentation describes, it performs at most 2*(N+M-1) comparisons, for a worst-case performance of O(M+N).
Hence:
bool isContained = includes( myVector.begin(), myVector.end(), another.begin(), another.end() );
If you need O(log N) time, I'll have to yield to the other responders.
Another idea is to completely prehunt your elephants.
Setup
Create a 64 bit X 50,000 element bit array.
Analyze your search set, and set the corresponding bits in each row.
Save the bit map to disk, so it can be reloaded as needed.
Searching
Load the element bit array into memory.
Create a bit map array, 1 X 50000, and set all of its values to 1. This is the search bit array.
Take your needle and walk through each value. Use it as a subscript into the element bit array, take the corresponding row, and AND it into the search array.
Do that for all values in your needle, and your search bit array will hold a 1 for every match.
Reconstruct
Walk through the search bit array, and for each 1, use the element bit array to reconstruct the original values.
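A sketch of the search phase in C++, with std::bitset rows standing in for the bit arrays (the sizes and names are mine):
#include <bitset>
#include <vector>

constexpr size_t NumSets = 50000;

// element_rows[v] has bit s set iff set s contains element v
// (built once during the setup phase described above).
std::vector<std::bitset<NumSets>> element_rows(64);

std::bitset<NumSets> search(const std::vector<int>& needle) {
    std::bitset<NumSets> result;
    result.set();                  // start with every set as a candidate
    for (int v : needle)
        result &= element_rows[v]; // keep only the sets containing v
    return result;                 // bit s == 1 marks a surviving set
}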
How many data items do you have? Are they really all unique? Could you cache popular data items, or use a bucket/radix sort before the run to group repeated items together?
Here is an indexing approach:
1) Divide the 50-bit field into e.g. 10 5-bit sub-fields. If you really have 50K sets then 3 17-bit chunks might be nearer the mark.
2) For each set, choose a single subfield. A good choice is the sub-field where that set has the most bits set, with ties broken almost arbitrarily - e.g. use the leftmost such sub-field.
3) For each possible bit-pattern in each sub-field note down the list of sets which are allocated to that sub-field and match that pattern, considering only the sub-field.
4) Given a new data item, divide it into its 5-bit chunks and look each up in its own lookup table to get a list of sets to test against. If your data is completely random you get a factor of two speedup or more, depending on how many bits are set in the densest sub-field of each set. If an adversary gets to make up random data for you, perhaps they find data items that almost but not quite match loads of sets and you don't do very well at all.
Possibly there is scope for taking advantage of any structure in your sets, by numbering bits so that sets tend to have two or more bits in their best sub-field - e.g. do cluster analysis on the bits, treating them as similar if they tend to appear together in sets. Or if you can predict patterns in the data items, alter the allocation of sets to sub-fields in step(2) to reduce the number of expected false matches.
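A sketch of the lookup side (step 4) in C++; building the tables in steps 2) and 3) is omitted, and all names here are mine:
#include <cstdint>
#include <vector>

constexpr int SubFields = 10, SubBits = 5;

// tables[f][p] lists the ids of sets allocated to sub-field f whose bits
// in that sub-field are covered by pattern p (prepared in step 3).
std::vector<int> tables[SubFields][1 << SubBits];

std::vector<int> candidates(uint64_t item) {
    std::vector<int> out;
    for (int f = 0; f < SubFields; ++f) {
        uint32_t chunk = (item >> (f * SubBits)) & ((1u << SubBits) - 1);
        const std::vector<int>& bucket = tables[f][chunk];
        out.insert(out.end(), bucket.begin(), bucket.end());
    }
    return out; // each candidate still needs a full containment test
}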
Addition:
How many tables would you need to guarantee that any 2 bits always fall into the same table? If you look at the combinatorial definition in http://en.wikipedia.org/wiki/Projective_plane, you can see that there is a way to extract collections of 7 bits from 57 (= 1 + 7 + 49) bits in 57 different ways so that for any two bits at least one collection contains both of them. Probably not very useful, but it's still an answer.
Duplicate:
Unique random numbers in O(1)?
I want a pseudo-random number generator that can generate numbers with no repeats in a random order.
For example:
random(10)
might return
5, 9, 1, 4, 2, 8, 3, 7, 6, 10
Is there a better way to do it other than making the range of numbers and shuffling them about, or checking the generated list for repeats?
Edit:
Also I want it to be efficient in generating big numbers without the entire range.
Edit:
I see everyone suggesting shuffle algorithms. But if I want to generate large random numbers (1024 bytes+), then that method would take a lot more memory than if I just used a regular RNG and inserted into a Set until it reached a specified length, right? Is there no better mathematical algorithm for this?
You may be interested in a linear feedback shift register.
We used to build these out of hardware, but I've also done them in software. It uses a shift register with some of the bits xor'ed and fed back to the input, and if you pick just the right "taps" you can get a sequence that's as long as the register size. That is, a 16-bit lfsr can produce a sequence 65535 long with no repeats. It's statistically random but of course eminently repeatable. Also, if it's done wrong, you can get some embarrassingly short sequences. If you look up the lfsr, you will find examples of how to construct them properly (which is to say, "maximal length").
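For concreteness, here's a minimal 16-bit Fibonacci LFSR in C++ using a known maximal-length tap set (16, 14, 13, 11); treat it as a sketch:
#include <cstdint>

// Advance a 16-bit maximal-length LFSR by one step; the seed must be nonzero.
// Repeated calls visit all 65535 nonzero states before repeating.
uint16_t lfsr_next(uint16_t state) {
    uint16_t bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1u;
    return (uint16_t)((state >> 1) | (bit << 15));
}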
A shuffle is a perfectly good way to do this (provided you do not introduce a bias using the naive algorithm). See Fisher-Yates shuffle.
If a random number is guaranteed to never repeat it is no longer random and the amount of randomness decreases as the numbers are generated (after nine numbers random(10) is rather predictable and even after only eight you have a 50-50 chance).
I understand you don't want a shuffle for large ranges, since you'd have to store the whole list to do so.
Instead, use a reversible pseudo-random hash. Then feed in the values 0 1 2 3 4 5 6 etc in turn.
There are infinite numbers of hashes like this. They're not too hard to generate if they're restricted to a power of 2, but any base can be used.
Here's one that would work for example if you wanted to go through all 2^32 32 bit values. It's easiest to write because the implicit mod 2^32 of integer math works to your advantage in this case.
unsigned int reversableHash(unsigned int x)
{
    // every step is invertible mod 2^32: multiplying by an odd constant,
    // an xor-shift, and adding a constant all have exact inverses
    x *= 0xDEADBEEF;
    x = x ^ (x >> 17);
    x *= 0x01234567;
    x += 0x88776655;
    x = x ^ (x >> 4);
    x = x ^ (x >> 9);
    x *= 0x91827363;
    x = x ^ (x >> 7);
    x = x ^ (x >> 11);
    x = x ^ (x >> 20);
    x *= 0x77773333;
    return x;
}
If you don't mind mediocre randomness properties and if the number of elements allows it then you could use a linear congruential random number generator.
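With the right constants, an LCG has full period, i.e. it visits every value in its range exactly once before repeating. A sketch with modulus 2^32 and the well-known Numerical Recipes constants:
#include <cstdint>

// Since c is odd and (a - 1) is divisible by 4, the Hull-Dobell theorem
// guarantees a full period: every 32-bit value appears exactly once.
uint32_t lcg_next(uint32_t x) {
    const uint32_t a = 1664525u;    // multiplier
    const uint32_t c = 1013904223u; // increment
    return a * x + c;               // implicit mod 2^32
}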
A shuffle is the best you can do for random numbers in a specific range with no repeats. The reason that the method you describe (randomly generate numbers and put them in a Set until you reach a specified length) is less efficient is because of duplicates. Theoretically, that algorithm might never finish. At best it will finish in an indeterminable amount of time, as compared to a shuffle, which will always run in a highly predictable amount of time.
Response to edits and comments:
If, as you indicate in the comments, the range of numbers is very large and you want to select relatively few of them at random with no repeats, then the likelihood of repeats diminishes rapidly. The bigger the difference in size between the range and the number of selections, the smaller the likelihood of repeat selections, and the better the performance will be for the select-and-check algorithm you describe in the question.
What about using a GUID generator (like the one in .NET)? Granted, it is not guaranteed that there will be no duplicates; however, the chance of getting one is pretty low.
This has been asked before - see my answer to the previous question. In a nutshell: You can use a block cipher to generate a secure (random) permutation over any range you want, without having to store the entire permutation at any point.
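One common way to get such a permutation without a full cipher library is a few Feistel rounds: any round function gives a bijection, so feeding in 0, 1, 2, ... can never produce a repeat. A sketch (the round function here is a toy, not cryptographically secure):
#include <cstdint>

// 4-round Feistel permutation over 32-bit values.
uint32_t feistel(uint32_t x) {
    uint16_t L = (uint16_t)(x >> 16), R = (uint16_t)(x & 0xFFFF);
    const uint32_t keys[4] = {0xA511E9B3, 0x9E3779B9, 0xC2B2AE35, 0x27D4EB2F}; // arbitrary
    for (int i = 0; i < 4; ++i) {
        uint16_t F = (uint16_t)((R * 0x9E37u + keys[i]) ^ (R >> 7)); // toy round function
        uint16_t newR = (uint16_t)(L ^ F);
        L = R;
        R = newR;
    }
    return ((uint32_t)L << 16) | R;
}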
If you want to create large (say, 64 bits or greater) random numbers with no repeats, then just create them. If you're using a good random number generator that actually has enough entropy, then the odds of generating repeats are so minuscule as to not be worth worrying about.
For instance, when generating cryptographic keys, no one actually bothers checking to see if they've generated the same key before; since you're trusting your random number generator that a dedicated attacker won't be able to get the same key out, then why would you expect that you would come up with the same key accidentally?
Of course, if you have a bad random number generator (like the Debian SSL random number generator vulnerability), or are generating small enough numbers that the birthday paradox gives you a high chance of collision, then you will need to actually do something to ensure you don't get repeats. But for large random numbers with a good generator, just trust probability not to give you any repeats.
As you generate your numbers, use a Bloom filter to detect duplicates. This would use a minimal amount of memory. There would be no need to store earlier numbers in the series at all.
The trade off is that your list could not be exhaustive in your range. If your numbers are truly on the order of 256^1024, that's hardly any trade off at all.
(Of course if they are actually random on that scale, even bothering to detect duplicates is a waste of time. If every computer on earth generated a trillion random numbers that size every second for trillions of years, the chance of a collision is still absolutely negligible.)
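A minimal sketch of such a filter, assuming the big numbers are carried around as strings (the size and probe count are arbitrary choices):
#include <bitset>
#include <functional>
#include <string>

// Tiny Bloom filter with 4 probes derived from two hashes (double hashing).
// A false positive just costs one unnecessary re-roll; there are no false
// negatives, so a genuine duplicate is always caught.
class BloomFilter {
    std::bitset<1 << 20> bits; // about 1 Mbit of state
    static size_t h1(const std::string& s) { return std::hash<std::string>{}(s); }
    static size_t h2(const std::string& s) { return std::hash<std::string>{}(s + "#"); }
public:
    void add(const std::string& s) {
        for (size_t k = 0; k < 4; ++k)
            bits.set((h1(s) + k * h2(s)) % bits.size());
    }
    bool possiblyContains(const std::string& s) const {
        for (size_t k = 0; k < 4; ++k)
            if (!bits.test((h1(s) + k * h2(s)) % bits.size()))
                return false; // definitely never seen before
        return true;          // probably seen (or a false positive)
    }
};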
I second gbarry's answer about using an LFSR. They are very efficient and simple to implement even in software and are guaranteed not to repeat in (2^N - 1) uses for an LFSR with an N-bit shift-register.
There are some drawbacks, however: by observing a small number of outputs from the RNG, one can reconstruct the LFSR and predict all values it will generate, making them unusable for cryptography or anywhere where a good RNG is needed. The second problem is that either the all-zero word or the all-one word (in terms of bits) is invalid, depending on the LFSR implementation. The third issue, which is relevant to your question, is that the maximum number generated by the LFSR is always of the form 2^N - 1 (or 2^N - 2).
The first drawback might not be an issue depending on your application. From the example you gave, it seems that you are not expecting zero to be among the answers; so, the second issue does not seem relevant to your case.
The maximum value (and thus range) problem can solved by reusing the LFSR until you get a number within your range. Here's an example:
Say you want to have numbers between 1 and 10 (as in your example). You would use a 4-bit LFSR, which has a range of [1, 15] inclusive. Here's pseudo code for how to get a number in the range [1, 10]:
x = LFSR.getRandomNumber();
while (x > 10) {
x = LFSR.getRandomNumber();
}
You should embed the previous code in your RNG; so that the caller wouldn't care about implementation.
Note that this would slow down your RNG if you use a large shift-register and the maximum number you want is not of the form 2^N - 1.
This answer suggests some strategies for getting what you want and ensuring they are in a random order using some already well-known algorithms.
There is an inside out version of the Fisher-Yates shuffle algorithm, called the Durstenfeld version, that randomly distributes sequentially acquired items into arrays and collections while loading the array or collection.
One thing to remember is that the Fisher-Yates (AKA Knuth) shuffle or the Durstenfeld version used at load time is highly efficient with arrays of objects because only the reference pointer to the object is being moved and the object itself doesn't have to be examined or compared with any other object as part of the algorithm.
I will give both algorithms further below.
If you want really huge random numbers, on the order of 1024 bytes or more, a really good random generator that can generate unsigned bytes or words at a time will suffice. Randomly generate as many bytes or words as you need to construct the number, make it into an object with a reference pointer to it and, hey presto, you have a really huge random integer. If you need a specific really huge range, you can add a base value of zero bytes to the low-order end of the byte sequence to shift the value up. This may be your best option.
If you need to eliminate duplicates of really huge random numbers, then that is trickier. Even with really huge random numbers, removing duplicates also makes them significantly biased and not random at all. If you have a really large set of unduplicated really huge random numbers and you randomly select from the ones not yet selected, then the bias is only the bias in creating the huge values for the really huge set of numbers from which to choose. A reverse version of Durstenfeld's version of Fisher-Yates could be used to randomly choose values from a really huge set of them, remove them from the remaining values from which to choose, and insert them into a new array that is a subset, and it could do this with just the source and target arrays in situ. This would be very efficient.
This may be a good strategy for getting a small number of random numbers with enormous values from a really large set of them in which they are not duplicated. Just pick a random location in the source set, obtain its value, swap its value with the top element in the source set, reduce the size of the source set by one, and repeat with the reduced-size source set until you have chosen enough values. This is essentially the Durstenfeld version of Fisher-Yates in reverse. You can then use the Durstenfeld version of the Fisher-Yates algorithm to insert the acquired values into the destination set. However, that is overkill, since they should be randomly chosen and randomly ordered as given here.
Both algorithms assume you have some random number instance method, nextInt(int setSize), that generates a random integer from zero to setSize - 1, meaning there are setSize possible values. In this case, it will be the size of the array, since the last index of the array is size - 1.
The first algorithm is the Durstenfeld version of the Fisher-Yates (aka Knuth) shuffle algorithm, as applied to an array of arbitrary length, one that simply randomly positions integers from 0 to the length of the array into the array. The array need not be an array of integers, but can be an array of any objects that are acquired sequentially, which, effectively, makes it an array of reference pointers. It is simple, short, and very effective.
int size = someNumber;
int[] array = new int[size]; // here is the array to load
// i will conveniently be the value to load, but any sequentially acquired
// object will work
for (int i = 0; i < size; i++) {
    // you can instance or acquire any object at this place in the algorithm
    // to load by reference into the array, and use a pointer to it in place of j
    int j = i; // in this example, j is trivially i
    if (i == 0) { // first integer goes into first location
        array[i] = j; // this may get swapped from here later
    } else { // subsequent integers go into random locations
        // the next random location will be somewhere in the locations
        // already used or a new one at the end;
        // to preserve true randomness without a significant bias,
        // it is REALLY IMPORTANT that the newest value could be
        // stored in the newest location, that is,
        // location has to be able to randomly take the value i
        int location = nextInt(i + 1); // a random value between 0 and i
        // move the random location's value to the new location
        array[i] = array[location];
        array[location] = j; // put the new value into the random location
    } // end if...else
} // end for
Voila, you now have an already randomized array.
If you want to randomly shuffle an array you already have, here is the standard Fisher-Yates algorithm.
type[] array = new type[size];
// some code that loads array...
// randomly pick an item anywhere in the current array segment,
// swap it with the top element in the current array segment,
// then shorten the array segment by 1
// just as with the Durstenfeld version above,
// it is REALLY IMPORTANT that an element could get
// swapped with itself to avoid any bias in the randomization
type temp; // this will get assigned a value before being used
for (int i = array.length - 1; i > 0; i--) {
    int location = nextInt(i + 1);
    temp = array[i];
    array[i] = array[location];
    array[location] = temp;
} // end for
For sequenced collections and sets, i.e. some type of list object, you could just use adds/or inserts with an index value that allows you to insert items anywhere, but it has to allow adding or appending after the current last item to avoid creating bias in the randomization.
Shuffling N elements doesn't take up excessive memory...think about it. You only swap one element at a time, so the maximum memory used is that of N+1 elements.
Assuming you have a random or pseudo-random number generator, even if it's not guaranteed to return unique values, you can implement one that returns unique values each time using this code, assuming that the upper limit remains constant (i.e., you always call it with random(10), and don't call it with random(10) followed by random(11)).
The code doesn't check for errors. You can add that yourself if you want to.
It also requires a lot of memory if you want a large range of numbers.
#include <stdlib.h>

/* the function returns a random number between 0 and max - 1
 * not necessarily unique
 * I assume it's written
 */
int random(int max);

/* swap two numbers (defined below) */
void swap(int *x, int *y);

/* the function returns a unique random number between 0 and max - 1 */
int unique_random(int max)
{
    static int *list = NULL;    /* contains the numbers we haven't returned yet */
    static int in_progress = 0; /* 0 --> we haven't started randomizing numbers
                                 * 1 --> we have started randomizing numbers
                                 */
    static int count;           /* how many numbers are still unreturned */
    static int prev_max = 0;

    /* initialize the list */
    if (!in_progress || (prev_max != max)) {
        free(list);             /* free(NULL) is a no-op */
        list = malloc(sizeof(int) * max);
        prev_max = max;
        in_progress = 1;
        count = max;
        int i;
        for (i = max - 1; i >= 0; --i) {
            list[i] = i;
        }
    }

    /* now choose one from the list */
    int index = random(count);
    int retval = list[index];

    /* now we throw away the returned value.
     * we do this by shortening the list by 1
     * and replacing the element we returned with
     * the highest remaining number
     */
    count--;
    swap(&list[index], &list[count]);

    /* when the count reaches 0 we start over */
    if (count == 0) {
        in_progress = 0;
        free(list);
        list = NULL;
    }

    return retval;
}

/* swap two numbers */
void swap(int *x, int *y)
{
    int temp = *x;
    *x = *y;
    *y = temp;
}
Actually, there's a minor point to make here; a random number generator which is not permitted to repeat is not random.
Suppose you wanted to generate a series of 256 random numbers without repeats.
Create a 256-bit (32-byte) memory block initialized with zeros, let's call it b
Your looping variable will be n, the number of numbers yet to be generated
Loop from n = 256 to n = 1
Generate a random number r in the range [0, n)
Find the r-th zero bit in your memory block b, let's call it p
Put p in your list of results, an array called q
Flip the p-th bit in memory block b to 1
After the n = 1 pass, you are done generating your list of numbers
Here's a short example of what I am talking about, using n = 4 initially:
**Setup**
b = 0000
q = []
**First loop pass, where n = 4**
r = 2
p = 2
b = 0010
q = [2]
**Second loop pass, where n = 3**
r = 2
p = 3
b = 0011
q = [2, 3]
**Third loop pass, where n = 2**
r = 0
p = 0
b = 1011
q = [2, 3, 0]
**Fourth and final loop pass, where n = 1**
r = 0
p = 1
b = 1111
q = [2, 3, 0, 1]
Please check answers at
Generate sequence of integers in random order without constructing the whole list upfront
and also my answer lies there as
a very simple random sequence is 1 + ((r^x - 1) mod p), which will take values from 1 to p for x from 1 to p and will be random, where r and p are prime numbers and r ≠ p.
I asked a similar question before, but mine was for the whole range of an int; see Looking for a Hash Function /Ordered Int/ to /Shuffled Int/.
static std::unordered_set<long> s;
long l;
do {
    l = generator();            // roll a candidate
} while (s.find(l) != s.end()); // re-roll while we've seen it before
s.insert(l); // remember it so it can never be produced again
v.insert(l); // v is your destination container
generator() being your random number generator. You roll numbers as long as the entry is already in your set, then you record the one you found. You get the idea.
I did it with long for the example, but you should make that a template if your PRNG is templatized.
Alternative is to use a cryptographically secure PRNG that will have a very low probability to generate twice the same number.
If you don't mind poor statistical properties of the generated sequence, there is one method:
Let's say you want to generate N numbers of 1024 bits each. You can sacrifice some bits of each generated number to be a "counter".
So you generate each random number, but into some bits you have chosen you put a binary-encoded counter (from a variable you increase each time the next random number is generated).
You can split that counter into single bits and put them in some of the less significant bits of the generated number.
That way you are sure you get a unique number each time.
I mean for example each generated number looks like that:
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyxxxxyxyyyyxxyxx
where the x bits are taken directly from the generator, and the y bits are taken from the counter variable.
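A sketch of the simplest (contiguous) variant in C++, reserving the low 16 bits for the counter rather than scattering them:
#include <cstdint>

// Overwrite the low 16 bits of each raw random value with a counter.
// Uniqueness is guaranteed for up to 2^16 outputs, at the cost of those
// bits no longer being random.
uint64_t next_unique(uint64_t raw_random) {
    static uint16_t counter = 0;
    return (raw_random & ~uint64_t{0xFFFF}) | counter++;
}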
Mersenne twister
A description can be found on Wikipedia: Mersenne twister
Look at the bottom of the page for implementations in various languages.
The problem is to select a "random" sequence of N unique numbers from the range 1..M where there is no constraint on the relationship between N and M (M could be much bigger, about the same, or even smaller than N; they may not be relatively prime).
Expanding on the linear feedback shift register answer: for a given M, construct a maximal LFSR for the smallest power of two that is larger than M. Then just grab your numbers from the LFSR, throwing out numbers larger than M. On average, you will throw out at most half the generated numbers (since by construction more than half the range of the LFSR is less than M), so the expected running time of getting a number is O(1). You are not storing previously generated numbers, so space consumption is O(1) too. If you cycle before getting N numbers, then M is less than N (or the LFSR is constructed incorrectly).
You can find the parameters for maximum length LFSRs up to 168 bits here (from wikipedia): http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
Here's some java code:
/**
* Generate a sequence of unique "random" numbers in [0,M)
* @author dkoes
*
*/
public class UniqueRandom
{
long lfsr;
long mask;
long max;
private static long seed = 1;
//indexed by number of bits
private static int [][] taps = {
null, // 0
null, // 1
null, // 2
{3,2}, //3
{4,3},
{5,3},
{6,5},
{7,6},
{8,6,5,4},
{9,5},
{10,7},
{11,9},
{12,6,4,1},
{13,4,3,1},
{14,5,3,1},
{15,14},
{16,15,13,4},
{17,14},
{18,11},
{19,6,2,1},
{20,17},
{21,19},
{22,21},
{23,18},
{24,23,22,17},
{25,22},
{26,6,2,1},
{27,5,2,1},
{28,25},
{29,27},
{30,6,4,1},
{31,28},
{32,22,2,1},
{33,20},
{34,27,2,1},
{35,33},
{36,25},
{37,5,4,3,2,1},
{38,6,5,1},
{39,35},
{40,38,21,19},
{41,38},
{42,41,20,19},
{43,42,38,37},
{44,43,18,17},
{45,44,42,41},
{46,45,26,25},
{47,42},
{48,47,21,20},
{49,40},
{50,49,24,23},
{51,50,36,35},
{52,49},
{53,52,38,37},
{54,53,18,17},
{55,31},
{56,55,35,34},
{57,50},
{58,39},
{59,58,38,37},
{60,59},
{61,60,46,45},
{62,61,6,5},
{63,62},
};
//m is upperbound; things break if it isn't positive
UniqueRandom(long m)
{
max = m;
lfsr = seed; //could easily pass a starting point instead
//figure out number of bits
int bits = 0;
long b = m;
while((b >>>= 1) != 0)
{
bits++;
}
bits++;
if(bits < 3)
bits = 3;
mask = 0;
for(int i = 0; i < taps[bits].length; i++)
{
mask |= (1L << (taps[bits][i]-1));
}
}
//return -1 if we've cycled
long next()
{
long ret = -1;
if(lfsr == 0)
return -1;
do {
ret = lfsr;
//update lfsr - from wikipedia
long lsb = lfsr & 1;
lfsr >>>= 1;
if(lsb == 1)
lfsr ^= mask;
if(lfsr == seed)
lfsr = 0; //cycled, stick
ret--; //zero is stuck state, never generated so sub 1 to get it
} while(ret >= max);
return ret;
}
}
Here is a way to generate random values without repeating results. It also works for strings. It's in C#, but the logic should work in many places. Put the random results in a list and check if the new random element is in that list. If not, you have a new random element. If it is in that list, repeat the roll until you get an element that is not in the list.
List<string> Erledigte = new List<string>();
private void Form1_Load(object sender, EventArgs e)
{
label1.Text = "";
listBox1.Items.Add("a");
listBox1.Items.Add("b");
listBox1.Items.Add("c");
listBox1.Items.Add("d");
listBox1.Items.Add("e");
}
private void button1_Click(object sender, EventArgs e)
{
Random rand = new Random();
int index=rand.Next(0, listBox1.Items.Count);
string rndString = listBox1.Items[index].ToString();
if (listBox1.Items.Count <= Erledigte.Count)
{
return;
}
else
{
if (Erledigte.Contains(rndString))
{
//MessageBox.Show("vorhanden");
while (Erledigte.Contains(rndString))
{
index = rand.Next(0, listBox1.Items.Count);
rndString = listBox1.Items[index].ToString();
}
}
Erledigte.Add(rndString);
label1.Text += rndString;
}
}
For a sequence to be random there should not be any autocorrelation. The restriction that the numbers should not repeat means the next number depends on all the previous numbers, which means it is not random anymore...
If you can generate 'small' random numbers, you can generate 'large' random numbers by integrating them: add a small random increment to each 'previous'.
const size_t amount = 100;   // a limited amount of random numbers
std::vector<long int> numbers;
numbers.reserve( amount );
const long int spread = 250; // about 250 between consecutive numbers
numbers.push_back( myrandom( spread ) );
while( numbers.size() != amount ) {
    // an increment of at least 1 guarantees that no number repeats
    const long int increment = 1 + myrandom( spread - 1 );
    numbers.push_back( numbers.back() + increment );
}
myshuffle( numbers );
The myrandom and myshuffle functions I hereby generously delegate to others :)
To have non-repeated random numbers, and to avoid wasting time checking for duplicate numbers and rolling new numbers over and over, use the below method (sketched in code after the list), which assures the minimum usage of Rand:
for example, if you want to get 100 non-repeated random numbers:
1. fill an array with the numbers from 1 to 100
2. get a random number using the Rand function in the range (1-100)
3. use the generated random number as an index to get the value from the array (Numbers[IndexGeneratedFromRandFunction])
4. shift the numbers in the array after that index to the left
5. repeat from step 2, but now the range should be (1-99), and so on
now we have an array with different numbers!
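A sketch of those steps in C++ (shifting left is what std::vector::erase does; the names are mine):
#include <cstdlib>
#include <vector>

// Draw 'want' distinct values from 1..n, consulting rand() exactly once per draw.
std::vector<int> distinct_randoms(int n, int want) {
    std::vector<int> pool;
    for (int v = 1; v <= n; ++v) pool.push_back(v); // step 1: fill 1..n
    std::vector<int> result;
    for (int i = 0; i < want; ++i) {
        int index = std::rand() % pool.size();      // step 2: random index
        result.push_back(pool[index]);              // step 3: take the value
        pool.erase(pool.begin() + index);           // step 4: shift the rest left
    }                                               // step 5: the range shrinks by 1
    return result;
}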
#include <stdlib.h>

int main() {
    enum { COUNT = 100 }; /* "the number of them" */
    int b[COUNT];
    for (int i = 0; i < COUNT; i++) {
        int a = rand() % (COUNT + 1) + 1;
        int j = 0;
        while (j < i) {       /* compare against all previous picks */
            if (a == b[j]) {  /* duplicate: roll again and restart the check */
                a = rand() % (COUNT + 1) + 1;
                j = -1;
            }
            j++;
        }
        b[i] = a;
    }
}