Uniform spread of boolean values in a vector - C++

I have a bool vector such as:
vector<bool> bVec;
And I fill it with random 1's and 0's within a loop using push_back:
bVec.push_back(0);
bVec.push_back(1);
I can shuffle the contents:
random_shuffle(bVec.begin(), bVec.end());
This works fine for a randomly shuffled vector; however, if I want uniformly distributed values I can't seem to get a proper vector.
That is, I want to count the number of 1's and 0's and spread them out as uniformly as possible. For example, if I have 3 1's and 7 0's I would want something like
0,1,0,0,1,0,0,1,0,0 (or similar)
Writing my own function has proven to be fiddly, time-consuming, and prone to bugs. Is there a function out there that I have not been able to find that will do this?
Thanks.

In case anyone gets here and needs an answer (instead of arguing about semantics like in the comments above), I solved the problem as follows, explained step by step:
1) Create a vector, randomly fill it with 1's and 0's, count the 1's and use that count as the desired amount, then set it all back to 0's. Now you have a target number of 1's to spread uniformly within the vector.
2) Use a C++ implementation of MATLAB's linspace function (a few variations can be found here: https://gist.github.com/jmbr/2375233). Given how many 1's you want and the size of the vector, it returns equidistant points that you can round down to get indexes into your vector.
3) Set those points to 1.
The result is a perfectly spaced vector.
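For reference, a minimal C++ sketch of the approach above, assuming a hand-rolled linspace helper (the gist linked above offers fuller variants); uniformSpread and its parameters are my own names:

#include <cmath>
#include <vector>

// Hypothetical helper: nPoints equidistant values from start to end inclusive.
std::vector<double> linspace(double start, double end, int nPoints) {
    std::vector<double> pts;
    if (nPoints == 1) { pts.push_back(start); return pts; }
    double step = (end - start) / (nPoints - 1);
    for (int i = 0; i < nPoints; ++i)
        pts.push_back(start + i * step);
    return pts;
}

std::vector<bool> uniformSpread(int nOnes, int size) {
    std::vector<bool> vec(size, false);
    // Equidistant points across the index range, rounded down to indexes.
    for (double p : linspace(0, size - 1, nOnes))
        vec[static_cast<int>(std::floor(p))] = true;
    return vec;
}

For example, uniformSpread(3, 10) sets indexes 0, 4, and 9, giving 1,0,0,0,1,0,0,0,0,1 - the same kind of spacing as the pattern in the question.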

Related

Efficiently storing a matrix with many zeros, dynamically

Background:
I'm working in C++.
I recall there being a way to efficiently (memory-wise) store "arrays" (where an array might be made of std::vectors, std::sets, etc. - I don't care how, so long as it is memory-efficient and I'm able to check the value of each element) of 0's and 1's (or, equivalently, true/false, etc.), wherein there is a disproportionate number of one or the other (e.g. mostly zeroes).
I've written an algorithm which populates an "array" (currently a vector<vector<size_t>>) with 0's and 1's according to some function. For these purposes, we can more or less consider it as being done randomly. The array is to be quite large (of variable size, on the order of 1000 columns and 1E+8 or more rows), and always rectangular.
There need to be this many data points. In the best of times, my machine quickly becomes resource-constrained and slows to a crawl. At worst, I get std::bad_alloc.
Putting aside what I intend to do with this array, what is the most memory-efficient way to store a rectangular array of 1's and 0's (or T/F, etc.), where there are mostly 1's or 0's (and I know which is most populous)?
Note that the array needs to be created "dynamically" (i.e. one element at a time), elements must maintain their location, and I need only to check the value of individual elements after creation. I'm concerned about memory footprint, nothing else.
This is known as a sparse array or matrix.
std::set<std::pair<int,int>> bob;
If you want (7,100) to be 1, just bob.insert({7,100}). Missing elements are 0. You can use bob.count({3,7}) for a 0/1 value if you like.
Now, looping over both columns and rows is tricky; easiest is to keep two sets, one with the pair order reversed.
If you have no need to loop in order, use an unordered set instead.
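Since std::unordered_set has no default hash for std::pair, a hedged sketch of what the unordered variant might look like (PairHash is my own helper):

#include <cstddef>
#include <functional>
#include <unordered_set>
#include <utility>

struct PairHash {
    std::size_t operator()(const std::pair<int,int>& p) const {
        // Pack both coordinates into one 64-bit value; good enough for a sketch.
        long long packed = ((long long)p.first << 32) ^ (unsigned int)p.second;
        return std::hash<long long>()(packed);
    }
};

std::unordered_set<std::pair<int,int>, PairHash> bob;

// bob.insert({7, 100});  // set element (7,100) to 1
// bob.count({3, 7});     // 0/1 lookup, O(1) on average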

Random pairs from two lists

My question is similar to this one.
I have two lists: X with n elements and Y with m elements - let's say they hold row and column indices for an n x m matrix A. Now, I'd like to write something to k random places in matrix A.
I thought of two solutions:
Get a random element x from X and a random element y from Y. Check if something is already written to A[x][y] and if not - write. But if k is close to m*n I can shoot like this forever.
Create an m*n array with all possible combinations of indices, shuffle it, draw first k elements and write there. But the problem I see here is that if both n and m are very big, the newly created n*m array may be huge (and shuffling may take some time too).
Karoly Horvath suggested combining the two. I guess I'd have to pick a threshold t and:
if ((double)k / (m*n) > t) {  // cast avoids integer division truncating to 0
    // use option 2
} else {
    // use option 1
}
Any advice on how to pick t then?
Are there any other (better) approaches I missed?
There's an elegant algorithm due to Floyd for sampling without replacement from a range of integers. You can map the resulting integers in [0, n*m) to coordinates by the C++ function [m](int i) { return std::make_pair(i / m, i % m); }.
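A hedged sketch of Floyd's algorithm combined with that mapping; the function name and interface here are my own choices, not from the original:

#include <random>
#include <unordered_set>
#include <utility>
#include <vector>

// Returns k distinct cells of an n x m matrix, with O(k) expected work.
std::vector<std::pair<int,int>> sampleCells(int n, int m, int k, std::mt19937& gen) {
    const int N = n * m;                                // flattened range [0, N)
    std::unordered_set<int> chosen;
    std::vector<std::pair<int,int>> cells;
    auto toCoord = [m](int i) { return std::make_pair(i / m, i % m); };
    for (int j = N - k; j < N; ++j) {
        std::uniform_int_distribution<int> dist(0, j);  // inclusive bounds
        int t = dist(gen);
        int pick = t;
        if (!chosen.insert(t).second) {                 // t already taken:
            chosen.insert(j);                           // j is guaranteed free
            pick = j;
        }
        cells.push_back(toCoord(pick));
    }
    return cells;
}

Unlike the collision-and-retry option, the cost here does not blow up as k approaches m*n: the loop always runs exactly k times.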
The best approach depends on how full your resulting matrix will be. If you are going to fill more than half of it, your collision rate (i.e. hitting a random spot that is already written to) is going to be high and will cause you to loop a lot more than you would want.
I would not generate all possibilities; instead I would build it as you go using a list of lists: one for all possible values of X, and from each of those a list of possible values of Y. I would initialize the X list but not the Y ones.
Every time you pick a value of x for the first time, you create a dictionary or list of m elements, then remove the one you use. The next time you pick that x you will have m-1 elements; once an X value runs out of elements, remove it from the list so it does not get picked again. This way you can guarantee never to pick an occupied space, and you do not need to generate n*m possible options (see the sketch below).
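A hedged sketch of that lazy list-of-lists idea (all identifiers mine). One caveat worth a comment: picking the row uniformly over-weights cells in nearly-exhausted rows, so this guarantees no repeats but is not perfectly uniform over the remaining cells.

#include <random>
#include <utility>
#include <vector>

// rows holds the X values that still have free columns; freeCols[x] is built
// lazily on the first visit to row x. The caller initializes rows = {0..n-1}
// and freeCols as n empty vectors.
std::pair<int,int> pickFreeCell(std::vector<int>& rows,
                                std::vector<std::vector<int>>& freeCols,
                                int m, std::mt19937& gen) {
    std::uniform_int_distribution<int> rowDist(0, (int)rows.size() - 1);
    int r = rowDist(gen);
    int row = rows[r];
    if (freeCols[row].empty())                   // first visit: fill 0..m-1
        for (int c = 0; c < m; ++c)
            freeCols[row].push_back(c);
    std::uniform_int_distribution<int> colDist(0, (int)freeCols[row].size() - 1);
    int c = colDist(gen);
    int col = freeCols[row][c];
    freeCols[row][c] = freeCols[row].back();     // O(1) removal: swap-and-pop
    freeCols[row].pop_back();
    if (freeCols[row].empty()) {                 // row exhausted: retire it
        rows[r] = rows.back();
        rows.pop_back();
    }
    return std::make_pair(row, col);
}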
You have n x m elements, e.g. 200 elements for a 10 x 20 matrix. Picking one out of 200 should be easy. Point is, whatever you do, you can flatten the two dimensions into one, reducing that part of the problem.
Notes:
Use floor divide and modulo operations to get row and column out of the index.
Blacklist: Store the picked index in a set to quickly skip those that were already written.
Whitelist: Store the indices that are not yet picked in a set. Whether this is better than blacklisting depends on how full your matrix will be.
Using the right container type for the set might become important; it doesn't have to be std::set. For the blacklist, you only need fast lookup and fast insertion; a vector<bool> might actually work pretty well. For the whitelist, you need fast random access and fast deletion; a vector<unsigned> with the remaining indices would be a good choice.
Prepare to switch between either method depending on the circumstances.
For an n x m matrix, you can consider [0..n*m-1] the indexes of the matrix elements.
Filling in a random index is rather trivial: just generate a random number between 0 and n*m-1, and that is the position to be filled.
Doing this operation repeatedly is a little more tricky:
you can test whether you have already written something to a position and regenerate the random number, but as you fill the matrix you will need more and more regenerations;
a better solution is to put all the indexes in a vector of n*m elements. As you generate an index, you remove it from the vector, and next time you generate a random index between 0 and N-1, where N is the number of remaining indexes.
example:
vector<int> indexVec;
for (int i = 0; i < n*m; i++)
    indexVec.push_back(i);

int nrOfIndexes = n*m;
while (nrOfIndexes > 1)
{
    int index = rand() % nrOfIndexes;         // slot among the remaining indexes
    processMatrixLocation(indexVec[index]);   // the flat matrix index stored there
    indexVec.erase(indexVec.begin() + index); // O(n); swap with back() + pop_back() is O(1)
    nrOfIndexes--;
}
processMatrixLocation(indexVec[0]);           // the last remaining index

How to k-shuffle an array with STL?

I am testing an algorithm that sorts a k-sorted array (every element is at most k positions away from its correct sorted position).
I am having a hard time generating test data - I can't randomly swap elements by k positions because I may end up swapping an element twice. I could track which elements I swapped, but then I need O(N) space. I could also use a random heap of size k + 1, but that sounds silly.
Is there anything built into the STL that can help me with this? This seems like a common problem, but my brief research only turned up algorithms for total shuffles (I think the STL implements Fisher-Yates).
It seems an odd problem, since preparing random test data does not need to be ultra-efficient, and the data can usually be whatever. You can have the test values be the correct positions of the elements, or pairs that give a range of correct positions. For example, an array of pairs:
1,1
2,4
2,4
2,4
5,6
5,6
7,7
...
Store the state of the random generator somewhere.
Choose two random elements whose positions are not more than k away from the other's original position (or range) and swap them. Repeat that N times and your test data is ready.
If you need to get the same sequence later, restore the random generator state and repeat the algorithm.
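A hedged sketch of that swap idea (identifiers mine): each slot's correct position is simply its initial value, and a swap is committed only if both elements would stay within k of their correct positions.

#include <cstdlib>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

std::vector<int> kShuffle(int n, int k, std::mt19937& gen) {
    std::vector<int> a(n);
    std::iota(a.begin(), a.end(), 0);       // a[i] == correct position of a[i]
    std::uniform_int_distribution<int> dist(0, n - 1);
    for (int t = 0; t < n; ++t) {           // "repeat that N times"
        int i = dist(gen), j = dist(gen);
        // After the swap, a[j] would sit at slot i and a[i] at slot j.
        if (std::abs(a[j] - i) <= k && std::abs(a[i] - j) <= k)
            std::swap(a[i], a[j]);
    }
    return a;                               // k-sorted by construction
}

To test with real values, sort your data first and permute it with the returned index order.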

Fastest way to copy into an array random elements from another array C++

The title almost tells everything, but I will exemplify: suppose that you have an array a of chars, and another array b, also of chars. Is there a better way to put into a only the chars located at prime positions in b? Suppose that we have an array with the prime positions.
For now my naive code looks like this.
for (i = 0; i < n; i++)
    a[i] = b[j + prime[i]];
Here prime[i] stores the prime positions of b, and b is much larger than a. j is an arbitrary position in b (there will not be an out-of-bounds problem because j + prime[i] does not exceed the bounds of b).
What would be better? One way: if the prime[] locations are known at compile time, we could add a prefetch to bring the cache lines in ahead of time.
This improves memory access time.
You can do this when you read (or copy) values into the array, using a prime function that tells you whether a number is prime or not.
A quick way I sketched is to generate prime numbers until they reach your array capacity, then simply iterate through them and copy the desired elements from your source array. Several optimizations come to mind, such as having a "preprocess" function that generates the prime numbers once so your program can reuse the list.
The prime number list will get cached, and it will take a lot less time to be accessed (it's unlikely that you have an extremely huge prime number list).
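A hedged sketch of such a preprocess step: a sieve of Eratosthenes that collects every prime position up to the size of b once, so the list can be reused across copies (function name mine):

#include <vector>

std::vector<int> primesUpTo(int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int p = 2; p <= limit; ++p) {
        if (composite[p]) continue;          // p survived all smaller primes
        primes.push_back(p);
        for (long long q = (long long)p * p; q <= limit; q += p)
            composite[q] = true;             // strike out multiples of p
    }
    return primes;
}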
Let's look at this from an algorithmic perspective.
You want to apply an operation to each of the entries in array A. Assuming you know nothing about the state of the items in array A, that places the lower bound on run time at O(n), linear time. You must iterate through every member because you don't have any additional information that could let you "skip" some elements or otherwise optimize the process.
That said, the challenge then becomes keeping the algorithm at O(n). The code you show does this, assuming you then follow up by copying the non-prime positions in the same manner. So for the copying step, no, there is not a way to make this any faster from an algorithmic point of view. That doesn't mean that how you perform that step won't affect the speed, though.

How to ensure that randomly generated numbers are not being repeated? [duplicate]

Possible Duplicates:
Unique (non-repeating) random numbers in O(1)?
How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N
I want to generate random numbers in a certain range, and I must be sure that each new number is not a duplicate of a former one. One solution is to store previously generated numbers in a container and check each new number against the container. If the number is already in the container, we generate again; otherwise we use it and add it to the container. But with each new number this operation becomes slower and slower. Is there a better approach, or any rand function that can work faster and still ensure uniqueness?
EDIT: Yes, there is a limit (for example, from 0 to 1,000,000,000). But I want to generate 100,000 unique numbers! (It would be great if the solution used Qt features.)
Is there a range for the random numbers? If you have a limit and keep generating unique random numbers, you'll end up with a list of all numbers from x..y in random order, where x..y is the valid range of your random numbers. If this is the case, you might improve speed greatly by simply generating the list of all numbers in x..y and shuffling it, instead of generating the numbers one at a time.
I think there are 3 possible approaches; depending on range size and the performance pattern needed, you can pick a different algorithm.
Create a random number and see if it is in a (sorted) list. If not, add it and return it; else try another.
Your list will grow and consume memory with every number you need. If every number is 32 bits, it will grow by at least 32 bits every time.
Every new random number increases the hit ratio, and this will make it slower.
O(n^2) - I think
Create a bit array with one bit for every number in the range. Mark a bit 1/true if that number has already been returned.
Every number now only takes 1 bit. This can still be a problem if the range is big, but it is only 1 bit per number.
Every new random number increases the hit ratio, and this will make it slower.
O(n*2)
Pre-populate a list with all the numbers, shuffle it, and return the Nth number.
The list will not grow, and returning numbers will not get slower,
but generating the list might take a long time and a lot of memory.
O(1)
Depending on the speed needed, you could store the lists in a database; there's no need for them to be in memory except for speed.
Fill a list with the numbers you need, then shuffle the list and pick your numbers from one end.
If you use a simple 32-bit linear congruential RNG (such as the so-called "Minimal Standard"), all you have to do is store the seed value you use and compare each generated number to it. If you ever reach that value again, your sequence is starting to repeat itself and you're out of values. This is O(1), but of course limited to 2^32-1 values (though I suppose you could use a 64-bit version as well).
There is a class of pseudo-random number generators that, I believe, has the properties you want: the Linear congruential generator. If defined properly, it will produce a list of integers from 0 to N-1, with no two numbers repeating until you've used all of the numbers in the list once.
#include <stdint.h>

/*
 * Choose these values as follows:
 *
 * The MODULUS and INCREMENT must be relatively prime.
 * The MULTIPLIER-1 must be divisible by all prime factors of the MODULUS.
 * The MULTIPLIER-1 must be divisible by 4, if the MODULUS is divisible by 4.
 *
 * In addition, the MODULUS must be <= 2**32 (0x0000000100000000ULL).
 *
 * A small example would be 8, 5, 3.
 * A larger example would be 256, 129, 251.
 * A useful example would be 0x0000000100000000ULL, 1664525, 1013904223.
 */
#define MODULUS    (0x0000000100000000ULL)
#define MULTIPLIER (1664525)
#define INCREMENT  (1013904223)

static uint64_t seed;

uint32_t lcg( void ) {
    uint64_t temp;
    temp = seed * MULTIPLIER + INCREMENT; /* 64-bit intermediate product */
    seed = temp % MODULUS;                /* 32-bit end result */
    return (uint32_t) seed;
}
All you have to do is choose a MODULUS such that it is larger than the number of numbers you'll need in a given run.
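A hedged usage sketch, wired to the question's numbers: draw 100,000 unique values below 1,000,000,000 by rejecting out-of-range outputs. Because the full-period LCG visits every 32-bit value exactly once per period, accepted values cannot repeat.

#include <stdint.h>
#include <stdio.h>

extern uint32_t lcg(void);   /* the generator defined above */

int main(void) {
    const uint32_t LIMIT = 1000000000u;  /* upper bound from the question */
    const int WANTED = 100000;
    int found = 0;
    while (found < WANTED) {
        uint32_t r = lcg();
        if (r < LIMIT) {                 /* rejection keeps the range right */
            /* use r here; it is guaranteed not to have appeared before */
            ++found;
        }
    }
    printf("generated %d unique numbers\n", found);
    return 0;
}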
It wouldn't be random if there were such a pattern, would it?
As far as I know, you would have to store and filter out all unwanted numbers...
unsigned int N = 1000;
vector<unsigned int> vals(N);
for (unsigned int i = 0; i < vals.size(); ++i)
    vals[i] = i;
std::random_shuffle(vals.begin(), vals.end()); // std::shuffle with an engine in C++11 and later
unsigned int random_number_1 = vals[0];
unsigned int random_number_2 = vals[1];
unsigned int random_number_3 = vals[2];
// etc.
You could store the numbers in a vector and fetch them by index (0..n-1). After each random draw, remove the indexed number from the vector, then generate the next index in the interval 0..n-2, and so on.
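A minimal sketch of that idea, assuming unique draws from 0..n-1 (names mine). Swapping the chosen slot with the last element makes the removal O(1) instead of the O(n) of vector::erase:

#include <cstdlib>
#include <vector>

unsigned int drawUnique(std::vector<unsigned int>& pool) {
    unsigned int i = rand() % pool.size();  // index into the remaining values
    unsigned int value = pool[i];
    pool[i] = pool.back();                  // overwrite slot with last element
    pool.pop_back();                        // shrink the pool
    return value;
}

// std::vector<unsigned int> pool(n);
// for (unsigned int i = 0; i < n; ++i) pool[i] = i;
// unsigned int x = drawUnique(pool);      // repeat up to n times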
If they can't be repeated, they aren't random.
EDIT:
Furthermore, if they can't be repeated, they don't fit in a finite computer.
How many random numbers do you need? Maybe you can apply a shuffle algorithm to a precalculated array of random numbers?
A random generator cannot output values that depend on previously output values, because then they wouldn't be random. However, you can improve performance by using several pools of random values, each combined with a different salt value, which divides the quantity of numbers to check against by the number of pools you have.
If the range of the random numbers doesn't matter, you could use a really large range and hope you don't get any collisions. If your range is billions of times larger than the number of elements you expect to create, your chances of a collision are small, but still there. If the numbers don't need to have an actual random distribution, you could use a two-part number {counter}{random x digits}; that would ensure a unique number, but it wouldn't be randomly distributed.
There's not going to be a pure functional approach that isn't O(n^2) on the number of results returned so far - every time a number is generated you will need to check against every result so far. Additionally, think about what happens when you're returning e.g. the 1000th number out of 1000 - you will require on average 1000 tries until the random algorithm comes up with the last unused number, with each attempt requiring an average of 499.5 comparisons with the already-generated numbers.
It should be clear from this that your description as posted is not quite exactly what you want. The better approach, as others have said, is to take a list of e.g. 1000 numbers upfront, shuffle it, and then return numbers from that list incrementally. This will guarantee you're not returning any duplicates, and return the numbers in O(1) time after the initial setup.
You can allocate enough memory for an array of bits, with 1 bit for each possible number, and check/set the bit for every generated number. For example, for numbers from 0 to 65535 you will need only 8192 bytes (8 KB) of memory.
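A minimal sketch of the bit-array approach for that 0..65535 example; std::vector<bool> packs one bit per number, so the whole table is about 8 KB (names mine):

#include <random>
#include <vector>

unsigned int nextUnique(std::vector<bool>& seen, std::mt19937& gen) {
    std::uniform_int_distribution<unsigned int> dist(0, (unsigned int)seen.size() - 1);
    unsigned int r;
    do {
        r = dist(gen);      // candidate in [0, 65536)
    } while (seen[r]);      // retry while the candidate is already used
    seen[r] = true;         // mark as generated
    return r;
}

// std::vector<bool> seen(65536, false);
// unsigned int x = nextUnique(seen, gen);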
Here's an interesting solution I came up with:
Assume you have the numbers 1 to 1000 and you don't have enough memory.
You could put all 1000 numbers into an array and remove them one by one, but that risks a memory overflow error.
You could split the array in two, so you have an array of 1-500 and one empty array.
You could then check whether a number exists in array 1, or doesn't exist in the second array.
So, assuming you have 1000 numbers, you can get a random number from 1-1000. If it's less than 500, check array 1 and remove it if present. If it's NOT in array 2, you can add it.
This halves your memory usage.
If you propagate this using recursion, you can split your 500 array into a 250 and an empty array.
Assuming empty arrays use no space, you can decrease your memory usage quite a bit.
Searching will be massively faster too, because if you break it down a lot and generate a number such as 29: it's less than 500, less than 250, less than 125, less than 62, less than 31, greater than 15, so you do those 6 comparisons, then check the array containing an average of 16/2 items - 8 in total.
I should patent this search, although I bet it already exists!
Especially given the desired number of values, you want a Linear Feedback Shift Register.
Why?
No shuffle step, and no need to keep track of values you've already hit. As long as you draw fewer values than the full period, you should be fine.
It turns out that the Wikipedia article has some C++ code examples which are more tested than anything I could give you off the top of my head. Note that you'll want to pull values from inside the loops - the loops just step the shift register through its states. You can see this in the snippet here.
(Yes, I know this was mentioned briefly in the dupe - I saw it as I was revising. Given that it hasn't been brought up here and is the best way to solve the poster's question, I think it should be brought up again.)
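For the record, a hedged sketch of a 16-bit Galois LFSR along the lines of the Wikipedia examples mentioned above (taps x^16 + x^14 + x^13 + x^11 + 1, i.e. mask 0xB400); from any nonzero seed it visits all 65535 nonzero states exactly once before repeating:

#include <cstdint>
#include <cstdio>

int main() {
    uint16_t lfsr = 0xACE1u;         // any nonzero seed
    const uint16_t start = lfsr;
    unsigned period = 0;
    do {
        unsigned lsb = lfsr & 1u;    // bit that falls out on this shift
        lfsr >>= 1;
        if (lsb)
            lfsr ^= 0xB400u;         // toggle the tap bits
        ++period;                    // lfsr is now a fresh, never-seen value
    } while (lfsr != start);
    printf("period = %u\n", period); // prints 65535
    return 0;
}

Pull each value of lfsr from inside the loop, as noted above; the loop itself only steps the register.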
Let's say size = 100,000; create an array of that size. Generate random numbers and put them into the array. The question is which index each number goes to: randomNumber % size gives you the index.
When you put in the next number, use the same function for the index and check whether a value already exists there. If not, put it in; if it does, generate a new number and try again. This way you can build the set quickly. The disadvantage is that you will never get two numbers whose last sections are the same.
For example, with last sections like
1231232444556
3458923444556
you will never have both numbers in your list, even though they are totally different, because their last sections are the same.
First off, there's a huge difference between random and pseudorandom. There's no way to generate perfectly random numbers from a deterministic process (such as a computer) without bringing in some physical process like latency between keystrokes or another entropy source.
The approach of saving all the numbers generated will slow down the computation rather quickly; the more numbers you have, the larger your storage needs, until you've filled up all available memory. A better method would be (as someone has already suggested) to use a well-known pseudorandom number generator such as the linear congruential generator; it's super fast, requiring only modular multiplication and addition, and the theory behind it gets a lot of attention in Vol. 2 of Knuth's TAOCP. That way, the theory guarantees a rather large period before repetition, and the only storage needed is for the parameters and the seed used.
If you have no problem with a value being calculable from the previous one, an LFSR or LCG is fine. If you don't want one output value to be computable from another, you can use a block cipher in counter mode to generate the output sequence, given that the cipher's block length is equal to the output length.
Use the HashSet generic class. This class does not allow duplicate values, so you can put all of your generated numbers into a HashSet and check whether a given value already exists. A HashSet determines the existence of an item very quickly, and it does not slow down as the collection grows, which is its biggest feature.
For example:
HashSet<int> numbers = new HashSet<int>();
numbers.Add(1);
numbers.Add(2);
numbers.Add(1); // duplicate: the set still contains only { 1, 2 }
foreach (var item in numbers)
{
    Console.WriteLine(item);
}
Console.ReadKey();