Find n-th set of a powerset - c++

I'm trying to find the n-th set in a powerset. By n-th I mean that the powerset is generated in the following order -- first by size, then lexicographically -- so the indices of the sets in the powerset of [a, b, c] are:
0 - []
1 - [a]
2 - [b]
3 - [c]
4 - [a, b]
5 - [a, c]
6 - [b, c]
7 - [a, b, c]
While looking for a solution, all I could find was an algorithm to return the n-th permutation of a list of elements -- for example, here.
Context:
I'm trying to retrieve the entire powerset of a vector V of elements, but I need to do this with one set at a time.
Requirements:
I can only maintain two vectors at the same time: the first with the original items in the list, and the second with the n-th set from the powerset of V -- that's why I want an n-th set function here;
I need this to be done in time that is not linear in the size of the powerset -- which means it cannot list all the sets and then pick the n-th one;
my initial idea is to use bits to represent the positions, and get a valid mapping for what I need -- as the "incomplete" solution I posted.

I don't have a closed form for the function, but I do have a bit-hacking non-looping next_combination function, which you're welcome to, if it helps. It assumes that you can fit the bit mask into some integer type, which is probably not an unreasonable assumption given that there are 2^64 possibilities for the 64-element set.
As the comment says, I find this definition of "lexicographical ordering" a bit odd, since I'd say lexicographical ordering would be: [], [a], [ab], [abc], [ac], [b], [bc], [c]. But I've had to do the "first by size, then lexicographical" enumeration before.
// Generate bitmaps representing all subsets of a set of k elements,
// in order first by (ascending) subset size, and then lexicographically.
// The elements correspond to the bits in increasing magnitude (so the
// first element in lexicographic order corresponds to the 2^0 bit.)
//
// This function generates and returns the next bit-pattern, in circular order
// (so that if the iteration is finished, it returns 0).
//
template<typename UnsignedInteger>
UnsignedInteger next_combination(UnsignedInteger comb, UnsignedInteger mask) {
    UnsignedInteger last_one = comb & -comb;
    UnsignedInteger last_zero = (comb + last_one) & ~comb & mask;
    if (last_zero) return comb + last_one + (last_zero / (last_one * 2)) - 1;
    else if (last_one > 1) return mask / (last_one / 2);
    else return ~comb & 1;
}
The if (last_zero) line is doing the bit-hacking equivalent of the (extended) regular expression replacement below, which finds the last 01 in the string, flips it to 10 and shifts all the following 1s all the way to the right:
s/01(1*)(0*)$/10\2\1/
The else if (last_one > 1) line does this one (only if the previous one failed) to add one more 1 and shift the 1s all the way to the right:
s/(1*)0(0*)/\21\1/
I don't know if that explanation helps or hinders :)
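To make those two rewrites concrete, here is a small check of my own (not from the original answer), using binary literals (C++14) with the 4-element mask 0b1111 over {a, b, c, d}:
#include <cassert>
// compile together with the next_combination template above

int main() {
    // s/01(1*)(0*)$/10\2\1/ : 0110 ({b,c}) -> 1001 ({a,d}),
    // the first 2-subset containing d
    assert(next_combination(0b0110u, 0b1111u) == 0b1001u);
    // s/(1*)0(0*)/\21\1/ : 1100 ({c,d}) is the last 2-subset,
    // so the next size kicks in: 0111 ({a,b,c})
    assert(next_combination(0b1100u, 0b1111u) == 0b0111u);
    return 0;
}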
Here's a quick and dirty driver (the command-line argument is the size of the set, default 5, maximum the number of bits in an unsigned long):
#include <iostream>
#include <cstdlib>   // for atoi

template<typename UnsignedInteger>
std::ostream& show(std::ostream& out, UnsignedInteger comb) {
    out << '[';
    char a = 'a';
    for (UnsignedInteger i = 1; comb; i *= 2, ++a) {
        if (i & comb) {
            out << a;
            comb -= i;
        }
    }
    return out << ']';
}

int main(int argc, char** argv) {
    unsigned int n = 5;
    if (argc > 1) n = atoi(argv[1]);
    unsigned long mask = (1UL << n) - 1;
    unsigned long comb = 0;
    do {
        show(std::cout, comb) << std::endl;
        comb = next_combination(comb, mask);
    } while (comb);
    return 0;
}
It's hard to believe that this function might be useful for a set of more than 64 elements, given the size of the enumeration, but it might be useful to enumerate some limited part, such as all subsets of three elements. In this case, the bit-hackery is only really useful if the modification fits in a single word. Fortunately, that's easy to test; you simply need to do the computation as above on the last word in the bitset, up to the test for last_zero being zero. (In this case, you don't need to bitand mask, and indeed you might want to choose a different way of specifying the set size.)
If last_zero turns out to be zero (which will actually be pretty rare), then you need to do the transformation in some other way, but the principle is the same: find the first 0 which precedes a 1 (watch out for the case where the 0 is at the end of a word and the 1 at the beginning of the next one); change the 01 to 10, figure out how many 1s you need to move, and move them to the end.

Considering a list of elements L = [a, b, c], the powerset of L is given by:
P(L) = {
[],
[a], [b], [c],
[a, b], [a, c], [b, c],
[a, b, c]
}
Considering each position as a bit, you'd have the mappings:
id | positions - integer | desired set
 0 | [0 0 0]   - 0       | []
 1 | [1 0 0]   - 4       | [a]
 2 | [0 1 0]   - 2       | [b]
 3 | [0 0 1]   - 1       | [c]
 4 | [1 1 0]   - 6       | [a, b]
 5 | [1 0 1]   - 5       | [a, c]
 6 | [0 1 1]   - 3       | [b, c]
 7 | [1 1 1]   - 7       | [a, b, c]
As you see, the id is not directly mapped to the integers. A proper mapping needs to be applied, so that you have:
id | positions - integer | mapped    - integer
 0 | [0 0 0]   - 0       | [0 0 0]   - 0
 1 | [1 0 0]   - 4       | [0 0 1]   - 1
 2 | [0 1 0]   - 2       | [0 1 0]   - 2
 3 | [0 0 1]   - 1       | [0 1 1]   - 3
 4 | [1 1 0]   - 6       | [1 0 0]   - 4
 5 | [1 0 1]   - 5       | [1 0 1]   - 5
 6 | [0 1 1]   - 3       | [1 1 0]   - 6
 7 | [1 1 1]   - 7       | [1 1 1]   - 7
As an attempt at solving this, I came up with a binary tree to do the mapping -- I'm posting it so that someone may see a solution in it:
                           #
            _______________|______________
         a /                              \
      _____|_____                   _______|______
     b /         \                 /              \
    __|__       __|__           __|__           __|__
   c /   \     /     \         /     \         /     \
   [ ]   [c]  [b]  [b, c]    [a]  [a, c]   [a, b]  [a, b, c]
index: 0   3    2     6        1     5        4       7

Suppose your set has size N.
So, there are (N choose k) sets of size k. You can find the right k (i.e. the size of the nth set) very quickly just by subtracting off (N choose k) from n, for k = 0, 1, 2, ..., until n is about to go negative. This reduces your problem to finding the nth k-subset of an N-set.
The first (N-1 choose k-1) k-subsets of your N-set will contain its least element. So, if n is less than (N-1 choose k-1), pick the first element and recurse on the rest of the set. Otherwise, you have one of the (N-1 choose k) other sets; throw away the first element, subtract (N-1 choose k-1) from n, and recurse.
Code:
#include <stdio.h>

int ch[88][88];
int choose(int n, int k) {
    if (n < 0 || k < 0 || k > n) return 0;
    if (!k || n == k) return 1;
    if (ch[n][k]) return ch[n][k];
    return ch[n][k] = choose(n-1, k-1) + choose(n-1, k);
}

int nthkset(int N, int n, int k) {
    if (!n) return (1 << k) - 1;
    if (choose(N-1, k-1) > n) return 1 | (nthkset(N-1, n, k-1) << 1);
    return nthkset(N-1, n - choose(N-1, k-1), k) << 1;
}

int nthset(int N, int n) {
    for (int k = 0; k <= N; k++)
        if (choose(N, k) > n) return nthkset(N, n, k);
        else n -= choose(N, k);
    return -1; // not enough subsets of [N].
}

int main() {
    int N, n;
    scanf("%i %i", &N, &n);
    int a = nthset(N, n);
    for (int i = 0; i < N; i++) printf("%i", !!(a & 1 << i));
    printf("\n");
}
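For example, with input 3 4 the program prints 110 -- bit i stands for the (i+1)-th element, so this is [a, b], matching index 4 in the question's listing.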

Related

How can I create such list in Haskell using list comprehension

So I need to create a list like this:
[2,4,5,8,9,10,11,16,17,18,19,20,21,22,23,32 ..]
The pattern goes as follows:
The pattern goes as follows: 2^1, 2^2, 2^2+1, 2^3, 2^3+1, 2^3+2, 2^3+3, ... So the number of terms in each group (2^n, 2^n+1, 2^n+2, ...) doubles with each power of two. I hope you get the point.
I can create such a list using functions in Haskell, but I was interested in whether or not it is possible to do it solely with a list comprehension.
EDIT: Some people asked me to demonstrate a functional approach to this problem. Here it is
rep _ 0 = []
rep a b = a : rep (a+1) (b-1)
createlist a = rep (2^(a+1)) (2^a) ++ createlist (a+1)
So if we say take 50 (createlist 0) the results would be
[2,4,5,8,9,10,11,16,17,18,19,20,21,22,23,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82]
So you always need to call the function with initial parameter 0. It is a really nasty solution; I would like to make it simpler.
Based on your example, the list looks like:
2
4 5
8 9 10 11
16 17 18 19 20 21 22 23
32 33 34 35 36 37 38 39 40 41 ...
So for every i from 1 to infinity, we yield the elements in the range [2^i, 2^i + 2^(i-1)). We can write this directly as a list comprehension:
[ j | i <- [1..], j <- [2^i .. 2^i + 2^(i-1) - 1] ]
We can also let i take powers of two, and yield elements between i and div (3*i) 2 (exclusive), so:
[ j | i <- iterate (2*) 2, j <- [i .. div (3*i) 2 - 1] ]
We can turn that also into a list monad, like:
iterate (*2) 2 >>= \i -> [i..div (3*i) 2 - 1]
or more point-free (and point-less):
import Control.Monad(ap)
iterate (*2) 2 >>= ap enumFromTo (pred . flip div 2 . (3 *))
One could try to write the ith term of the list as a function f(i), where i >= 0.
The overall infinite list can be represented as
L_0 ++ L_1 ++ L_2 ++ ...
where each L_n is a finite list of the form
L_n = [ 2^(n+1), 2^(n+1) + 1, ..., 2^(n+1) + (2^n - 1) ]
The size of L_n is 2^n and we know that for any k, 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1 (it's a geometric progression) so if we're asked to find which finite list the ith term of the infinite list is in, we can find the highest integer m for which i >= 2^m - 1. Once that's done, we can safely say the ith term is in L_m. We can also say that the ith term of the infinite list is the (i - 2^m + 1)th element of L_m.
This allows us to define the final sequence (let's call it thatList) as
thatList :: [Int]
thatList = [ f i | i <- [0..] ]
and
f :: Int -> Int
f i = (2 ^ (m + 1)) + (i - (2 ^ m) + 1)
  where
    m = floor (logBase 2 (fromIntegral i + 1))
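As a quick sanity check, take 15 thatList yields [2,4,5,8,9,10,11,16,17,18,19,20,21,22,23], matching the start of the list above (note that the floating-point logBase could in principle be off by one for very large i).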

Find maximum length of good path in a grid

Given an N*N grid, we need to find a good path of maximum length, where a good path is defined as follows:
A good path always starts from a cell marked 0
We are only allowed to move left, right, up or down
If the value of the ith cell is A, then the value of the next cell in the path must be A+1.
Given these conditions, I need to find the length of the maximum good path, and also count how many paths have that maximum length.
Example : Let N=3 and we have 3*3 matrix as follow :
0 3 2
3 0 1
2 1 0
Then maximum good path length here is 3 and the count of such good paths is 4.
(The original post repeated the matrix four times, highlighting one maximal path in each copy; the highlighting does not survive here.)
This problem is a variation of the Longest Path Problem; however, your restrictions make this problem much easier, since the graph is actually a Directed Acyclic Graph (DAG), and thus the problem is solvable efficiently.
Define the directed graph G=(V,E) as following:
V = { all cells in the matrix} (sanity check: |V| = N^2)
E = { (u,v) | u is adjacent to v AND value(u) + 1 = value(v) }
Note that the resulting graph from the above definition is a DAG, because you cannot have any cycles, since it will result in having some edge e= (u,v) such that value(u) > value(v).
Now, you only need to find longest path in a DAG from any starting point. This is done by topological sort on the graph, and then using Dynamic Programming:
init:
    for every source in the DAG:
        D(v) = 0          if value(v) = 0
               -infinity  otherwise
step:
    for each node v from first to last (according to topological sort):
        D(v) = max{ D(u) + 1 | for each edge (u,v) }
When you are done, find the node v with maximal value D(v), this is the length of the longest "good path".
Finding the path itself is done by rerolling the above, retracing your steps back from the maximal D(v) until you reach back the initial node with value 0.
Complexity of this approach is O(V+E) = O(n^2)
Since you are looking for the number of longest paths, you can modify this solution a bit to count the number of paths reached to each node, as follows:
Topological sort the nodes, let the sorted array be arr (1)
For each node v from start to end of arr:
    if value(v) = 0:
        set D(v) = 1
    else:
        sum = 0
        for each u such that (u,v) is an edge: (2)
            sum = sum + D(u)
        D(v) = sum
The above finds, for each node v, the number of "good paths" D(v) that reach it. All you have to do now is find the maximal value x for which some node v has value(v) = x and D(v) > 0, and sum D(v) over all nodes with that value:
max = 0
numPaths = 0
for each node v:
    if value(v) == max:
        numPaths = numPaths + D(v)
    else if value(v) > max AND D(v) > 0:
        numPaths = D(v)
        max = value(v)
return numPaths
Notes:
(1) - a "regular" sort works here too, but it will take O(n^2*log n) time, while topological sort takes O(n^2) time
(2) - reminder: (u,v) is an edge if (1) u and v are adjacent and (2) value(u) + 1 = value(v)
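This isn't from the original answer, but a minimal C++17 sketch may make the two passes concrete: sorting the cells by value gives a valid topological order here (every edge goes from a value A to a value A+1), and a single sweep then computes both the maximum length and the number of maximum-length paths. The names and the hard-coded example matrix are mine.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    std::vector<std::vector<int>> M = {{0, 3, 2}, {3, 0, 1}, {2, 1, 0}};
    int N = (int)M.size();

    // Sorting cells by value is a topological order: edges only go A -> A+1.
    std::vector<std::pair<int, int>> order; // (value, flattened cell index)
    for (int i = 0; i < N * N; i++) order.push_back({M[i / N][i % N], i});
    std::sort(order.begin(), order.end());

    std::vector<int> len(N * N, -1);      // longest good path ending here (-1: unreachable)
    std::vector<long long> cnt(N * N, 0); // number of such paths
    int best = 0; long long numPaths = 0;
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};

    for (auto& [v, idx] : order) {
        int r = idx / N, c = idx % N;
        if (v == 0) { len[idx] = 1; cnt[idx] = 1; } // every 0-cell starts a path
        for (int d = 0; d < 4; d++) {
            int nr = r + dr[d], nc = c + dc[d];
            if (nr < 0 || nr >= N || nc < 0 || nc >= N) continue;
            int u = nr * N + nc;
            if (M[nr][nc] + 1 != v || len[u] < 0) continue; // not a usable edge (u, idx)
            if (len[u] + 1 > len[idx]) { len[idx] = len[u] + 1; cnt[idx] = cnt[u]; }
            else if (len[u] + 1 == len[idx]) cnt[idx] += cnt[u];
        }
        if (len[idx] > best) { best = len[idx]; numPaths = cnt[idx]; }
        else if (len[idx] == best && best > 0) numPaths += cnt[idx];
    }
    std::printf("length %d, count %lld\n", best, numPaths); // prints: length 3, count 4
    return 0;
}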
You can do this with a simple Breadth-First Search.
First find all cells marked 0. (This is O(N^2).) On each such cell put a walker. Each walker carries a number 'p' initialized to 1.
Now iterate:
All walkers stand on cells with the same number k. Each walker looks for neighboring cells (left, right, up or down) marked with k+1.
If no walker sees such a cell, the search is over. The length of the longest path is k, and the number of such paths is the sum of the p's of all the walkers.
If some walkers see such numbers, kill any walkers that don't.
Each walker moves into a good neighboring cell. If a walker sees more than one good cell, it divides into as many walkers as there are good cells, and one goes into each. (Each "child" has the same p value its "parent" had.) If two or more walkers meet in the same cell (i.e. if more than one path led to that cell) then they combine into a single walker, whose 'p' value is the sum of their 'p' values.
This algorithm is O(N^2), since no cell can be visited more than once, and the number of walkers cannot exceed the number of cells.
I did it using ActionScript, hope it's readable. I think it is working correctly but I may have missed something.
const N:int = 9; // field size
const MIN_VALUE:int = 0; // start value
var field:Array = [];

// create field - not relevant to the task
var probabilities:Array = [0, 1, 2, 3, 4, 5];
for (var i:int = 0; i < N * N; i++)
    field.push(probabilities[int(Math.random() * probabilities.length)]);
print_field();

// initial chain fill. We will find any chains of adjacent 0-1 elements.
var chain_list:Array = [];
for (var offset:int = 0; offset < N * N - 1; offset++) {
    if (offset < N * N - N) { // y coordinate is not the lowest
        var chain:Array = find_chain(offset, offset + N, MIN_VALUE);
        if (chain) chain_list.push(chain);
    }
    if ((offset % N) < N - 1) { // x coordinate is not the rightmost
        chain = find_chain(offset, offset + 1, MIN_VALUE);
        if (chain) chain_list.push(chain);
    }
}

var merged_chain_list:Array = chain_list;
var current_value:int = MIN_VALUE + 1;
// for each found chain, scan its higher end for more attached chains
// and merge them into a new chain if found
while (chain_list.length) {
    chain_list = [];
    for (i = 0; i < merged_chain_list.length; i++) {
        chain = merged_chain_list[i];
        offset = chain[chain.length - 1];
        if (offset < N * N - N) {
            var tmp:Array = find_chain(offset, offset + N, current_value);
            if (tmp) chain_list.push(merge_chains(chain, tmp));
        }
        if (offset >= N) { // y coordinate is not the topmost
            tmp = find_chain(offset, offset - N, current_value);
            if (tmp) chain_list.push(merge_chains(chain, tmp));
        }
        if ((offset % N) < N - 1) {
            tmp = find_chain(offset, offset + 1, current_value);
            if (tmp) chain_list.push(merge_chains(chain, tmp));
        }
        if (offset % N) {
            tmp = find_chain(offset, offset - 1, current_value);
            if (tmp) chain_list.push(merge_chains(chain, tmp));
        }
    }
    // save the last merged result if any and try the next value
    if (chain_list.length) {
        merged_chain_list = chain_list;
        current_value++;
    }
}

// final merged list is a list of chains of a same maximum length
print_chains(merged_chain_list);

function find_chain(offset1, offset2, current_value):Array {
    // result is always sorted from min to max
    var v1:int = field[offset1];
    var v2:int = field[offset2];
    if (v1 == current_value && v2 == current_value + 1) return [offset1, offset2];
    if (v2 == current_value && v1 == current_value + 1) return [offset2, offset1];
    return null;
}

function merge_chains(chain1:Array, chain2:Array):Array {
    var tmp:Array = [];
    for (var i:int = 0; i < chain1.length; i++) tmp.push(chain1[i]);
    tmp.push(chain2[1]);
    return tmp;
}

function print_field():void {
    for (var pos_y:int = 0; pos_y < N; pos_y++) {
        var offset:int = pos_y * N;
        var s:String = "";
        for (var pos_x:int = 0; pos_x < N; pos_x++) {
            var v:int = field[offset++];
            if (v == 0) s += "[0]"; else s += " " + v + " ";
        }
        trace(s);
    }
}

function print_chains(chain_list):void {
    var cl:int = chain_list.length;
    trace("\nchains found: " + cl);
    if (cl) trace("chain length: " + chain_list[0].length);
    for (var i:int = 0; i < cl; i++) {
        var chain:Array = chain_list[i];
        var s:String = "";
        for (var j:int = 0; j < chain.length; j++) s += chain[j] + ":" + field[chain[j]] + " ";
        trace(s);
    }
}
Sample output:
1 2 1 3 2 2 3 2 4
4 3 1 2 2 2 [0][0] 1
[0][0] 1 2 4 [0] 3 3 1
[0][0] 5 4 1 1 [0][0] 1
2 2 3 4 3 2 [0] 1 5
4 [0] 3 [0] 3 1 4 3 1
1 2 2 3 5 3 3 3 2
3 4 2 1 2 4 4 4 5
4 2 1 2 2 3 4 5 [0]
chains found: 2
chain length: 5
23:0 32:1 41:2 40:3 39:4
33:0 32:1 41:2 40:3 39:4
I implemented it in my own Lisp dialect, so the source code is not going to help you that much :-) ...
EDIT: Added a Python version too.
Anyway, the idea is:
write a function paths(i, j) --> (maxlen, number) that returns the maximal length of the paths starting from (i, j) and how many of them there are
this function is recursive; looking at the neighbors of (i, j) with value M[i][j]+1, it calls paths(ni, nj) to get the result for each valid neighbor
if the maximal length for a neighbor is bigger than current maximal length you set a new current maximal length and reset the counter
if the maximal length is the same as current then add the counter to the total
if the maximal length is smaller just ignore that neighbor result
cache the result of the computation for the cell (this is very important!). In my version the code is split into two mutually recursive functions: paths, which checks the cache first and calls compute-paths otherwise; compute-paths calls paths when processing neighbors. The caching of a recursive call is roughly equivalent to an explicit Dynamic Programming approach, but sometimes easier to implement.
To compute the final result you basically do the same computation but adding up the result for all 0 cells instead of considering neighbors.
Note that the number of different paths can become huge, and that's why enumerating all of them is not a viable option and caching/DP is a must: for example, for an N=20 matrix with values M[i][j] = i+j there are 35,345,263,800 maximal paths of length 38.
This algorithm is O(N^2) in time (each cell is visited at most once) and requires O(N^2) space for the cache and for the recursion. Of course you cannot expect to get anything better than this given that the input is composed of N^2 numbers itself and you need at least to read them to compute an answer.
(defun good-paths (matrix)
  (let** ((N (length matrix))
          (cache (make-array (list N N)))
          (#'compute-paths (i j)
            (let ((res (list 0 1))
                  (count (1+ (aref matrix i j))))
              (dolist ((ii jj) (list (list (1+ i) j) (list (1- i) j)
                                     (list i (1+ j)) (list i (1- j))))
                (when (and (< -1 ii N) (< -1 jj N)
                           (= (aref matrix ii jj) count))
                  (let (((maxlen num) (paths ii jj)))
                    (incf maxlen)
                    (cond
                      ((< (first res) maxlen)
                       (setf res (list maxlen num)))
                      ((= (first res) maxlen)
                       (incf (second res) num))))))
              res))
          (#'paths (i j)
            (first (or (aref cache i j)
                       (setf (aref cache i j)
                             (list (compute-paths i j))))))
          (res (list 0 0)))
    (dotimes (i N)
      (dotimes (j N)
        (when (= (aref matrix i j) 0)
          (let (((maxlen num) (paths i j)))
            (cond
              ((< (first res) maxlen)
               (setf res (list maxlen num)))
              ((= (first res) maxlen)
               (incf (second res) num)))))))
    res))
Edit
The following is a transliteration of the above in Python, which should be much easier to understand if you have never seen Lisp before...
def good_paths(matrix):
    N = len(matrix)
    cache = [[None]*N for i in xrange(N)]  # an NxN matrix of None
    def compute_paths(i, j):
        maxlen, num = 0, 1
        count = 1 + matrix[i][j]
        for (ii, jj) in ((i+1, j), (i-1, j), (i, j-1), (i, j+1)):
            if 0 <= ii < N and 0 <= jj < N and matrix[ii][jj] == count:
                nh_maxlen, nh_num = paths(ii, jj)
                nh_maxlen += 1
                if maxlen < nh_maxlen:
                    maxlen = nh_maxlen
                    num = nh_num
                elif maxlen == nh_maxlen:
                    num += nh_num
        return maxlen, num
    def paths(i, j):
        res = cache[i][j]
        if res is None:
            res = cache[i][j] = compute_paths(i, j)
        return res
    maxlen, num = 0, 0
    for i in xrange(N):
        for j in xrange(N):
            if matrix[i][j] == 0:
                c_maxlen, c_num = paths(i, j)
                if maxlen < c_maxlen:
                    maxlen = c_maxlen
                    num = c_num
                elif maxlen == c_maxlen:
                    num += c_num
    return maxlen, num

Histogram of the distribution of dice rolls

I saw a question on careercup, but I did not find the answer I wanted there. I wrote an answer myself and would like your comments on my analysis of the time complexity, and on the algorithm and code. Or you could provide a better algorithm in terms of time. Thanks.
You are given d > 0 fair dice, each with n > 0 "sides"; write a function that returns a histogram of the frequency of the results of dice rolls.
For example, for 2 dice, each with 3 sides, the results are:
(1, 1) -> 2
(1, 2) -> 3
(1, 3) -> 4
(2, 1) -> 3
(2, 2) -> 4
(2, 3) -> 5
(3, 1) -> 4
(3, 2) -> 5
(3, 3) -> 6
And the function should return:
2: 1
3: 2
4: 3
5: 2
6: 1
(My solution.) The time complexity of a brute-force depth-first search is O(n^d). However, you can use the DP idea to solve this problem. For example, take d=3 and n=3; you can use the result for d==1 when computing d==2:
d==1
num  #
 1   1
 2   1
 3   1

d==2
first roll      second roll is 1
num  #          num  #
 1   1           2   1
 2   1    ->     3   1
 3   1           4   1

first roll      second roll is 2
num  #          num  #
 1   1           3   1
 2   1    ->     4   1
 3   1           5   1

first roll      second roll is 3
num  #          num  #
 1   1           4   1
 2   1    ->     5   1
 3   1           6   1

Therefore, after the second roll:
num  #
 2   1
 3   2
 4   3
 5   2
 6   1
The time complexity of this DP algorithm is SUM_{i=1..d} { n * [n(i-1) - (i-1) + 1] } ~ O(n^2 * d^2), where the bracketed factor is the number of possible sums of the previous i-1 rolls (e.g. for d=2, n=3 the final sums range over 2~6).
The code, written in C++, is as follows:
#include <vector>
#include <utility>
using namespace std;

vector<pair<int,long long>> diceHisto(int numSide, int numDice) {
    int n = numSide * numDice;
    vector<long long> cur(n+1, 0), nxt(n+1, 0);
    for (int i = 1; i <= numSide; i++) cur[i] = 1;
    for (int i = 2; i <= numDice; i++) {
        int start = i-1, end = (i-1)*numSide; // range of previous sum of rolls
        for (int j = 1; j <= numSide; j++) {
            for (int k = start; k <= end; k++)
                nxt[k+j] += cur[k];
        }
        swap(cur, nxt);
        for (int j = start; j <= end; j++) nxt[j] = 0;
    }
    vector<pair<int,long long>> result;
    for (int i = numDice; i <= numSide*numDice; i++)
        result.push_back({i, cur[i]});
    return result;
}
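A hypothetical driver of my own (not part of the original post; needs C++17 for the structured binding, and diceHisto from above in scope) that reproduces the 2-dice, 3-side table from the question:
#include <iostream>

int main() {
    for (const auto& [sum, count] : diceHisto(3, 2))
        std::cout << sum << ": " << count << "\n";
    // prints 2: 1, 3: 2, 4: 3, 5: 2, 6: 1 -- the histogram from the question
    return 0;
}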
You can do it in O(n*d^2). First, note that the generating function for an n-sided die is p(n) = x + x^2 + x^3 + ... + x^n, and that the distribution for d throws has generating function p(n)^d. Representing the polynomials as arrays, you need O(nd) coefficients, and multiplying by p(n) can be done in a single pass in O(nd) time by keeping a rolling sum.
Here's some python code that implements this. It has one non-obvious optimisation: it throws out a factor x from each p(n) (or equivalently, it treats the dice as having faces 0,1,2,...,n-1 rather than 1,2,3,...,n) which is why d is added back in when showing the distribution.
def dice(n, d):
    r = [1] + [0] * (n-1) * d
    nr = [0] * len(r)
    for k in xrange(d):
        t = 0
        for i in xrange(len(r)):
            t += r[i]
            if i >= n:
                t -= r[i-n]
            nr[i] = t
        r, nr = nr, r
    return r

def show_dist(n, d):
    for i, k in enumerate(dice(n, d)):
        if k: print i + d, k

show_dist(6, 3)
The time and space complexity are easy to see: there are nested loops with d and (n-1)*d iterations, so the time complexity is O(n*d^2), and there are two arrays of size O(n*d) and no other allocation, so the space complexity is O(n*d).
Just in case, here is a simple example in Python using the OpenTurns platform.
import openturns as ot
d = 2 # number of dice
n = 6 # number of sides per die
# possible values
dice_distribution = ot.UserDefined([[i] for i in range(1, n + 1)])
# create the sum distribution d times the sum
sum_distribution = sum([dice_distribution] * d)
That's it!
print(sum_distribution)
will show you all the possible values and their corresponding probabilities:
>>> UserDefined(
{x = [2], p = 0.0277778},
{x = [3], p = 0.0555556},
{x = [4], p = 0.0833333},
{x = [5], p = 0.111111},
{x = [6], p = 0.138889},
{x = [7], p = 0.166667},
{x = [8], p = 0.138889},
{x = [9], p = 0.111111},
{x = [10], p = 0.0833333},
{x = [11], p = 0.0555556},
{x = [12], p = 0.0277778}
)
You can also draw the probability distribution function:
sum_distribution.drawPDF()

apply window function to get a recursive list, how can I do?

I just came across a challenging problem (from programming competition practice) that contains the following recursive sequence:
given 3 numbers m n k find element a[k] where
a[0] = m
a[1] = n
a[i] = a[i-1] + a[i-2] ; if floor(i/2) mod 2 = 1
a[i] = a[i-1] - a[i-4] ; if floor(i/2) mod 2 = 0
example case: for m=2 n=3 k=6 answer would be 9
a[0] = 2
a[1] = 3
a[2] = 3 + 2 = 5
a[3] = 5 + 3 = 8
a[4] = 8 - 2 = 6
a[5] = 6 - 3 = 3
a[6] = 3 + 6 = 9
...
This is how I generate the sequence (which obviously consumes lots of stack and is super slow even for the first 100 elements):
fbm :: Int -> Int -> Int -> Int
fbm m n 0 = m
fbm m n 1 = n
fbm m n x = let a = fbm m n (x-1)
                b = fbm m n (x-2)
                c = fbm m n (x-4)
            in case (x `div` 2) `mod` 2 of
                 1 -> a + b
                 0 -> a - c

fbs m n = map (\x -> fbm m n x) [0..]
Since the problem requires finding the element at a big (~1000+) index, I tried a different approach: limit the computation to a function of 4 inputs and apply that function with a 4-element window recursively on the list. But I couldn't implement any of these (meaning I can't figure out how to do it):
fs1 = map fst $ iterate next (a,b)
where next (a,b) = something
fs2 = m:n:scanl (gen) 2 fs2
where gen [a,b,c,d] = something
fs3 = scanl (genx m n 0 0) (repeat 0)
where genx a b c d = something
Question 1: Is any of my approaches a good way to solve this problem? (+ please show me an example of how to do it)
Question 2: How would you solve this kind of problem if I am on the wrong track?
This problem is similar to the "Fibonacci series" problem, but in my opinion there is a big difference between them.
Memoization is a common technique to solve this kind of problem.
For example, we can use it to compute Fibonacci series.
The following is a very simple illustration. It is not as good as that zipWith solution, but it is still a linear implementation.
fib :: Int -> Integer
fib 0 = 1
fib 1 = 1
fib n = fibs !! (n-1) + fibs !! (n-2)
fibs :: [Integer]
fibs = map fib [0..]
If we try to imitate the above fib and fibs, perhaps we would write the following code.
fbm :: Int -> Int -> Int -> Int
fbm m n 0 = m
fbm m n 1 = n
fbm m n x = let a = fbs m n !! (x-1)
                b = fbs m n !! (x-2)
                c = fbs m n !! (x-4)
            in case (x `div` 2) `mod` 2 of
                 1 -> a + b
                 0 -> a - c

fbs :: Int -> Int -> [Int]
fbs m n = map (fbm m n) [0..]
But the above fbs is also super slow. Replacing the list with an array makes little difference. The reason is simple: there is no memoization when we call fbs.
The answer becomes clearer if we compare the type signatures of fibs and fbs.
fibs :: [Integer]
fbs :: Int -> Int -> [Int]
One of them is a list of integers, while the other is a function.
To let memoization happen, we have to implement fbs in another way, e.g.:
fbs m n = let xs = map fbm [0..]
              fbm 0 = m
              fbm 1 = n
              fbm x = let a = xs !! (x-1)
                          b = xs !! (x-2)
                          c = xs !! (x-4)
                      in case (x `div` 2) `mod` 2 of
                           1 -> a + b
                           0 -> a - c
          in xs
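With this memoised definition, fbs 2 3 !! 6 evaluates to 9, matching the worked example in the question.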
Tail recursion is another common approach for this kind of problem.
fbm :: Int -> Int -> Int -> (Int, Int, Int, Int)
-- a[0] = m
-- a[1] = n
-- a[2] = m + n
-- a[3] = m + 2 * n
fbm m n 3 = (m+2*n, m+n, n, m)
fbm m n x = case (x `div` 2) `mod` 2 of
              1 -> (a+b, a, b, c)
              0 -> (a-d, a, b, c)
  where (a,b,c,d) = fbm m n (x-1)
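For example, fbm 2 3 6 evaluates to (9, 3, 6, 8): the first component is a[6] = 9 from the example, followed by a[5], a[4] and a[3].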
Last but not least, here is a mathematical solution.
a[0] = m
a[1] = n
a[2] = m + n
a[3] = m + 2n
a[4] = 2n
a[5] = n
a[6] = 3n
a[7] = 4n
a[8] = 2n
fbs m n = [m, n, m+n, m+2*n] ++ cycle [2*n, n, 3*n, 4*n]
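For m = 2 and n = 3 this gives take 13 (fbs 2 3) == [2,3,5,8,6,3,9,12,6,3,9,12,6], which agrees with the recurrence (in particular a[6] = 9, as in the example).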
I'd like to propose two solutions, also based on the concept of memoisation introduced here by dbaupp. Unlike the existing answer, the following solutions compute new elements of the list using indices instead of values of previous elements.
The first idea is the following:
fbs :: Int -> Int -> [Int]
fbs m n = m : n : map (fbMake m n) [2 ..]

fbMake :: Int -> Int -> Int -> Int
fbMake m n = f
  where f i | (i `div` 2) `mod` 2 == 1 = (xs !! (i - 1)) + (xs !! (i - 2))
            | otherwise                = (xs !! (i - 1)) - (xs !! (i - 4))
        xs = fbs m n
This solution builds elements of the fbs m n list from its memoised predecessors. Unfortunately, due to the fact that indexing of lists is O(n) it performs rather poorly.
What's better when it comes to indexing than lists? Arrays come into play. Here's the second solution.
import Data.Array

fbs :: Int -> Int -> Int -> [Int]
fbs m n k = m : n : map (fbm m n k) [2 .. k]

fbsArr :: Int -> Int -> Int -> Array Int Int
fbsArr m n k = listArray (0, k) (fbs m n k)

fbm :: Int -> Int -> Int -> Int -> Int
fbm m n k i | (i `div` 2) `mod` 2 == 1 = (xs ! (i - 1)) + (xs ! (i - 2))
            | otherwise                = (xs ! (i - 1)) - (xs ! (i - 4))
  where xs = fbsArr m n k
It's nearly the same as the first one, but this time the results are memoised in an array and indexing its elements is significantly faster. According to my tests it generates answers for (m, n, k) = (2, 3, 1000) over 10 times faster than the list-based approach. The answer in this case is fbsArr m n k ! k.

C++ - Circular array with lower/upper bounds?

I want to create something similar to a doubly linked list (but with arrays) that works with lower/upper bounds.
A typical circular array would probably look like:
next = (current + 1) % count;
previous = (current - 1) % count;
But what's the arithmetic to incorporate lower/upper bounds properly into this?
0 (lower bound item 1)
1
2 (upper bound item 1)
3 (lower bound item 2)
4 (upper bound item 2)
So that:
-> next on index 2 for item 1 returns 0
-> previous on index 0 for item 1 returns 2
-> next on index 4 for item 2 returns 3
-> previous on index 3 for item 2 returns 4
Thank you !
NOTE: Can't use external libraries.
In general mathematical terms:
next === current + 1 (mod count)
prev === current - 1 (mod count)
where === is the 'congruent' operator. Converting this to the modulus operator, it would be:
count = upper - lower + 1
next = ((current + 1 - (lower % count) + count) % count) + lower
prev = ((current - 1 - (lower % count) + count) % count) + lower
It would be up to you to find out the upper & lower bounds for each item. You could store this in a binary tree for fast retrieval. Maybe I'm not understanding your question.
(note that this assumes lower < upper and lower >= 0)
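Here is a minimal C++ sketch of those formulas (the function names are mine; subtracting lower before taking the modulus is equivalent to the lower % count term above for lower >= 0), checked against the four example cases from the question:
#include <cassert>

// count = upper - lower + 1; indices wrap within [lower, upper]
int next(int current, int lower, int upper) {
    int count = upper - lower + 1;
    return (current - lower + 1) % count + lower;
}

int previous(int current, int lower, int upper) {
    int count = upper - lower + 1;
    return (current - lower - 1 + count) % count + lower;
}

int main() {
    assert(next(2, 0, 2) == 0);      // next on index 2 for item 1 returns 0
    assert(previous(0, 0, 2) == 2);  // previous on index 0 for item 1 returns 2
    assert(next(4, 3, 4) == 3);      // next on index 4 for item 2 returns 3
    assert(previous(3, 3, 4) == 4);  // previous on index 3 for item 2 returns 4
    return 0;
}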
          +=======+      +=======+      +=======+
          |  Obj  | ---> |  Obj  | ---> |  Obj  |
 buffer   |   1   | <--- |   2   | <--- |   3   |
          +=======+      +=======+      +=======+
 index        0              1              2      /* our first run */
 index        3              4              5      /* second run */
and so on ...
So, you see for a 3-member list, the 1st item is indexed by 0, 3, 6, etc. Similarly, the second item is indexed by 1, 4 (1 + 3), 7 (4 + 3), ...
The general rule is: next <- (next + 1) % size, where size = upper - lower + 1
Using this formula we get:
 curr  |  next
-------+-----------------
   0   |  (0 + 1) % 3 = 1
-------+-----------------
   1   |  (1 + 1) % 3 = 2
-------+-----------------
   2   |  (2 + 1) % 3 = 0
-------+-----------------
Hope that helps
I wrote this article a few years back about a circular STL iterator.
http://noveltheory.com/Iterators/Iterator_N0.htm
It will work on any STL collection (vector, boost::array, etc.)
Boost has a Circular container that I believe you can set bounds on as well.
In fact the Example on that page looks very similar to what you are saying here.
But anyway, you could accomplish the math portion of it easily using a modulus:
So say your max was 3:
int MAX = 3;
someArray[ 0 % MAX ]; // This would return element 0
someArray[ 1 % MAX ]; // This would return element 1
someArray[ 3 % MAX ]; // This would return element 0
someArray[ 4 % MAX ]; // This would return element 1