Tiles Stacking Problem to build a stable stack - C++

I am studying dynamic programming on GeeksForGeeks and am having trouble understanding the Tiles Stacking Problem and the way it is solved.
A stable tower of height n is a tower consisting of exactly n tiles of unit height stacked vertically, such that no bigger tile is placed on a smaller tile.
We have an infinite number of tiles of sizes 1, 2, …, m. The task is to calculate the number of different stable towers of height n that can be built from these tiles, with the restriction that at most k tiles of each size may be used in the tower.
Note: Two towers of height n are different if and only if there exists a height h (1 <= h <= n) at which the two towers have tiles of different sizes.
For example:
Input : n = 3, m = 3, k = 1.
Output : 1
Possible sequences: { 1, 2, 3}.
Hence answer is 1.
Input : n = 3, m = 3, k = 2.
Output : 7
{1, 1, 2}, {1, 1, 3}, {1, 2, 2},
{1, 2, 3}, {1, 3, 3}, {2, 2, 3},
{2, 3, 3}.
The way to solve it is to count the number of decreasing sequences of length n using numbers from 1 to m, where every number can be used at most k times. We can recursively compute the count for length n from the counts for shorter lengths.
Declare a 2D array dp[][], where each state dp[i][j] denotes the number of decreasing sequences of length i using numbers from j to m. We need to take care of the fact that a number can be used at most k times. This can be done by considering 0 to k occurrences of a number. Hence our recurrence relation becomes:
dp[i][j] = Σ_{x=0..k} dp[i-x][j+1], where x is the number of occurrences of j
Also, for a fixed j the sum only involves the k+1 consecutive entries dp[i-k][j+1], …, dp[i][j+1], so we can maintain a prefix-sum array over i for each j. This gets rid of the factor of k per state.
I have read this algorithm many times, but I don't understand it or how to prove its correctness. I have tried to find an explanation on the internet but found only variations of the problem. Please help me understand it.

Observe that tiles of the largest size (m) can appear only at the bottom, and their appearances are consecutive.
Your recurrence becomes:
T(n, m, k) = Σ_{i=0..k} T(n-i, m-1, k)
Then you have to define the base cases of the recurrence:
T(n,m,1) = // can you tell what this is?
T(n,1,k) = // can you tell what this is?
T(1,m,k) = m // this is easy

We can prove it by forming a logical recurrence:
(A) If the maximum stack height given m and k (that is, m * k) is smaller than n, we cannot create any stack.
(B) If only one tile is allowed, we can choose m different sizes for that tile.
(C) If only one size is allowed, if k is greater than or equal to n, we can construct one stack of n tiles of size 1; otherwise, zero stacks.
(D) Otherwise, for each possible count x of tiles of size m stacked at the bottom, there is exactly one way to place them, multiplied by the number of ways to stack the remaining (n - x) tiles using sizes of at most (m - 1), since size m has been used up.
To convert the recurrence to bottom-up dynamic programming, we initialise the matrix using the base cases of the recurrence, and fill in subsequent entries using its general-case logical branch.
Here's a demonstration of the recurrence in JavaScript (sorry I'm not versed in C++ but the first function, f, which calculates just the count, should be very easy to convert):
// Returns the count
function f(n, m, k){
  if (n > m * k)
    return 0;
  if (n == 1)
    return m;
  if (m == 1)
    return n <= k ? 1 : 0;
  let result = 0;
  for (let x=0; x<=k; x++)
    result += f(n - x, m - 1, k);
  return result;
}

// Returns the sequences
function g(n, m, k){
  if (n > m * k)
    return [];
  if (n == 1)
    return new Array(m).fill(0).map((_, i) => [i + 1]);
  if (m == 1)
    return n <= k ? [new Array(n).fill(1)] : [];
  let result = [];
  for (let x=0; x<=k; x++){
    const pfx = new Array(x).fill(m);
    const prev = g(n - x, m - 1, k);
    for (let s of prev)
      result.push(pfx.concat(s));
  }
  return result;
}

var inputs = [
  [3, 3, 1],
  [3, 3, 2],
  [1, 2, 2]
];

for (let args of inputs){
  console.log('' + args);
  console.log(f(...args));
  console.log(JSON.stringify(g(...args)));
  console.log('');
}
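For reference, a direct C++ translation of the counting function f might look like the following (a sketch; the explicit n <= 0 base cases are a small defensive addition, not part of the JavaScript above). Memoising on (n, m) would turn this into the dp[i][j] table described in the question.
#include <iostream>

// Number of stable towers of height n built from tile sizes 1..m,
// each size used at most k times (same recurrence as f above).
long long countTowers(int n, int m, int k) {
    if (n < 0) return 0;                 // overshot the target height
    if (n == 0) return 1;                // exactly filled: one way (the empty completion)
    if (n > (long long)m * k) return 0;  // not enough tiles left to reach height n
    if (n == 1) return m;                // any of the m sizes works for the last tile
    if (m == 1) return n <= k ? 1 : 0;
    long long result = 0;
    for (int x = 0; x <= k; x++)         // x = number of tiles of size m used (at the bottom)
        result += countTowers(n - x, m - 1, k);
    return result;
}

int main() {
    std::cout << countTowers(3, 3, 1) << "\n"; // 1
    std::cout << countTowers(3, 3, 2) << "\n"; // 7
    std::cout << countTowers(1, 2, 2) << "\n"; // 2
}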

Related

Zero Subsequences problem - What's wrong with my C++ solution?

Problem Statement:
Given an array arr of n integers, count the number of non-empty subsequences of the given array such that the product of their maximum element and minimum element is zero. Since this number can be huge, compute it modulo 10^9 + 7.
A subsequence of an array is defined as a sequence obtained by deleting some elements of the array (possibly none) without changing the order of the remaining elements.
Example
Given n = 3, arr = [1, 0, -2].
There are 7 non-empty subsequences of arr:
[1]: minimum = 1, maximum = 1, min * max = 1
[1, 0]: minimum = 0, maximum = 1, min * max = 0
[1, 0, -2]: minimum = -2, maximum = 1, min * max = -2
[0]: minimum = 0, maximum = 0, min * max = 0
[0, -2]: minimum = -2, maximum = 0, min * max = 0
[1, -2]: minimum = -2, maximum = 1, min * max = -2
[-2]: minimum = -2, maximum = -2, min * max = 4
There are 3 subsequences whose minimum * maximum = 0:
[1, 0], [0], [0, -2]. Hence the answer is 3.
I tried to come up with a solution by counting the number of zeroes, positive numbers, and negative numbers, and then adding the possible subsequences (2^count for each group) to an answer variable.
My answer is way off though, it's 10 when the expected answer is 3. Can someone please point out my mistake?
#include <bits/stdc++.h>
using namespace std;
#define int long long

int zeroSubs(vector<int> arr){
    int x = 0, y = 0, z = 0, ans = 0;
    for(int i = 0; i < arr.size(); i++){
        if(arr[i] == 0) z++;
        else if(arr[i] < 0) x++;
        else y++;
    }
    ans += ((int)pow(2, z))*((int)pow(2, x));
    ans += ((int)pow(2, y))*((int)pow(2, z));
    ans += ((int)pow(2, z));
    return ans;
}

int32_t main()
{
    // directly passed the sample test case as an array
    cout << zeroSubs({1, 0, -2});
    return 0;
}
Made this slight change in the logic, thanks a lot to everyone for the valuable feedback. It works now:
ans += ((1<<z)-1)*((1<<x)-1);
ans += ((1<<y)-1)*((1<<z)-1);
ans += ((1<<z)-1);
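For larger inputs, a version of the corrected logic with the modulus applied might look like this (a minimal sketch; modpow is a standard fast-exponentiation helper added here because 1<<z and pow overflow once the counts get large):
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;

// (base^exp) % MOD via binary exponentiation
long long modpow(long long base, long long exp) {
    long long res = 1;
    base %= MOD;
    while (exp > 0) {
        if (exp & 1) res = res * base % MOD;
        base = base * base % MOD;
        exp >>= 1;
    }
    return res;
}

long long zeroSubs(const vector<long long>& arr) {
    long long x = 0, y = 0, z = 0;    // negatives, positives, zeros
    for (long long v : arr) {
        if (v == 0) z++;
        else if (v < 0) x++;
        else y++;
    }
    long long pz = modpow(2, z) - 1;  // subsets containing at least one zero
    long long px = modpow(2, x) - 1;  // non-empty subsets of negatives
    long long py = modpow(2, y) - 1;  // non-empty subsets of positives
    // zeros + some negatives, zeros + some positives, zeros alone
    return (pz * px % MOD + pz * py % MOD + pz) % MOD;
}

int main() {
    cout << zeroSubs({1, 0, -2}) << "\n"; // 3
}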

Knapsack using dynamic programming

There is a common algorithm for solving the knapsack problem using dynamic programming, but it does not work for W = 750000000 because the allocation fails with bad_alloc. Any ideas how to solve this problem for my value of W?
int n = this->items.size();
std::vector<std::vector<uint64_t>> dps(this->W + 1, std::vector<uint64_t>(n + 1, 0));
for (int j = 1; j <= n; j++)
    for (int k = 1; k <= this->W; k++) {
        if (this->items[j - 1]->wts <= k)
            dps[k][j] = std::max(dps[k][j - 1], dps[k - this->items[j - 1]->wts][j - 1] + this->items[j - 1]->cost);
        else
            dps[k][j] = dps[k][j - 1];
    }
First of all, you can use only one dimension to solve the knapsack problem. This will reduce your memory from dp[W][n] (n*W space) to dp[W] (W space). You can look here: 0/1 Knapsack Dynamic Programming Optimazion, from 2D matrix to 1D matrix
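For illustration, the one-dimensional version might be sketched like this (items here is assumed to be a vector of {weight, cost} pairs rather than your class members; note it still allocates W + 1 entries, so it only addresses the 2D-to-1D part of the problem):
#include <cstdint>
#include <vector>
#include <algorithm>

// 0/1 knapsack with a single dp row: dp[k] = best cost achievable with capacity k.
// Iterating k downwards ensures each item is used at most once.
uint64_t knapsack1D(uint64_t W, const std::vector<std::pair<uint64_t, uint64_t>>& items) {
    std::vector<uint64_t> dp(W + 1, 0);
    for (const auto& [wt, cost] : items)        // assumes every wt >= 1
        for (uint64_t k = W; k >= wt; k--)
            dp[k] = std::max(dp[k], dp[k - wt] + cost);
    return dp[W];
}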
But even if you use only dp[W], your W is really high and might still be too much memory. If your items are big, you can use an approach that reduces the number of possible weights. First, realize that you don't need all positions up to W, only those that can be formed as a sum of the item weights.
For example:
W = 500
weights = [100, 200, 400]
You will never use position dp[473] of your matrix, because the items can occupy only positions p = [0, 100, 200, 300, 400, 500]. It is easy to see that this problem is the same as when:
W = 5
weights = [1,2,4]
Another more complicated example:
W = 20
weights = [5, 7, 8]
Using the same approach as before, you don't need all weights from 0 to 20, because the items can only fill the positions
p = [0, 5, 7, 5 + 7, 5 + 8, 7 + 8, 5 + 7 + 8]
p = [0, 5, 7, 12, 13, 15, 20]
and you can reduce your matrix from dp[20] to dp[size of p] = dp[7].
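A sketch of that idea: keep dp states only for weights that are actually reachable, keyed by the exact sum (the names here are illustrative, and this only pays off when the number of distinct reachable sums stays small):
#include <cstdint>
#include <vector>
#include <algorithm>
#include <unordered_map>

// best[s] = maximum cost achievable with total weight exactly s (s <= W).
uint64_t knapsackSparse(uint64_t W, const std::vector<std::pair<uint64_t, uint64_t>>& items) {
    std::unordered_map<uint64_t, uint64_t> best;
    best[0] = 0;
    for (const auto& [wt, cost] : items) {
        std::unordered_map<uint64_t, uint64_t> next = best; // copy: each item used at most once
        for (const auto& [w, c] : best) {
            if (w + wt > W) continue;
            uint64_t& slot = next[w + wt];
            slot = std::max(slot, c + cost);
        }
        best.swap(next);
    }
    uint64_t answer = 0;
    for (const auto& kv : best) answer = std::max(answer, kv.second);
    return answer;
}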
You do not show n, but even if we assume it is 1, let's see how much data you are trying to allocate:
W * 64 * 2 bits // ignoring the overhead of the vectors
This comes out to be:
750000000 * 64 * 2 bits = 12 GB (about 11.18 GiB)
I am guessing this is more space than your program will allow. You are going to need to take a new approach; perhaps try to handle the problem in multiple blocks, considering the first and second halves separately and then swapping.

Query points on the vertices of a Hamming cube

I have N points that lie only on the vertices of a cube of dimension D, where D is something like 3.
A vertex may contain no point, and every point has coordinates in {0, 1}^D. I am only interested in query time, as long as the memory cost is reasonable (not exponential in N, for example :) ).
Given a query that lies on one of the cube's vertices and an input parameter r, find all the vertices (and thus points) that have Hamming distance <= r from the query.
What's the way to go in a C++ environment?
I am thinking of a k-d tree, but I am not sure, and any input, even approximate, would be appreciated! Since Hamming distance comes into play, bitwise manipulations should help (e.g. XOR).
There is a nice bithack to go from one bitmask with k bits set to the lexicographically next permutation, which means it's fairly simple to loop through all masks with k bits set. XORing these masks with an initial value gives all the values at hamming distance exactly k away from it.
So for D dimensions, where D is less than 32 (otherwise change the types),
uint32_t limit = (1u << D) - 1;
for (int k = 1; k <= r; k++) {
    uint32_t diff = (1u << k) - 1;
    while (diff <= limit) {
        // v is the input vertex
        uint32_t vertex = v ^ diff;
        // use it
        diff = nextBitPermutation(diff);
    }
}
Where nextBitPermutation may be implemented in C++ as something like (if you have __builtin_ctz)
uint32_t nextBitPermutation(uint32_t v) {
    // see https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
    uint32_t t = v | (v - 1);
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}
Or for MSVC (not tested)
uint32_t nextBitPermutation(uint32_t v) {
    // see https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
    uint32_t t = v | (v - 1);
    unsigned long tzc;
    _BitScanForward(&tzc, v); // v != 0 so the return value doesn't matter
    return (t + 1) | (((~t & -~t) - 1) >> (tzc + 1));
}
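Putting the loop and nextBitPermutation together (using the __builtin_ctz version, repeated here so the snippet is self-contained), a small test driver might look like this; D, v and r are arbitrary example values, and the query vertex itself (distance 0) is printed separately since the loop starts at k = 1:
#include <cstdint>
#include <cstdio>

uint32_t nextBitPermutation(uint32_t v) {
    // https://graphics.stanford.edu/~seander/bithacks.html#NextBitPermutation
    uint32_t t = v | (v - 1);
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

int main() {
    const int D = 4;              // dimension of the cube (example value)
    const uint32_t v = 0b0110;    // query vertex (example value)
    const int r = 2;              // radius (example value)

    std::printf("distance 0: %u\n", v); // the query vertex itself
    uint32_t limit = (1u << D) - 1;
    for (int k = 1; k <= r; k++) {
        uint32_t diff = (1u << k) - 1;
        while (diff <= limit) {
            std::printf("distance %d: %u\n", k, v ^ diff);
            diff = nextBitPermutation(diff);
        }
    }
}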
If D is really low, 4 or lower, the old popcnt-with-pshufb trick works really well and everything generally lines up nicely, like this:
#include <cstdint>
#include <immintrin.h> // SSSE3 intrinsics

uint16_t query(int vertex, int r, int8_t* validmask)
{
    // validmask should be an array of 16 int8_t's,
    // 0 for a vertex that doesn't exist, -1 if it does
    __m128i valid = _mm_loadu_si128((__m128i*)validmask);
    __m128i t0 = _mm_set1_epi8(vertex);
    __m128i r0 = _mm_set1_epi8(r + 1);
    __m128i all = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
    __m128i popcnt_lut = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4);
    __m128i dist = _mm_shuffle_epi8(popcnt_lut, _mm_xor_si128(t0, all));
    __m128i close_enough = _mm_cmpgt_epi8(r0, dist);
    __m128i result = _mm_and_si128(close_enough, valid);
    return _mm_movemask_epi8(result);
}
This should be fairly fast; fast compared to the bithack above (nextBitPermutation, which is fairly heavy, is used a lot there) and also compared to looping over all vertices and testing whether they are in range (even with builtin popcnt, that automatically takes at least 16 cycles and the above shouldn't, assuming everything is cached or even permanently in a register). The downside is the result is annoying to work with, since it's a mask of which vertices both exist and are in range of the queried point, not a list of them. It would combine well with doing some processing on data associated with the points though.
This also scales down to D = 3 of course; just mark none of the vertices >= 8 as valid. D > 4 can be done similarly, but it takes more code, and since this is really a brute-force solution that is only fast due to parallelism, it fundamentally gets exponentially slower in D.
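As a usage sketch for the SSE version, assuming the query function above is in scope and GCC/Clang builtins are available, the returned mask could be consumed like this (the three occupied vertices are made up for illustration):
#include <cstdint>
#include <cstdio>

int main() {
    // valid[i] = -1 if a point sits on vertex i of the 4-dimensional cube, 0 otherwise
    int8_t valid[16] = {0};
    valid[0b0000] = -1;
    valid[0b0011] = -1;
    valid[0b1111] = -1;

    uint16_t mask = query(0b0011, 1, valid); // existing vertices within distance 1 of 0b0011
    while (mask) {
        int vertex = __builtin_ctz(mask);    // index of the lowest set bit
        std::printf("match at vertex %d\n", vertex); // prints: match at vertex 3
        mask &= mask - 1;                    // clear that bit
    }
}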

How to multiply 2 polynomials using a linked list without a power variable?

My teacher had me use a linked list to represent a polynomial, as in this code:
class Node {
public:
    int data; // only data, has no power
    Node* next;
};

class PolyList {
private:
    Node* pHead;
public:
    PolyList();
    ~PolyList();
    ............
};
The list is read from the file input.txt.
Example: 2 4 0 3 ---> polynomial = (2x^3 + 4x^2 + 3)
How can I implement a method that multiplies two polynomials, list1 and list2?
I searched Google and this site but only found polynomials represented with both a coefficient variable and a power variable. My polynomial has only coefficients, and I cannot change this structure.
I need help from everybody. Thanks a lot.
What you want to do is convolution of the coefficients. If you have access to Matlab (or Octave), you can try it out:
% Note this is Matlab, just for demonstration
p1 = [1 1]; % x + 1
p2 = [1 0]; % x
p3 = conv(p1, p2) %x*(x + 1) => x^2 + x
% gives p3 = [1 1 0], i.e., x^2 + x
Edit: I didn't give any details about implementing this - You can probably find examples of convolution using linked lists by googling it.
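In C++ the same convolution can be sketched directly on the coefficient lists; this version (a sketch, with multiplyPoly as an illustrative name) assumes coefficients are stored highest power first, matching the input-file format in the question:
#include <iostream>
#include <list>
#include <vector>

// Convolution of two coefficient lists (highest power first).
// A product of polynomials of degrees d1 and d2 has degree d1 + d2,
// i.e. size1 + size2 - 1 coefficients.
std::list<int> multiplyPoly(const std::list<int>& a, const std::list<int>& b) {
    std::vector<int> out(a.size() + b.size() - 1, 0);
    std::size_t i = 0;
    for (int ca : a) {
        std::size_t j = 0;
        for (int cb : b)
            out[i + j++] += ca * cb;
        ++i;
    }
    return std::list<int>(out.begin(), out.end());
}

int main() {
    std::list<int> p1 = {1, 1}; // x + 1
    std::list<int> p2 = {1, 0}; // x
    for (int c : multiplyPoly(p1, p2)) std::cout << c << ' '; // 1 1 0  ->  x^2 + x
    std::cout << '\n';
}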
You could use two nested for-loops to multiply the two lists together, saving the products into a third list: (pseudocode)
define list3 as a new PolyList of length (length of list1 + length of list2 - 1), filled with zeros
for each element A at index i in list1
    for each element B at index j in list2
        save list3's element at index i + j as (A * B + (element at index i + j))
So for example, (x^3 - 2x + 1) * (x^2 + 4) = [1, 0, -2, 1] * [1, 0, 4] = [1, 0, 2, 1, -8, 4], i.e. x^5 + 2x^3 + x^2 - 8x + 4.
*Note: The resulting list is one element shorter than the combined length of the two input lists, because for example x^3 * x^3 = x^6, which is recorded at index 6, counting zero-based from the right.*
Also note: The two input lists do not need to be the same length, as long as the resulting list is sized as above.
A good way to figure out how to program a problem like this is to imagine exactly the steps you would do to solve the problem, write those down, and then translate that into the language you're using.
A way to do it can be the following:
list<int> multiply(const list<int>& l1, const list<int>& l2) {
    // m[p] holds the coefficient of the p-th power of the product;
    // a product of polynomials with s1 and s2 coefficients has s1 + s2 - 1 of them
    vector<int> m(l1.size() + l2.size() - 1, 0);
    unsigned int i = 0;
    for (auto it1 = l1.begin(); it1 != l1.end(); ++it1, ++i) {
        unsigned int j = 0;
        for (auto it2 = l2.begin(); it2 != l2.end(); ++it2, ++j)
            m[i + j] += *it1 * *it2;
    }
    list<int> to_ret(m.begin(), m.end());
    return to_ret;
}
You put the coefficient of the i-th power of the resulting polynomial in m[i].
To fill m, it is sufficient to iterate over every pair (i, j) in [0, l1.size) x [0, l2.size) and add to m[i + j] the product of the coefficient of the i-th power of the first polynomial and that of the j-th power of the second one.
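A short usage sketch, assuming the multiply above: since m[i] is the coefficient of the i-th power, the file order from the question (2 4 0 3 for 2x^3 + 4x^2 + 3) has to be reversed before calling it and reversed back for printing.
#include <iostream>
#include <list>
using namespace std;

int main() {
    list<int> p = {2, 4, 0, 3}; // file order: 2x^3 + 4x^2 + 3
    p.reverse();                // lowest power first, as multiply expects
    list<int> sq = multiply(p, p);
    sq.reverse();               // back to highest power first for printing
    for (int c : sq) cout << c << ' ';
    cout << '\n';               // 4 16 16 12 24 0 9  ->  (2x^3 + 4x^2 + 3)^2
}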
If you want to multiply two polynomials of order M and N, then the resulting polynomial will be of order M + N. So you need to create an output linked list whose length is one less than the sum of the lengths of the two input lists (that is, M + N + 1 coefficients). You then just iterate through the two input lists, multiplying the terms and summing them into the output list.
Hint: you might want to try doing this by hand first, i.e. with pencil and paper, so that you understand the process before you try to code it.

Coin flipping game: Optimization problem

There is a rectangular grid of coins, with heads represented by the value 1 and tails represented by the value 0. You represent this using a 2D integer array table (between 1 and 10 rows/columns, inclusive).
In each move, you choose any single cell (R, C) in the grid (R-th row, C-th column) and flip the coins in all cells (r, c), where r is between 0 and R, inclusive, and c is between 0 and C, inclusive. Flipping a coin means inverting the value of a cell from zero to one or one to zero.
Return the minimum number of moves required to change all the cells in the grid to tails. This will always be possible.
Examples:
1111
1111
returns: 1
01
01
returns: 2
010101011010000101010101
returns: 20
000
000
001
011
returns: 6
This is what i tried:
Since the order of flipping doesn't matter, and making a move on a coin twice is like not making a move at all, we can just find all distinct combinations of moves and minimize the size of the good combinations (good meaning those that give all tails).
This can be done by making a set consisting of all coins, each represented by an index (i.e. if there were 20 coins in all, this set would contain 20 elements, indexed 1 to 20). Then make all possible subsets and see which of them give the answer (i.e. whether making a move on each coin in the subset gives us all tails). Finally, minimize the size of the good combinations.
I don't know if I've been able to express myself too clearly... I'll post code if you want.
Anyway, this method is too time-consuming and wasteful, and not feasible for more than 20 coins (in my code).
How to go about this?
I think a greedy algorithm suffices, with one step per coin.
Every move flips a rectangular subset of the board. Some coins are included in more subsets than others: the coin at (0,0) upper-left is in every subset, and the coin at lower-right is in only one subset, namely the one which includes every coin.
So, choosing the first move is obvious: flip every coin if the lower-right corner must be flipped. Eliminate that possible move.
Now, the lower-right coin's immediate neighbors, left and above, can only potentially be flipped by a single remaining move. So, if that move must be performed, do it. The order of evaluation of the neighbors doesn't matter, since they aren't really alternatives to each other. However, a raster pattern should suffice.
Repeat until finished.
Here is a C++ program:
#include <iostream>
#include <valarray>
#include <cstdlib>
#include <ctime>
using namespace std;

void print_board( valarray<bool> const &board, size_t cols ) {
    for ( size_t i = 0; i < board.size(); ++ i ) {
        cout << board[i] << " ";
        if ( i % cols == cols-1 ) cout << endl;
    }
    cout << endl;
}

int main() {
    srand( time(NULL) );

    int const rows = 5, cols = 5;
    valarray<bool> board( false, rows * cols );
    for ( size_t i = 0; i < board.size(); ++ i ) board[i] = rand() % 2;
    print_board( board, cols );

    int taken_moves = 0;
    for ( size_t i = board.size(); i > 0; ) {
        if ( ! board[ -- i ] ) continue;

        size_t sizes[] = { i%cols +1, i/cols +1 }, strides[] = { 1, cols };
        gslice cur_move( 0, valarray<size_t>( sizes, 2 ),
                            valarray<size_t>( strides, 2 ) );
        board[ cur_move ] ^= valarray<bool>( true, sizes[0] * sizes[1] );

        cout << sizes[1] << ", " << sizes[0] << endl;
        print_board( board, cols );
        ++ taken_moves;
    }
    cout << taken_moves << endl;
}
Not C++. I agree with Potatoswatter that the optimal solution is greedy, but I wondered whether a linear Diophantine system also works. This Mathematica function does it:
f[ei_] := (
xdim = Dimensions[ei][[1]];
ydim = Dimensions[ei][[2]];
(* Construct XOR matrixes. These are the base elements representing the
possible moves *)
For[i = 1, i < xdim + 1, i++,
For[j = 1, j < ydim + 1, j++,
b[i, j] = Table[If[k <= i && l <= j, -1, 0], {k, 1, xdim}, {l, 1, ydim}]
]
];
(*Construct Expected result matrix*)
Table[rv[i, j] = -1, {i, 1, xdim}, {j, 1, ydim}];
(*Construct Initial State matrix*)
Table[eiv[i, j] = ei[[i, j]], {i, 1, xdim}, {j, 1, ydim}];
(*Now Solve*)
repl = FindInstance[
Flatten[Table[(Sum[a[i, j] b[i, j], {i, 1, xdim}, {j, 1, ydim}][[i]][[j]])
eiv[i, j] == rv[i, j], {i, 1, xdim}, {j, 1, ydim}]],
Flatten[Table[a[i, j], {i, 1, xdim}, {j, 1, ydim}]]][[1]];
Table[c[i, j] = a[i, j] /. repl, {i, 1, xdim}, {j, 1, ydim}];
Print["Result ",xdim ydim-Count[Table[c[i, j], {i, 1, xdim}, {j, 1,ydim}], 0, ydim xdim]];)
When called with your examples (-1 instead of 0)
ei = ({
{1, 1, 1, 1},
{1, 1, 1, 1}
});
f[ei];
ei = ({
{-1, 1},
{-1, 1}
});
f[ei];
ei = {{-1, 1, -1, 1, -1, 1, -1, 1, 1, -1, 1, -1, -1, -1, -1, 1, -1,
1, -1, 1, -1, 1, -1, 1}};
f[ei];
ei = ({
{-1, -1, -1},
{-1, -1, -1},
{-1, -1, 1},
{-1, 1, 1}
});
f[ei];
The result is
Result: 1
Result: 2
Result: 20
Result: 6
It solves a 20x20 random problem in 90 seconds on my poor man's laptop.
Basically, you're taking the N+M-1 coins in the right and bottom borders and solving them, then just calling the algorithm recursively on everything else. This is basically what Potatoswatter is saying to do. Below is a very simple recursive algorithm for it.
Solver(Grid[N][M])
    if Grid[N-1][M-1] == Heads
        Flip(Grid, N-1, M-1)
    for each element i from N-2 to 0 inclusive // this is empty if N is 1
        if Grid[i][M-1] == Heads
            Flip(Grid, i, M-1)
    for each element i from M-2 to 0 inclusive // this is empty if M is 1
        if Grid[N-1][i] == Heads
            Flip(Grid, N-1, i)
    if N > 1 and M > 1
        Solver(Grid.ShallowCopy(N-1, M-1))
    return
Note: It probably makes sense to implement Grid.ShallowCopy by just having Solver take arguments for the width and the height of the grid. I only called it Grid.ShallowCopy to indicate that you should not be passing in a copy of the grid, though C++ won't do that with arrays by default anyhow.
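A compact C++ version of this border-peeling recursion might look like the following sketch (it passes the active height and width instead of a ShallowCopy; the grid in main is the last example from the question):
#include <iostream>
#include <vector>

// One move at (R, C): flip every cell in rows 0..R, columns 0..C.
void flip(std::vector<std::vector<int>>& g, int R, int C) {
    for (int r = 0; r <= R; r++)
        for (int c = 0; c <= C; c++)
            g[r][c] ^= 1;
}

// Clear the right column, then the bottom row, of the active n x m sub-grid,
// then recurse on the (n-1) x (m-1) sub-grid. Returns the number of moves used.
int solve(std::vector<std::vector<int>>& g, int n, int m) {
    int moves = 0;
    for (int i = n - 1; i >= 0; i--)              // right column, bottom to top
        if (g[i][m - 1]) { flip(g, i, m - 1); moves++; }
    for (int j = m - 2; j >= 0; j--)              // bottom row, right to left
        if (g[n - 1][j]) { flip(g, n - 1, j); moves++; }
    if (n > 1 && m > 1) moves += solve(g, n - 1, m - 1);
    return moves;
}

int main() {
    std::vector<std::vector<int>> grid = {
        {0, 0, 0},
        {0, 0, 0},
        {0, 0, 1},
        {0, 1, 1},
    };
    std::cout << solve(grid, 4, 3) << "\n"; // 6
}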
An easy criterion for whether rectangle (x, y) must be flipped seems to be: exactly when the number of ones in the 2x2 square with top-left corner (x, y) is odd.
(code in Python)
def flipgame(grid):
    w, h = len(grid[0]), len(grid)
    sol = [[0]*w for y in range(h)]
    for y in range(h-1):
        for x in range(w-1):
            sol[y][x] = grid[y][x] ^ grid[y][x+1] ^ grid[y+1][x] ^ grid[y+1][x+1]
    for y in range(h-1):
        sol[y][w-1] = grid[y][w-1] ^ grid[y+1][w-1]
    for x in range(w-1):
        sol[h-1][x] = grid[h-1][x] ^ grid[h-1][x+1]
    sol[h-1][w-1] = grid[h-1][w-1]
    return sol
The 2D array returned has a 1 in position (x,y) if rectangle(x,y) should be flipped, so the number of ones in it is the answer to your original question.
EDIT: To see why it works:
If we do moves (x,y), (x,y-1), (x-1,y), (x-1,y-1), only square (x,y) is inverted. This leads to the code above. The solution must be optimal, as there are 2^(hw) possible configurations of the board and 2^(hw) possible ways to transform the board (assuming every move can be done 0 or 1 times). In other words, there is only one solution, hence the above produces the optimal one.
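Translated to C++, a sketch that treats out-of-range cells as 0 and returns the move count directly (equivalent to counting the ones in the Python sol array) might look like this:
#include <iostream>
#include <vector>

// Cell (y, x) needs a move exactly when the XOR of the 2x2 block with
// top-left corner (y, x) is 1 (cells outside the grid count as 0).
int minMoves(const std::vector<std::vector<int>>& grid) {
    int h = grid.size(), w = grid[0].size();
    auto at = [&](int y, int x) { return (y < h && x < w) ? grid[y][x] : 0; };
    int moves = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            moves += at(y, x) ^ at(y, x + 1) ^ at(y + 1, x) ^ at(y + 1, x + 1);
    return moves;
}

int main() {
    std::cout << minMoves({{1, 1, 1, 1}, {1, 1, 1, 1}}) << "\n";                  // 1
    std::cout << minMoves({{0, 1}, {0, 1}}) << "\n";                              // 2
    std::cout << minMoves({{0, 0, 0}, {0, 0, 0}, {0, 0, 1}, {0, 1, 1}}) << "\n";  // 6
}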
You could use recursive trials.
You would need at least the move count and to pass a copy of the vector. You'd also want to set a maximum-move cutoff to limit the breadth of the branches coming out of each node of the search tree. Note this is a "brute force" approach.
Your general algorithm structure would be:
const int MAX_FLIPS = 10;
const unsigned int TREE_BREADTH = 10;

int run_recursion(std::vector<std::vector<bool>> my_grid, int flips)
{
    bool found = true;
    int temp_val = -1;
    int result = -1;
    // search the grid with for loops; if a true (heads) is found, set found = false
    ...
    if ( ! found && flips < MAX_FLIPS )
    {
        // flip coins
        for ( unsigned int more_flips = 0; more_flips < TREE_BREADTH; more_flips++ )
        {
            // flip one coin
            ...
            // run recursion
            temp_val = run_recursion(my_grid, flips + 1);
            if ( (result == -1 && temp_val != -1) ||
                 (temp_val != -1 && temp_val < result) )
                result = temp_val;
        }
    }
    return result;
}
...sorry in advance for any typos/minor syntax errors. Wanted to prototype a fast solution for you, not write the full code...
Or, easier still, you could just use a brute force of linear trials. The outer for loop would be the number of trials; the inner for loop would be the flips within a trial. On each iteration you'd flip and check whether you'd succeeded, recycling your success and flip code from above. Success would short-circuit the inner loop. At the end of the inner loop, store the result in the array; if there is still no success after max_moves, store -1. Finally, search for the smallest non-negative value.
A more elegant solution would be to use a multithreading library to start a bunch of threads flipping, and have one thread signal the others when it finds a match; if the match is lower than the number of steps run so far in another thread, that thread exits with failure.
I suggest MPI, but CUDA might win you brownie points as it's hot right now.
Hope that helps, good luck!