Hello, I'm having trouble with a little program I'm trying to write. The problem: given a matrix of any size (let's say 4x4 for this example), find the largest product of n numbers in a row (let's say n = 3). The n numbers in a row can be horizontal, vertical, or diagonal. So here's a matrix:
1 1 2 5
1 5 2 4
1 7 2 3
1 8 2 1
If n were 3, the largest product would be 280 (5*7*8). I have the matrix loaded into a 2D vector. I'm not too picky about how the program works (brute force is fine). So far I know I'll need at least two nested for loops to go through each starting location of the matrix, but I haven't been able to get the correct answer. Any advice will help, thank you.
Here is a version that finds the max product in rows using rolling multiplication to save some work. Rolling means that we don't have to multiply n values for every window; after the first window, each new product needs just one multiplication and one division:
if (currN == N) { // compute full product the first time
while (currn) {
product *= (*it3++);
--currn;
}
} else { // rolling computation: multiply in the new value, divide out the old one
product = product * (*(it3 + n - 1)) / (*(it3 - 1)); // multiply first to avoid integer truncation
it3 += n;
}
It is up to you to complete this so that it also handles columns:
populate matrix:
#include <cstdio>
#include <vector>
#include <algorithm>
#include <iterator>
#include <iostream>
using namespace std;
typedef vector< vector< int> > Matrix;
typedef Matrix::iterator outIt;
typedef vector< int>::iterator inIt;
void fillMatrix( Matrix& matrix) {
outIt it = matrix.begin();
(*it).push_back( 1);
(*it).push_back( 1);
(*it).push_back( 2);
(*it).push_back( 5);
++it;
(*it).push_back( 1);
(*it).push_back( 5);
(*it).push_back( 2);
(*it).push_back( 4);
++it;
(*it).push_back( 1);
(*it).push_back( 7);
(*it).push_back( 2);
(*it).push_back( 3);
++it;
(*it).push_back( 1);
(*it).push_back( 8);
(*it).push_back( 2);
(*it).push_back( 1);
}
print matrix and find max product in rows:
void printMatrix( Matrix& matrix) {
outIt it = matrix.begin();
while ( it != matrix.end()) {
inIt it2 = (*it).begin();
while ( it2 != (*it).end()) {
printf( "%d ", *it2);
++it2;
}
printf( "\n");
++it;
}
}
/**
* Largest product in a row using rolling multiplication
* @param matrix
* @param n number of factors
* @param v factors of the largest product
* @return largest product
*/
int largestProductInRow( Matrix& matrix, int n, vector< int>& v) {
if ( n > matrix.size()) return -1;
int res = 0;
int N = matrix.size() - n + 1; // number of products in row (or column)
/* search in rows */
outIt it = matrix.begin();
while (it != matrix.end()) {
inIt it2 = (*it).begin();
int currN = N;
int product = 1;
while (currN) { // rolling product calculation
inIt it3 = it2;
int currn = n;
if (currN == N) { // compute full product first time
while (currn) {
product *= (*it3++);
--currn;
}
} else { // rolling computation
product = product * (*(it3 + n - 1)) / (*(it3 - 1)); // multiply first to avoid integer truncation
it3 += n;
}
if (product > res) {
res = product;
copy(it3 - n, it3, v.begin());
}
--currN;
++it2;
}
++it;
}
return res;
}
usage:
int main(int argc, char** argv) {
Matrix matrix( 4, vector< int>());
fillMatrix( matrix);
printMatrix( matrix);
vector< int> v(3);
int res = largestProductInRow( matrix, 3, v);
printf( "res:%d\n", res);
copy( v.begin(), v.end(), ostream_iterator<int>(cout, ","));
return 0;
}
result:
res:42
7,2,3,
RUN SUCCESSFUL (total time: 113ms)
Let's say we have an s x t matrix (s columns and t rows).
int res = 0;
if(s >= n)
{
for (int r = 0; r < t; ++r) // for each row
{
for (int i = 0; i <= s-n; ++i) //moving through the row
{
int mul = m[r][i];
for (int j = 1; j < n; ++j) //calculating product in a row
{
mul *= m[r][i+j];
}
if(mul > res)
{
res = mul;
//save i, j here if needed
}
}
}
}
if(t >= n)
{
for (int c = 0; c < s; ++c) // for each column
{
for (int i = 0; i <= t-n; ++i) //moving through the column
{
int mul = m[i][c];
for (int j = 1; j < n; ++j) //calculating product in a column
{
mul *= m[i+j][c];
}
if(mul > res)
{
res = mul;
//save i, j here if needed
}
}
}
}
If you insist on brute-force, then as you said, you need to iterate over all [x,y],
which will be the starting points of the rows.
From these you can iterate over k adjacent elements in all directions.
You can store the directions as vectors in an array.
This would run in O(k n^2).
For an n x n matrix, looking for k elements in a row, C-like pseudocode would look like this (note there is no bounds checking, for the sake of simplicity):
// define an array of directions as [x,y] unit vectors
// you only need to check in 4 directions, other 4 are the same, just reversed
int dirs[4][2] = {{1,0}, {1,1}, {0,1}, {-1,1}};
// iterate over all starting positions
for (x = 0; x < n; ++x) {
for (y = 0; y < n; ++y) {
// iterate over all directions
for (d = 0; d < 4; ++d) {
result = 1;
// iterate over elements in row starting at [x,y]
// going in direction dirs[d]
for (i = 0; i < k; ++i) {
// multiply current result by the element,
// which is i places far from the beginning [x,y]
// in the direction pointed by dirs[d]
new_x = x + i * dirs[d][0];
new_y = y + i * dirs[d][1];
// you need to check the bounds, i'm not writing it here
// if new_x or new_y are outside of the matrix
// then continue with next direction
result *= matrix[new_x][new_y];
}
if (result > max) {
max = result;
}
}
}
}
A slightly better, less brute-force way would be to start on the boundary of the matrix, pick a direction, and walk in that direction to the opposite side of the matrix, maintaining the product of the last k numbers along the way.
While walking, you update the product by multiplying in the number you just reached and dividing out the number you left k steps ago.
This way, with some bounds checking of course, the product is always the product of the last k numbers, so whenever the current product exceeds the maximum, just set max = product.
This always runs in O(n^2).
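For a single row, the rolling-window idea could look roughly like this (assuming strictly positive entries; a zero inside the window would need special handling, e.g. recomputing the product):
#include <algorithm>
#include <vector>

// max product of k consecutive elements in one row, maintained as a rolling window
long long maxWindowProduct(const std::vector<int>& row, int k) {
    if ((int)row.size() < k) return -1;              // no window of length k
    long long product = 1;
    for (int i = 0; i < k; ++i) product *= row[i];   // first window
    long long best = product;
    for (int i = k; i < (int)row.size(); ++i) {
        product = product * row[i] / row[i - k];     // multiply in the new value, divide out the old
        best = std::max(best, product);
    }
    return best;
}
Applying the same walk to every column and every diagonal covers all directions.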
Related
I need an efficient way to find the sum of the values of all simple paths in a weighted tree. The value of a simple path is defined as the sum of the weights of all edges in that path.
This is my attempt, but it is not working. Please tell me the correct approach.
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;
typedef long long ll;
typedef pair<int, ll> pil;
const int MAXN = 1e5;
int n, color[MAXN + 2];
vector<pil> adj[MAXN + 2];
ll sum1, cnt1[MAXN + 2], cnt[MAXN + 2], res;
void visit(int u, int p)
{
cnt[u] = 1;
cnt1[u] = color[u];
for (int i = 0; i < (int) adj[u].size(); ++i)
{
int v = adj[u][i].first;
ll w = adj[u][i].second;
if (v == p)
continue;
visit(v, u);
ll tmp = cnt1[v] * (n - sum1 - cnt[v] + cnt1[v]);
tmp += (cnt[v] - cnt1[v]) * (sum1 - cnt1[v]);
res += tmp * w;
cnt[u] += cnt[v];
cnt1[u] += cnt1[v];
}
}
int main()
{
scanf("%d", &n);
for (int i = 1; i <= n; ++i)
{
scanf("%d", color + i);
sum1 += color[i];
}
for (int i = 1, u, v; i < n; ++i)
{
scanf("%d %d %lld", &u, &v, &res);
adj[u].push_back(pil(v, res));
adj[v].push_back(pil(u, res));
}
res = 0;
visit(1, -1);
printf("%lld\n", res);
return 0;
}
Below is a simple implementation of what @Arjun Singh explained.
int64_t ans = 0;
int dfs(int node, int parent) {
int cur_subtree_size = 1;
for(int child : adj[node]) {
if(parent != child) {
int child_subtree_size = dfs(child, node);
int64_t contribution_of_cur_edge = (int64_t)child_subtree_size * (N - child_subtree_size) * weight[{node, child}]; // cast first to avoid int overflow
ans += contribution_of_cur_edge;
cur_subtree_size += child_subtree_size;
}
}
return cur_subtree_size;
}
You can calculate the contribution of each edge to the final answer. Let's say an edge connects two components: component1 ---- component2. Then the contribution of this edge to the final answer will be
Vertices(component1) * Vertices(component2) * edge_weight.
The number of vertices in each component can be found easily by running a DFS and calculating the number of vertices in the subtree of each vertex. Let an edge connect vertices u and v, with u the parent of v. Then,
Vertices(v) = number of vertices in the subtree of v = Vertices(component1)
Vertices(component2) = n - Vertices(v)
You can precompute this subtree-size array, so the final time complexity will be O(n).
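For completeness, here is a self-contained sketch of the same idea, with the globals the snippet above assumes (N, adj, weight -- the names are illustrative) spelled out, plus a tiny example tree:
#include <cstdio>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

int N;                                              // number of vertices
std::vector<std::vector<int>> adj;                  // adjacency list
std::map<std::pair<int, int>, int64_t> weight;      // edge weight, stored for both directions
int64_t ans = 0;

int dfs(int node, int parent) {
    int cur_subtree_size = 1;
    for (int child : adj[node]) {
        if (child != parent) {
            int child_subtree_size = dfs(child, node);
            // edge (node, child) lies on child_subtree_size * (N - child_subtree_size) simple paths
            ans += (int64_t)child_subtree_size * (N - child_subtree_size) * weight[{node, child}];
            cur_subtree_size += child_subtree_size;
        }
    }
    return cur_subtree_size;
}

int main() {
    // tiny example: path 1 - 2 - 3 with weights 4 and 5
    N = 3;
    adj.assign(N + 1, std::vector<int>());
    auto addEdge = [](int u, int v, int64_t w) {
        adj[u].push_back(v); adj[v].push_back(u);
        weight[{u, v}] = weight[{v, u}] = w;
    };
    addEdge(1, 2, 4);
    addEdge(2, 3, 5);
    dfs(1, 0);
    printf("%lld\n", (long long)ans);   // edge (1,2): 1*2*4 = 8, edge (2,3): 2*1*5 = 10, total 18
    return 0;
}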
I wrote a piece of code in C++ that works on a huge 3D matrix using boost::multi_array.
For every matrix element I want to sum up its neighborhood within a certain distance dist. Every element is weighted according to its distance to the center. The center element itself should not be included in the sum.
The distance dist is given by the user and can vary. My program does the right calculations but is slow when the matrix gets big. I sometimes have matrices with more than 100000 elements...
So my question is: is there any way to make this computation faster? Maybe by using another library?
The code consists mainly of two functions. In the first function I visit every matrix element and calculate the sum of its neighborhood. The inputMatrix is a 3D boost multi_array:
boost::multi_array<float, 3> inputMatrix = imageMatrix;
T actualElement;
int posActualElement;
for (int depth = 0; depth<inputMatrix.shape()[2]; depth++) {
for (int row = 0; row<inputMatrix.shape()[0]; row++) {
for (int col = 0; col<inputMatrix.shape()[1]; col++) {
indexOfElement[0] = row;
indexOfElement[1] = col;
indexOfElement[2] = depth;
//get actual Element if it is the centre of a whole neighborhood
actualElement = inputMatrix[row][col][depth];
if (!std::isnan(actualElement)) {
//get the sum of the actual neighborhood
sumOfActualNeighborhood = getNeighborhood3D(inputMatrix, indexOfElement);
}
}
}
}
The function getNeighborhood3D looks as follows:
template <class T, size_t R>
T NGTDMFeatures3D<T, R>::getNeighborhood3D(boost::multi_array<T, R> inputMatrix, int *indexOfElement) {
std::vector<T> neighborhood;
T actElement;
float weight;
for (int k = -dist; k<dist + 1; k++) {
for (int i = -dist; i<dist + 1; i++) {
for (int j = -dist; j<dist + 1; j++) {
if (i != 0 || j != 0 || k != 0) {
if (indexOfElement[0] + i>-1 && indexOfElement[0] + i<inputMatrix.shape()[0] && indexOfElement[1] + j>-1 && indexOfElement[1] + j<inputMatrix.shape()[1] && indexOfElement[2] + k>-1 && indexOfElement[2] + k<inputMatrix.shape()[2]) {
actElement = inputMatrix[indexOfElement[0] + i][indexOfElement[1] + j][indexOfElement[2] + k];
if (!std::isnan(actElement)) {
weight = calculateWeight3D(i, j, k, normNGTDM, actualSpacing);
neighborhood.push_back(weight*actElement);
}
}
}
}
}
}
T sum = accumulate(neighborhood.begin(), neighborhood.end(), T(0)); // accumulate into T, not int, to avoid truncation
sum = sum / neighborhood.size();
return sum;
}
I have a shortest path problem:
Given a graph with n vertices, find the shortest path (as in number of edges taken, not edge weight) from vertex 1 to vertex n; if there are multiple such paths, take the one whose sequence of edge weights is lexicographically smallest.
The input consists of n and m, the number of vertices and edges (2 <= n <= 100,000; 1 <= m <= 200,000), followed by the m edges.
There can be multiple edges between two vertices. It's my first time working with C++, coming from Java.
My current input/main looks like this:
ios::sync_with_stdio(false);
int n, m, v1, v2, w;
vector<string> outvec;
while (infile >> n >> m) {
int k = 0;
for (int i = 0; i < m; i++) {
infile >> v1 >> v2 >> w;
//TODO Only add smallest edge between 2 vertices
if (v1 != v2) {
adj[v1].push_back(make_pair(v2, w));
adj[v2].push_back(make_pair(v1, w));
}
}
dijkstra(n + 1);
string outs;
while (n != 1) {
outs.insert(0, to_string(col[n]) + " ");
n = previ[n];
k++;
}
outs = outs.substr(0, outs.length() - 1);
outvec.push_back(to_string(k).append("\n").append(outs).append("\n"));
for (auto& v : adj) {
v.clear();
}
}
Where adj represents the adjacency list, an array of vector<pair<int,int>> to be exact.
After all that, I'm using Dijkstra's algorithm for the shortest path with the two metrics mentioned above, but it's still too slow for the required cases.
My idea was to reduce the number of edges between any two vertices to at most one, keeping only the minimum weight of all parallel edges, so that Dijkstra won't need to traverse every edge between two vertices.
Is there an efficient way to achieve this? And is Dijkstra the way to go here? A rough sketch of what I mean is below.
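Something like this for collapsing parallel edges while reading the input (minEdge is just a placeholder name, not part of my current code):
#include <algorithm>
#include <map>
#include <utility>

// keep only the smallest weight per unordered endpoint pair
std::map<std::pair<int, int>, int> minEdge;

void addEdge(int v1, int v2, int w) {
    if (v1 == v2) return;                   // skip self-loops
    if (v1 > v2) std::swap(v1, v2);         // normalize endpoint order
    auto it = minEdge.find({v1, v2});
    if (it == minEdge.end() || w < it->second)
        minEdge[{v1, v2}] = w;
}
// after all m edges are read, build adj once from minEdge, so Dijkstra
// never sees duplicate edges between the same two vertices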
Since my problem is runtime performance, here is my current implementation of Dijkstra as well:
void dijkstra(int m) {
for (int i = 0; i < m; i++) {
dis[i] = INT_MAX;
col[i] = INT_MAX;
previ[i] = -1;
vis[i] = false;
}
dis[1] = 0;
priority_queue<pair<int, double>, vector<pair<int, double> >, cmp> q;
q.push(make_pair(1, 0));
while (!q.empty()) {
pair<int, double> currPair = q.top();
q.pop();
int currVertex = currPair.first;
double currWeight = currPair.second;
if (vis[currVertex]) {
continue;
}
else if (currVertex == m - 1) {
break;
}
vis[currVertex] = true;
for (int i = 0; i < adj[currVertex].size(); i++) {
int nextVertex = adj[currVertex][i].first;
int nextEdgeCol = adj[currVertex][i].second;
int currEdgeCol = col[nextVertex];
if (!vis[nextVertex]) {
bool improved = false;
if (currWeight + 1 < dis[nextVertex]) {
previ[nextVertex] = currVertex;
dis[nextVertex] = currWeight + 1;
col[nextVertex] = nextEdgeCol;
improved = true;
}
else if (currWeight + 1 == dis[nextVertex] && col[nextVertex] > nextEdgeCol) {
previ[nextVertex] = currVertex;
dis[nextVertex] = currWeight + 1; // was currVertex + 1, a typo
col[nextVertex] = nextEdgeCol;
improved = true;
}
// only push when the entry actually improved; pushing a default-constructed pair was a bug
if (improved) {
q.push(make_pair(nextVertex, dis[nextVertex]));
}
}
}
}
}
For further information on the problem look here.
I wanted to ask how to check whether a group of numbers can be split into subgroups (each subgroup has to have exactly 3 members) such that the sums of the subgroups' members are all equal. How do I check so many combinations?
Example:
int numbers[] = {1, 2, 5, 6, 8, 3, 2, 4, 5};
can be divided into
{1, 5, 6}, {2, 8, 2}, {3, 4, 5}
A recursive approach can be followed, where one keeps two arrays:
An array with the sums of every subgroup.
A boolean array to check whether an element is already taken into some subgroup or not.
You asked for 3 subgroups, i.e. K = 3 in the rest of this post, but keep in mind that when dealing with recursion, base cases have to be taken into account. In this case we will focus on two base cases:
If K is 1, then we already have our answer: the complete array is the only subset, with the required sum.
If N < K, then it is not possible to divide the array into subsets with equal sum, because we can't divide the array into more than N parts.
If the sum of the group is not divisible by K, then it is not possible to divide it; we only proceed if K divides the sum. Our goal then reduces to dividing the group into K subgroups, where the sum of each subgroup is the total sum divided by K.
In the code below, a recursive method tries to add an array element to some subset. If the sum of this subset reaches the required sum, we recurse for the next part; otherwise we backtrack and try a different set of elements. If the number of subsets whose sum reaches the required sum is (K-1), we flag that it is possible to partition the array into K parts with equal sum, because the remaining elements already sum to the required value.
Quoted from here; in your case you would set K = 3, as in the example code.
// C++ program to check whether an array can be
// partitioned into K subsets of equal sum
#include <bits/stdc++.h>
using namespace std;
// Recursive utility method to check K equal sum
// partition of array
/**
arr - given input array
subsetSum - array storing the running sum of each subset
taken - boolean array to check whether an element
is already taken into some subset or not
subset - required sum of each partition (= total sum / K)
K - number of partitions needed
N - total number of elements in the array
curIdx - current subsetSum index
limitIdx - last index from which an array element should
be taken */
bool isKPartitionPossibleRec(int arr[], int subsetSum[], bool taken[],
int subset, int K, int N, int curIdx, int limitIdx)
{
if (subsetSum[curIdx] == subset)
{
/* reaching current index (K - 2) means (K - 1) subsets of equal sum
have been formed; the last partition is already left with sum 'subset' */
if (curIdx == K - 2)
return true;
// recursive call for the next partition
return isKPartitionPossibleRec(arr, subsetSum, taken, subset,
K, N, curIdx + 1, N - 1);
}
// start from limitIdx and include elements into the current partition
for (int i = limitIdx; i >= 0; i--)
{
// if already taken, continue
if (taken[i])
continue;
int tmp = subsetSum[curIdx] + arr[i];
// only include the element and call recursively
// if tmp does not exceed subset
if (tmp <= subset)
{
// mark the element and add it to the current partition sum
taken[i] = true;
subsetSum[curIdx] += arr[i];
bool nxt = isKPartitionPossibleRec(arr, subsetSum, taken,
subset, K, N, curIdx, i - 1);
// after the recursive call, unmark the element and remove it from
// the partition sum
taken[i] = false;
subsetSum[curIdx] -= arr[i];
if (nxt)
return true;
}
}
return false;
}
// Method returns true if arr can be partitioned into K subsets
// with equal sum
bool isKPartitionPossible(int arr[], int N, int K)
{
// If K is 1, then complete array will be our answer
if (K == 1)
return true;
// If the total number of partitions is more than N, then
// division is not possible
if (N < K)
return false;
// if the array sum is not divisible by K then we can't divide the
// array into K partitions
int sum = 0;
for (int i = 0; i < N; i++)
sum += arr[i];
if (sum % K != 0)
return false;
// the sum of each subset should be subset (= sum / K)
int subset = sum / K;
int subsetSum[K];
bool taken[N];
// Initialize sum of each subset from 0
for (int i = 0; i < K; i++)
subsetSum[i] = 0;
// mark all elements as not taken
for (int i = 0; i < N; i++)
taken[i] = false;
// initialize the first subset sum with the last element of the
// array and mark it as taken
subsetSum[0] = arr[N - 1];
taken[N - 1] = true;
if (subset < subsetSum[0])
return false;
// call the recursive method to check the K-partition condition
return isKPartitionPossibleRec(arr, subsetSum, taken,
subset, K, N, 0, N - 1);
}
// Driver code to test above methods
int main()
{
int arr[] = {2, 1, 4, 5, 3, 3};
int N = sizeof(arr) / sizeof(arr[0]);
int K = 3;
if (isKPartitionPossible(arr, N, K))
cout << "Partitions into equal sum is possible.\n";
else
cout << "Partitions into equal sum is not possible.\n";
}
Output:
Partitions into equal sum is possible.
Relevant links: 2 and 3.
You could just do something like this in this particular case (3 x 3):
#include <iostream>
#include <numeric>
#include <vector>

const int COUNT = 9;
bool test(int const (&array)[COUNT], std::vector<std::vector<int>>* result) {
for(int _1=0; _1<COUNT-2; ++_1) {
for(int _2=1; _2<COUNT-1; ++_2) {
if(_2 == _1)
continue;
for(int _3=2; _3<COUNT; ++_3) {
if(_3 == _2 || _3 == _1)
continue;
std::vector<int> chosen1 {array[_1], array[_2], array[_3]};
std::vector<int> rest;
for(int _x = 0; _x < COUNT; ++_x) {
if(_x != _1 && _x != _2 && _x != _3) {
rest.push_back(array[_x]);
}
}
for (int _4 = 0; _4 < COUNT-5; ++_4) {
for (int _5 = 1; _5 < COUNT-4; ++_5) {
if(_5 == _4)
continue;
for (int _6 = 2; _6 < COUNT-3; ++_6) {
if(_6 == _5 || _6 == _4)
continue;
std::vector<int> chosen2 = {rest[_4], rest[_5], rest[_6]};
std::vector<int> chosen3;
for(int _x = 0; _x < COUNT-3; ++_x) {
if(_x != _4 && _x != _5 && _x != _6) {
chosen3.push_back(rest[_x]);
}
}
int total = std::accumulate(chosen1.begin(), chosen1.end(), 0);
if((std::accumulate(chosen2.begin(), chosen2.end(), 0) == total) &&
(std::accumulate(chosen3.begin(), chosen3.end(), 0) == total)) {
*result = {chosen1, chosen2, chosen3};
return true;
}
}
}
}
}
}
}
return false;
}
int main() {
int values[] = {1, 2, 5, 6, 8, 3, 2, 4, 5};
std::vector<std::vector<int>> result;
if(test(values, &result)) {
for(auto& x : result) {
std::cout << "{";
for(auto& y : x) {
std::cout << y << ",";
}
std::cout << "}";
}
std::cout << std::endl;
} else {
std::cout << "not found";
}
}
If you had a longer array (more than 3 x 3), you could use recursion (you could use it in my example too), but that would still be very slow.
Given: a two-dimensional array, and values K and M.
Problem: find the maximum possible sum less than or equal to K using all the rows (i.e., there should be an element from each row), using exactly M elements.
This is a snippet of my program; I am having trouble implementing the conditions for each row and for M.
for (int i = 0 ; i<n ; i++)
for (int s=0; s<M; s++)
for (int j=K;j>=0;j--)
if (dp[s][j] && A[i] + j < K)
dp[s + 1][j + A[i]] = true;
EDIT 1: Rows = M, i.e., one element from each row has to be selected.
EDIT 2: Dynamic programming solution, thanks to @6502
ill ret(V(ill) col[101],ill prec[][101],ill s,ill row,ill m,ill k)
{
if(prec[s][row])
return prec[s][row];
else
{
if(row==m+1)
return s;
ill best=-1;
int j=row;
for(int i=0;i<col[j].size();i++)
{
if(s+col[j][i] <= k)
{
ill x = ret (col,prec,s+col[j][i],row+1,m,k);
if ((best==-1)||(x>best))
best=x;
}
}
prec[s][row]=best;
return best;
}
}
The problem can be solved using dynamic programming by choosing as state the pair (s, row) where s is the current sum and row is the next row we need to include.
The principle of optimality holds because, no matter which choices we made in previous rows, the result depends only on the current sum and the current row index.
In code (Python):
cache = {}
data = [[2, 3, 4],
[2, 3, 4],
[2, 3, 4]]
M = 3
K = 10
def msum(s, row):
try:
return cache[s, row]
except KeyError:
if row == M:
return s
best = None
for v in data[row]:
if s+v <= K:
x = msum(s+v, row+1)
if best is None or x > best:
best = x
cache[s, row] = best
return best
print(msum(0, 0))
The function returns None if no solution exists (i.e. if even taking the smallest value from each row we end up exceeding K).
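If you prefer C++, a minimal sketch of the same memoization could look like this (data, M, and K mirror the Python example; the cache is keyed on (sum, row) and -1 plays the role of None):
#include <iostream>
#include <map>
#include <utility>
#include <vector>

const std::vector<std::vector<int>> data = {{2, 3, 4}, {2, 3, 4}, {2, 3, 4}};
const int M = 3, K = 10;
std::map<std::pair<int, int>, int> cache;   // (sum, row) -> best reachable sum, -1 if none

int msum(int s, int row) {
    if (row == M) return s;
    auto key = std::make_pair(s, row);
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;
    int best = -1;                          // -1 means "no solution", like Python's None
    for (int v : data[row]) {
        if (s + v <= K) {
            int x = msum(s + v, row + 1);
            if (x > best) best = x;
        }
    }
    cache[key] = best;
    return best;
}

int main() {
    std::cout << msum(0, 0) << "\n";        // prints 10 for this data
    return 0;
}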
A brute force approach:
#include <cstddef>
#include <vector>

bool increase(const std::vector<std::vector<int>>& v, std::vector<std::size_t>& it)
{
for (std::size_t i = 0, size = it.size(); i != size; ++i) {
const std::size_t index = size - 1 - i;
++it[index];
if (it[index] >= v[index].size()) {
it[index] = 0;
} else {
return true;
}
}
return false;
}
int sum(const std::vector<std::vector<int>>& v, const std::vector<std::size_t>& it)
{
int res = 0;
for (std::size_t i = 0; i != it.size(); ++i) {
res += v[i][it[i]];
}
return res;
}
int maximum_sum_less_or_equal_to_K(const std::vector<std::vector<int>>& v, int K)
{
std::vector<std::size_t> it(v.size());
int res = K + 1;
do {
int current_sum = sum(v, it);
if (current_sum <= K) {
if (res == K + 1 || res < current_sum) {
res = current_sum;
}
}
} while (increase(v, it));
if (res == K + 1) {
// Handle no solution
}
return res;
}
The vector it holds the index of the currently selected element in each row.
This can be solved using a boolean 2D table. The value of dp[r][s] is set to true if it is possible to generate the sum s using exactly one element from each of the rows 0..r. Using this table, we can compute the next state as
dp[r][s] |= dp[r-1][s - A[r][c]], for 0 <= c < N and A[r][c] <= s <= K
where N is the number of columns (0-based indexing). Finally, return the largest s <= K for which dp[M-1][s] is set.
Following is a bottom-up implementation
// Assuming the input matrix A is M*N, with globals: bool dp[M][K+1], int A[M][N], and constants M, N, K
int maxSum() {
memset(dp, false, sizeof(dp));
//Initialise base row
for (int c = 0; c < N; ++c)
dp[0][A[0][c]] = true;
for ( int r = 1; r < M; ++r ) {
for ( int c = 0; c < N; ++c) {
// For each A[r][c], check for all possible values of sum upto K
for (int sum = 0; sum <= K; ++sum) {
if ( sum-A[r][c] >= 0 && dp[r-1][sum-A[r][c]] )
dp[r][sum] = true;
}
}
}
// Return max possible value <= K
for (int sum = K; sum >= 0; --sum) {
if ( dp[M-1][sum] )
return sum;
}
return 0;
}
Note that the dp values for the current row depend only on the previous row, so a space-optimization trick can be used to solve it with a 1-D (rolling) table.
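A rough sketch of that space optimization, here with two rolling 1-D vectors instead of the full 2-D table (A is passed in explicitly so the snippet is self-contained):
#include <algorithm>
#include <vector>

int maxSumRolling(const std::vector<std::vector<int>>& A, int K) {
    int M = A.size(), N = A[0].size();
    std::vector<bool> cur(K + 1, false), nxt(K + 1, false);
    for (int c = 0; c < N; ++c)                     // base row
        if (A[0][c] <= K) cur[A[0][c]] = true;
    for (int r = 1; r < M; ++r) {
        std::fill(nxt.begin(), nxt.end(), false);
        for (int s = 0; s <= K; ++s)
            if (cur[s])
                for (int c = 0; c < N; ++c)
                    if (s + A[r][c] <= K) nxt[s + A[r][c]] = true;
        cur.swap(nxt);
    }
    for (int s = K; s >= 0; --s)                    // max achievable sum <= K
        if (cur[s]) return s;
    return 0;                                       // no achievable sum <= K
}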