How to find N points on an infinite axis so that the sum of distances from M points to their nearest of the N points is smallest? - c++

Consider N houses on a single road. I have M lightpoles, with M < N. The distances between adjacent houses are all different. A lightpole can only be placed at a house, and I have to place all the lightpoles at houses so that the sum of distances from each house to its nearest lightpole is smallest. How can I code this problem?
After a little research I came to know that I have to use dynamic programming for this problem, but I don't know how to approach it.

Here's a naive dynamic program with search space O(n^2 * m). Perhaps others know of another speedup? The recurrence should be clear from the function f in the code.
JavaScript code:
// We can calculate these in O(1)
// by using our prefixes (ps) and
// the formula for a subarray, (j, i),
// reaching for a pole at i:
//
// ps[i] - ps[j-1] - (A[i] - A[j-1]) * j
//
// Examples:
// A: [1,2,5,10]
// ps: [0,1,7,22]
// (2, 3) =>
// 22 - 1 - (10 - 2) * 2
// = 5
// = 10-5
// (1, 3) =>
// 22 - 0 - (10 - 1) * 1
// = 13
// = 10-5 + 10-2
function sumParts(A, j, i, isAssigned){
  let result = 0
  for (let k=j; k<=i; k++){
    if (isAssigned)
      result += Math.min(A[k] - A[j], A[i] - A[k])
    else
      result += A[k] - A[j]
  }
  return result
}

function f(A, ps, i, m, isAssigned){
  if (m == 1 && isAssigned)
    return ps[i]

  const start = m - (isAssigned ? 2 : 1)
  const _m = m - (isAssigned ? 1 : 0)

  let result = Infinity

  for (let j=start; j<i; j++)
    result = Math.min(
      result,
      sumParts(A, j, i, isAssigned)
        + f(A, ps, j, _m, true)
    )

  return result
}

var A = [1, 2, 5, 10]
var m = 2
var ps = [0]

for (let i=1; i<A.length; i++)
  ps[i] = ps[i-1] + (A[i] - A[i-1]) * i

var result = Math.min(
  f(A, ps, A.length - 1, m, true),
  f(A, ps, A.length - 1, m, false))

console.log(`A: ${ JSON.stringify(A) }`)
console.log(`ps: ${ JSON.stringify(ps) }`)
console.log(`m: ${ m }`)
console.log(`Result: ${ result }`)

I got you covered, bud. I will explain the dynamic programming algorithm first, and if you are not able to code it, let me know.
A -> array of points such that A[i] - A[i-1] is the distance between A[i] and A[i-1]; A[0] is the first point. When doing top-down memoization, you have to handle two cases at each house: either you place a light pole at the current house, or you leave it for a lower index. If you place one now, you recurse with one less light pole available and add the sum of distances to the previous houses. The base cases are when you have no light poles left or when you have processed all the houses.
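One common way to make this concrete is a partition DP: each pole serves a contiguous block of houses and, within a block, is best placed at the median house. Below is a minimal bottom-up C++ sketch of that formulation (not literally the memoization described above; the names cost and dp are mine, and the house positions are assumed to be sorted).

#include <iostream>
#include <vector>
#include <algorithm>
#include <limits>
using namespace std;

int main() {
    vector<long long> A = {1, 2, 5, 10};   // house positions, sorted
    int n = A.size(), m = 2;               // m lightpoles

    // prefix[i] = A[0] + ... + A[i-1], so any range sum is O(1)
    vector<long long> prefix(n + 1, 0);
    for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + A[i];

    // cost(l, r): total distance of houses l..r to one pole at their median house
    auto cost = [&](int l, int r) {
        int mid = (l + r) / 2;             // a median index minimizes the sum of |A[k] - pole|
        long long left  = A[mid] * (mid - l + 1) - (prefix[mid + 1] - prefix[l]);
        long long right = (prefix[r + 1] - prefix[mid + 1]) - A[mid] * (r - mid);
        return left + right;
    };

    const long long INF = numeric_limits<long long>::max() / 2;
    // dp[j][i] = minimum cost of covering houses 0..i-1 with j poles
    vector<vector<long long>> dp(m + 1, vector<long long>(n + 1, INF));
    dp[0][0] = 0;
    for (int j = 1; j <= m; j++)
        for (int i = 1; i <= n; i++)
            for (int k = 0; k < i; k++)    // houses k..i-1 are served by the j-th pole
                if (dp[j - 1][k] < INF)
                    dp[j][i] = min(dp[j][i], dp[j - 1][k] + cost(k, i - 1));

    cout << dp[m][n] << "\n";              // prints 4 for this example
}

This runs in O(n^2 * m), the same complexity as the JavaScript answer above.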


My segment tree update function doesn't work properly

The problem:
In this task, you need to write a regular segment tree for the sum.

Input
The first line contains two integers n and m (1 ≤ n, m ≤ 100000), the size of the array and the number of operations. The next line contains n numbers a_i, the initial state of the array (0 ≤ a_i ≤ 10^9). The following lines contain the description of the operations. The description of each operation is as follows:
1 i v: set the element with index i to v (0 ≤ i < n, 0 ≤ v ≤ 10^9).
2 l r: calculate the sum of elements with indices from l to r-1 (0 ≤ l < r ≤ n).

Output
For each operation of the second type print the corresponding sum.
I'm trying to implement a segment tree, and all my functions work properly except for the update function:
void update(int i, int delta, int v = 0, int tl = 0, int tr = n - 1)
{
    if (tl == i && tr == i)
        t[v] += delta;
    else if (tl <= i && i <= tr)
    {
        t[v] += delta;
        int m = (tl + tr) / 2;
        int left = 2 * v + 1;
        int right = left + 1;
        update(i, delta, left, tl, m);
        update(i, delta, right, m + 1, tr);
    }
}
I got WA on the segment tree problem, while with this update function I got accepted:
void update(int i, int new_value, int v = 0, int tl = 0, int tr = n - 1)
{
    if (tl == i && tr == i)
        t[v] = new_value;
    else if (tl <= i && i <= tr)
    {
        int m = (tl + tr) / 2;
        int left = 2 * v + 1;
        int right = left + 1;
        update(i, new_value, left, tl, m);
        update(i, new_value, right, m + 1, tr);
        t[v] = t[left] + t[right];
    }
}
I really don't understand why my first version is not working. I thought maybe I had some kind of overflow problem and decided to change everything to long long, but it didn't help, so the problem is in the update algorithm itself. But it seems OK to me: for every segment that includes i, I add some delta to the sum of that segment (the delta can be negative; for example, if I had the number 5 and decided to change it to 3, the delta will be -2). So what's the problem? I really don't see it :(
There are 2 problems with your first solution:
First, the question expects you to do a point update. The condition (tl == i && tr == i) checks whether you are at a leaf node of the tree.
At a leaf node you have to actually replace the value instead of adding something to it, which is what you do in the second solution.
Secondly, you can only update a non-leaf node after all of its child nodes have been updated. Updating t[v] before making the recursive calls will give a wrong answer in any case.
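For completeness, the post only shows update(); a range-sum query for the same node layout (root at v = 0, children at 2*v+1 and 2*v+2, node range [tl, tr]) would typically look like the sketch below, assuming t and n are the same globals used by the update functions above.

long long query(int l, int r, int v = 0, int tl = 0, int tr = n - 1)
{
    if (r < tl || tr < l)            // node range is disjoint from [l, r]
        return 0;
    if (l <= tl && tr <= r)          // node range is fully inside [l, r]
        return t[v];
    int m = (tl + tr) / 2;
    return query(l, r, 2 * v + 1, tl, m) +
           query(l, r, 2 * v + 2, m + 1, tr);
}

Note that the problem statement's second operation sums a half-open range [l, r-1], so the caller would invoke query(l, r - 1).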

Using series to approximate log(2)

double k = 0;
int l = 1;
double digits = pow(0.1, 5);
do
{
    k += (pow(-1, l - 1) / l);
    l++;
} while ((log(2) - k) >= digits);
I'm trying to write a little program based on an example I saw that uses the series Σ_{l=1}^{∞} (-1)^(l-1)/l to estimate log(2).
It's supposed to be a guess-refinement thing where each iteration gets closer and closer to the right value, until so many digits match.
The above is what I tried, but it's not coming out right. After messing with it for quite a while I can't figure out where I'm going wrong.
I assume that you are trying to estimate the natural logarithm of 2 by its Taylor series expansion:
ln(x) = Σ_{n=1}^{∞} ((-1)^(n+1) / n) · (x - 1)^n
One of the problems with your code is the condition chosen to stop the iterations at the specified precision:
do { ... } while((log(2)-k)>=digits);
Besides using log(2) directly (aren't you supposed to find it out instead of using a library function?), at the second iteration (and at every other even iteration) log(2) - k becomes negative (-0.3068...), ending the loop.
A possible (but not optimal) fix could be to use std::abs(log(2) - k) instead, or to end the loop when the absolute value of 1.0 / l (which is the difference between two consecutive iterations) is small enough.
Also, using pow(-1, l - 1) to calculate the sequence 1, -1, 1, -1, ... is really a waste, especially in a series with such a slow convergence rate.
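One way to apply both suggestions to the original loop (the variable names term and sign are just one choice): alternate the sign with a plain variable instead of pow(), and stop when the last added term drops below the desired precision.

#include <cmath>
#include <iostream>

int main()
{
    double k = 0.0;
    double sign = 1.0;                  // replaces pow(-1, l - 1)
    double digits = std::pow(0.1, 5);   // desired precision, as in the question
    int l = 1;
    double term;
    do {
        term = sign / l;                // current series term (-1)^(l-1) / l
        k += term;
        sign = -sign;
        ++l;
    } while (std::abs(term) >= digits);
    std::cout << k << '\n';             // slowly approaches ln(2) ≈ 0.693147
}

Because this is an alternating series with decreasing terms, the error after stopping is bounded by the first omitted term, but it still takes on the order of 100000 iterations, which is why the series below is preferable.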
A more efficient series (see here) is:
ln(x) = 2 · Σ_{n=0}^{∞} (1 / (2n + 1)) · ((x - 1) / (x + 1))^(2n+1)
You can estimate it without using pow:
double x = 2.0;    // I want to calculate ln(2)
int n = 1;
double eps = 0.00001,
       kpow = (x - 1.0) / (x + 1.0),
       kpow2 = kpow * kpow,
       dk,
       k = 2 * kpow;

do {
    n += 2;
    kpow *= kpow2;
    dk = 2 * kpow / n;
    k += dk;
} while ( std::abs(dk) >= eps );

Efficient C/C++ algorithm on 2-dimensional max-sum window

I have a c[N][M] matrix where I apply a max-sum operation over a (K+1)² window. I am trying to reduce the complexity of the naive algorithm.
In particular, here's my code snippet in C++:
int N, M, K;
std::cin >> N >> M >> K;
std::pair< unsigned, unsigned > opt[N][M];
unsigned c[N][M];

// Read values for c[i][j]
// Initialize all opt[i][j] at (0,0).

for ( int i = 0; i < N; i ++ ) {
    for ( int j = 0; j < M ; j ++ ) {
        unsigned max = 0;
        int posX = i, posY = j;
        for ( int ii = i; (ii >= i - K) && (ii >= 0); ii -- ) {
            for ( int jj = j; (jj >= j - K) && (jj >= 0); jj -- ) {
                // Ignore the (i,j) position
                if (( ii == i ) && ( jj == j )) {
                    continue;
                }
                if ( opt[ii][jj].second > max ) {
                    max = opt[ii][jj].second;
                    posX = ii;
                    posY = jj;
                }
            }
        }
        opt[i][j].first = opt[posX][posY].second;
        opt[i][j].second = c[i][j] + opt[posX][posY].first;
    }
}
The goal of the algorithm is to compute opt[N-1][M-1].
Example: for N = 4, M = 4, K = 2 and:
c[N][M] = 4 1 1 2
          6 1 1 1
          1 2 5 8
          1 1 8 0
... the result should be opt[N-1][M-1] = {14, 11}.
The running complexity of this snippet is however O(N M K²). My goal is to reduce the running time complexity. I have already seen posts like this, but it appears that my "filter" is not separable, probably because of the sum operation.
More information (optional): this is essentially an algorithm which develops the optimal strategy in a "game" where:
Two players lead a single team in a N × M dungeon.
Each position of the dungeon has c[i][j] gold coins.
Starting position: (N-1,M-1) where c[N-1][M-1] = 0.
The active player chooses the next position to move the team to, from position (x,y).
The next position can be any of (x-i, y-j), i <= K, j <= K, i+j > 0. In other words, they can move only left and/or up, up to a step K per direction.
The player who just moved the team gets the coins in the new position.
The active player alternates each turn.
The game ends when the team reaches (0,0).
Optimal strategy for both players: maximize their own sum of gold coins, if they know that the opponent is following the same strategy.
Thus, opt[i][j].first represents the coins of the player who will now move from (i,j) to another position. opt[i][j].second represents the coins of the opponent.
Here is an O(N * M) solution.
Let's fix the lower row r. If the maximum over all rows between r - K and r is known for every column, the problem reduces to the well-known sliding window maximum problem, so the answer for a fixed row can be computed in O(M) time.
Let's iterate over all rows in increasing order. For each column, computing the maximum over all rows between r - K and r is a sliding window maximum problem, too. Processing each column takes O(N) time over all rows.
The total time complexity is O(N * M).
However, there is one issue with this solution: it does not exclude the (i, j) element. It is possible to fix this by running the algorithm described above twice (with K * (K + 1) and (K + 1) * K windows) and then merging the results (a (K + 1) * (K + 1) square without a corner is the union of two rectangles of size K * (K + 1) and (K + 1) * K).
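For reference, here is a minimal C++ sketch of that sliding window maximum building block (the names are illustrative): for each index i it computes the maximum of v[max(0, i-K)..i], in O(n) total, using a deque of candidate indices whose values are kept non-increasing.

#include <deque>
#include <iostream>
#include <vector>

std::vector<unsigned> windowMax(const std::vector<unsigned>& v, int K) {
    std::deque<int> d;                       // candidate indices, values non-increasing
    std::vector<unsigned> out(v.size());
    for (int i = 0; i < (int)v.size(); ++i) {
        if (!d.empty() && d.front() < i - K)
            d.pop_front();                   // drop indices that left the window
        while (!d.empty() && v[d.back()] <= v[i])
            d.pop_back();                    // drop candidates dominated by v[i]
        d.push_back(i);
        out[i] = v[d.front()];               // the front always holds the window maximum
    }
    return out;
}

int main() {
    std::vector<unsigned> v = {4, 1, 6, 2, 5};
    for (unsigned x : windowMax(v, 2)) std::cout << x << ' ';   // prints: 4 4 6 6 6
}

Applying this once along each column (over rows) and then once along each row (over the column maxima) gives the rectangular window maxima the answer describes.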

Cut rectangle in minimum number of squares

I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of paper pieces cut such that all are squares?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2; then the answer is 2, as the minimum number of squares that can be cut is 2 (the paper is cut horizontally along the smaller side in the middle).
My code:
cin >> n >> m;
int N = min(n, m);
int M = max(n, m);
int ans = 0;
while (N != M) {
    ans++;
    int x = M - N;
    int y = N;
    M = max(x, y);
    N = min(x, y);
}
if (N == M && M != 0)
    ans++;
But I don't see what's wrong with this approach, as it's giving me a wrong answer.
I think both the DP and greedy solutions are not optimal. Here is the counterexample for the DP solution:
Consider the rectangle of size 13 X 11. DP solution gives 8 as the answer. But the optimal solution has only 6 squares.
This thread has many counter examples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[100][100] for the cache data structure since your input size is limited. Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any inputs).
Possible C++ implementation: http://ideone.com/HbiFOH
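In that spirit, here is a minimal sketch of such a cached implementation (not necessarily the same as the one behind the link; the function name min_squares and the hard-coded 101x101 cache are just one choice given the 1..100 limits):

#include <algorithm>
#include <iostream>

int min_squares(int m, int n) {
    static int cache[101][101] = {};        // static local, zero-initialized; 0 means "not cached"
    if (m == n) return 1;                   // already a square
    if (cache[m][n]) return cache[m][n];
    int best = m * n;                       // upper bound: cut everything into 1x1 squares
    for (int i = 1; i <= n / 2; ++i)        // vertical cuts
        best = std::min(best, min_squares(m, i) + min_squares(m, n - i));
    for (int i = 1; i <= m / 2; ++i)        // horizontal cuts
        best = std::min(best, min_squares(i, n) + min_squares(m - i, n));
    return cache[m][n] = best;
}

int main() {
    std::cout << min_squares(2, 1) << "\n"; // 2
    std::cout << min_squares(6, 5) << "\n"; // 5
}

Keep in mind the caveat from the other answer: this guillotine-cut DP is not always optimal (e.g. for 13 x 11).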
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assuming we have a rectangle with width N and height M:
If N == M, it is already a square and nothing needs to be done.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), so it can be solved recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;

int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;
    int result = n * m;
    for (int i = 1; i <= n/2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m/2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);
    return dp[n][m] = result;
Here is a greedy implementation. As @David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is the best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m/n
        m, n = n, m % n
    return cuts

print greedy(2, 7)
Here is a DP attempt in Python:
import sys

def cache(f):
    db = {}
    def wrap(*args):
        key = str(args)
        if key not in db:
            db[key] = f(*args)
        return db[key]
    return wrap

@cache
def squares(m, n):
    if m == n:
        return 1
    xcuts = sys.maxint
    ycuts = sys.maxint
    x, y = 1, 1
    while x * 2 <= n:
        xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
        x += 1
    while y * 2 <= m:
        ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
        y += 1
    return min(xcuts, ycuts)
This is essentially the classic integer or 0-1 knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack

Enumeration all possible matrices with constraints

I'm attempting to enumerate all possible matrices of size r by r with a few constraints.
Row and column sums must be in non-ascending order.
Starting from the top left element down the main diagonal, each row and column subset from that entry must be made up of combinations with replacements from 0 to the value in that upper left entry (inclusive).
The row and column sums must all be less than or equal to a predetermined n value.
The main diagonal must be in non-ascending order.
An important note is that I need every combination to be stored somewhere or, if written in C++, to be run through another few functions after finding them.
r and n are values that range from 2 to, say, 100.
I've tried a recursive way to do this, along with an iterative one, but keep getting hung up on keeping track of column and row sums, along with all the data in a manageable sense.
I have attached my most recent attempt (which is far from complete), but it may give you an idea of what is going on.
The function first_section() builds row zero and column zero correctly, but other than that I don't have anything successful.
I need more than a push to get this going; the logic is a pain in the butt and is swallowing me whole. I need to have this written in either Python or C++.
import numpy as np
from itertools import combinations_with_replacement

global r
global n
r = 4
n = 8
global myarray
myarray = np.zeros((r,r))
global arraysums
arraysums = np.zeros((r,2))
printing = False  # debug output flag used by the prints below

def first_section():
    bigData = []
    myarray = np.zeros((r,r))
    arraysums = np.zeros((r,2))
    for i in reversed(range(1,n+1)):
        myarray[0,0] = i
        stuff = []
        stuff = list(combinations_with_replacement(range(i),r-1))
        for j in range(len(stuff)):
            myarray[0,1:] = list(reversed(stuff[j]))
            arraysums[0,0] = sum(myarray[0,:])
            for k in range(len(stuff)):
                myarray[1:,0] = list(reversed(stuff[k]))
                arraysums[0,1] = sum(myarray[:,0])
                if arraysums.max() > n:
                    break
                bigData.append(np.hstack((myarray[0,:],myarray[1:,0])))
                if printing: print 'myarray \n%s' %(myarray)
    return bigData

def one_more_section(bigData,index):
    newData = []
    for item in bigData:
        if printing: print 'item = %s' %(item)
        upperbound = int(item[index-1])  # will need to have logic worked out
        if printing: print 'upperbound = %s' % (upperbound)
        for i in reversed(range(1,upperbound+1)):
            myarray[index,index] = i
            stuff = []
            stuff = list(combinations_with_replacement(range(i),r-1))
            for j in range(len(stuff)):
                myarray[index,index+1:] = list(reversed(stuff[j]))
                arraysums[index,0] = sum(myarray[index,:])
                for k in range(len(stuff)):
                    myarray[index+1:,index] = list(reversed(stuff[k]))
                    arraysums[index,1] = sum(myarray[:,index])
                    if arraysums.max() > n:
                        break
                    if printing: print 'index = %s' %(index)
                    newData.append(np.hstack((myarray[index,index:],myarray[index+1:,index])))
                    if printing: print 'myarray \n%s' %(myarray)
    return newData

bigData = first_section()
bigData = one_more_section(bigData,1)
A possible matrix could look like this:
r = 4, n >= 6
|3 2 0 0| = 5
|3 2 0 0| = 5
|0 0 2 1| = 3
|0 0 0 1| = 1
6 4 2 2
Here's a solution in numpy and python 2.7. Note that all the rows and columns are in non-increasing order, because you only specified that they should be combinations with replacement, and not their sortedness (and generating combinations is the simplest with sorted lists).
The code could be optimized somewhat by keeping row and column sums around as arguments instead of recomputing them.
import numpy as np

r = 2     # matrix dimension
maxs = 5  # maximum sum of row/column

def generate(r, maxs):
    # We create an extra row and column for the starting "dummy" values.
    # Filling in the matrix becomes much simpler when we do not have to treat cells
    # with one or two zero indices in a special way. Thus, we start iteration from
    # the (1, 1) index.
    m = np.zeros((r + 1, r + 1), dtype = np.int32)
    m[0] = m[:,0] = maxs + 1

    def go(n, i, j):
        # If we completely filled the matrix, yield a copy of the non-dummy parts.
        if (i, j) == (r, r):
            yield m[1:, 1:].copy()
            return
        # We compute the next indices in row major order (the choice is arbitrary).
        (i2, j2) = (i + 1, 1) if j == r else (i, j + 1)
        # Computing the maximum possible value for the current cell.
        max_val = min(
            maxs - m[i, 1:].sum(),
            maxs - m[1:, j].sum(),
            m[i, j-1],
            m[i-1, j])
        for n2 in xrange(max_val, -1, -1):
            m[i, j] = n2
            for matrix in go(n2, i2, j2):
                yield matrix

    return go(maxs, 1, 1)  # note that this is a generator object

# testing
for matrix in generate(r, maxs):
    print
    print matrix
If you'd like to have all the valid permutations in the rows and columns, this code below should work.
def generate(r, maxs):
    m = np.zeros((r + 1, r + 1), dtype = np.int32)
    rows = [0]*(r+1)  # We avoid recomputing row/col sums on each cell.
    cols = [0]*(r+1)
    rows[0] = cols[0] = m[0, 0] = maxs

    def go(i, j):
        if (i, j) == (r, r):
            yield m[1:, 1:].copy()
            return
        (i2, j2) = (i + 1, 1) if j == r else (i, j + 1)
        max_val = min(rows[i-1] - rows[i], cols[j-1] - cols[j])
        if i == j:
            max_val = min(max_val, m[i-1, j-1])
        if (i, j) != (1, 1):
            max_val = min(max_val, m[1, 1])
        for n in xrange(max_val, -1, -1):
            m[i, j] = n
            rows[i] += n
            cols[j] += n
            for matrix in go(i2, j2):
                yield matrix
            rows[i] -= n
            cols[j] -= n

    return go(1, 1)