How can I convert this recursive function to an iterative function?
#include <cmath>
int M(int H, int T){
if (H == 0) return T;
if (H + 1 >= T) return pow(2, T) - 1;
return M(H - 1, T - 1) + M(H, T - 1) + 1;
}
Well, it's only a few lines of code, but it's very hard for me to convert it to an iterative function, because it recurses on two variables, and I don't know anything about stacks, so I couldn't do the conversion that way.
My purpose in doing this is speed: this function is too slow. I wanted to use a map to make it faster, but the function involves M, H and T, so I couldn't see how to key a map.
You could use dynamic programming: start from the bottom up, computing M for the small cases (H == 0 and T == 0) first and iterating upward. Here is a link explaining how to do this for Fibonacci numbers, which are quite similar to your problem.
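For example, the bottom-up Fibonacci idea looks like this (a minimal sketch of the linked technique; the function name and types are my own choice):
#include <cstdint>

// Iterative, bottom-up Fibonacci: compute F(0), F(1), ... upward instead of recursing.
uint64_t fib(unsigned n)
{
    uint64_t prev = 0, curr = 1; // F(0) and F(1)
    for (unsigned i = 0; i < n; ++i)
    {
        uint64_t next = prev + curr; // the next value from the two previous ones
        prev = curr;
        curr = next;
    }
    return prev; // after n steps, prev holds F(n)
}
The same trick applies to M(H, T): compute the small (H, T) cells first and build upward, as the answers below do.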
Check this: the recursive and non-recursive versions gave equal results for all inputs I have tried so far. The idea is to keep intermediate results in a matrix, where H is the row index, T is the column index, and the value is M(H,T). By the way, you can fill the matrix once and later just obtain results from it, so lookups will be O(1).
#include <cmath>  // pow
#include <cstdio> // printf

int array[10][10] = {{0}}; // cache: array[H][T] keeps M(H,T); assumes H, T < 10

int MNR(int H, int T)
{
    if (array[H][T])
        return array[H][T];
    for (int i = 0; i <= H; ++i)
    {
        for (int j = 0; j <= T; ++j)
        {
            if (i == 0)
                array[i][j] = j;
            else if (i + 1 > j)
                array[i][j] = pow(2, j) - 1;
            else
                array[i][j] = array[i - 1][j - 1] + array[i][j - 1] + 1;
        }
    }
    return array[H][T];
}
int M(int H, int T)
{
if (H == 0) return T;
if (H + 1 >= T) return pow(2, T) - 1;
return M(H - 1, T - 1) + M(H, T - 1) + 1;
}
int main()
{
printf("%d\n", M(6,3));
printf("%d\n", MNR(6,3));
}
Unless you know the formula for n-th (in your case, (m,n)-th) element of the sequence, the easiest way is to simulate the recursion using a stack.
The code should look like the following:
#include <cmath>
#include <stack>
struct Data
{
public:
Data(int newH, int newT)
: T(newT), H(newH)
{
}
int H;
int T;
};
int M(int H, int T)
{
std::stack<Data> st;
st.push(Data(H, T));
int sum = 0;
while (st.size() > 0)
{
Data top = st.top();
st.pop();
if (top.H == 0)
sum += top.T;
else if (top.H + 1 >= top.T)
sum += pow(2, top.T) - 1;
else
{
st.push(Data(top.H - 1, top.T - 1));
st.push(Data(top.H, top.T - 1));
sum += 1;
}
}
return sum;
}
The main reason why this function is slow is that it has exponential complexity: it keeps recalculating the same members again and again. One possible cure is the memoize pattern (handily explained with examples in C++ here). The idea is to store every result in a structure with quick access (e.g. an array) and, every time you need it again, retrieve the already precomputed result. Of course, this approach is limited by the size of your memory, so it won't work for extremely big numbers...
In your case, we could do something like that (keeping the recursion but memoizing the results):
#include <cmath>
#include <map>
#include <utility>
std::map<std::pair<int,int>,int> MM;
int M(int H, int T){
std::pair<int,int> key = std::make_pair(H,T);
std::map<std::pair<int,int>,int>::iterator found = MM.find(key);
if (found!=MM.end()) return found->second; // skip the calculations if we can
int result = 0;
if (H == 0) result = T;
else if (H + 1 >= T) result = pow(2, T) - 1;
else result = M(H - 1, T - 1) + M(H, T - 1) + 1;
MM[key] = result;
return result;
}
Regarding time complexity, C++ maps are tree maps, so a lookup is of the order of log(N), where N is the size of the map (the number of results already computed). There are also hash maps for C++: std::unordered_map, standard since C++11 (earlier shipped as an extension alongside the STL), as was already mentioned on SO. A hash map promises constant search time on average (the value of the constant is not specified though :) ), so you might also give them a try.
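For instance, a hash-map variant of the memoized function above could look like this (a sketch, not the original code; since std::hash has no specialization for std::pair, the two ints are packed into a single 64-bit key; the names M2 and MM2 are mine):
#include <cmath>
#include <cstdint>
#include <unordered_map>

std::unordered_map<uint64_t, int> MM2; // packed (H,T) -> M(H,T)

int M2(int H, int T){
    uint64_t key = (static_cast<uint64_t>(static_cast<uint32_t>(H)) << 32)
                 | static_cast<uint32_t>(T);
    std::unordered_map<uint64_t, int>::iterator found = MM2.find(key);
    if (found != MM2.end()) return found->second; // average O(1) lookup
    int result = 0;
    if (H == 0) result = T;
    else if (H + 1 >= T) result = pow(2, T) - 1;
    else result = M2(H - 1, T - 1) + M2(H, T - 1) + 1;
    MM2[key] = result;
    return result;
}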
You may calculate it using a one-dimensional array (two rolling rows, in fact). A little theory:
Let F(a,b) = M(a,b), i.e. a plays the role of H and b of T. Then:
1. F(0,b) = b
2. F(a,b) = 2^b - 1, when a+1 >= b
3. F(a,b) = F(a-1,b-1) + F(a,b-1) + 1
Let G(x,y) = F(y,x). Then:
1. G(x,0) = x // RULE (1)
2. G(x,y) = 2^x - 1, when y+1 >= x // RULE (2)
3. G(x,y) = G(x-1,y-1) + G(x-1,y) + 1 // RULE (3) --> this is useful,
// because G(x,y) needs only values G(x-1,?), i.e. if G is a two-dimensional array,
// then calculating row G[x][?] needs only the previous row G[x-1][?],
// so we only need the last two rows of the array.
// Here are some values of G(x,y):
4. G(0,y) = 2^0 - 1 = 0 from (2) rule.
5. G(1,0) = 1 from (1) rule.
6. G(1,y) = 2^1 - 1 = 1, when y > 0, from (2) rule.
G(0,0) = 0, G(0,1) = 0, G(0,2) = 0, G(0,3) = 0 ...
G(1,0) = 1, G(1,1) = 1, G(1,2) = 1, G(1,3) = 1 ...
7. G(2,0) = 2 from (1) rule
8. G(2,1) = 2^2 - 1 = 3 from (2) rule
9. G(2,y) = 2^2 - 1 = 3 when y > 0, from (2) rule.
G(2,0) = 2, G(2,1) = 3, G(2,2) = 3, G(2,3) = 3, ....
10. G(3,0) = 3 from (1) rule
11. G(3,1) = G(2,0) + G(2,1) + 1 = 2 + 3 + 1 = 6 from (3) rule
12. G(3,2) = 2^3 - 1 = 7, from (2) rule
Now, how to calculate this G(x,y)
int M(int H, int T) { return G(T, H); }

int G(int x, int y)
{
    const int MAX_Y = 100; // or something else
    int arr[2][MAX_Y] = {{0}};
    int icurr = 0, inext = 1;
    for (int xi = 0; xi <= x; ++xi) // note <= x: row x itself must be computed too
    {
        for (int yi = 0; yi <= y; ++yi)
        {
            if (yi == 0)
                arr[inext][yi] = xi;                 // rule (1)
            else if (yi + 1 >= xi)
                arr[inext][yi] = (1 << xi) - 1;      // rule (2)
            else
                arr[inext][yi] = arr[icurr][yi-1] + arr[icurr][yi] + 1; // rule (3)
        }
        icurr ^= 1; inext ^= 1; // swap the two rows
    }
    return arr[icurr][y];
}
// Or with some optimization:
#include <algorithm> // std::max

int G(int x, int y)
{
    const int MAX_Y = 100;
    int arr[2][MAX_Y] = {{0}};
    int icurr = 0, inext = 1;
    for (int ix = 0; ix <= x; ++ix) // again note <= x
    {
        arr[inext][0] = ix;                                           // rule (1)
        for (int iy = 1; iy < ix - 1 && iy <= y; ++iy)
            arr[inext][iy] = arr[icurr][iy-1] + arr[icurr][iy] + 1;   // rule (3)
        for (int iy = std::max(1, ix - 1); iy <= y; ++iy)
            arr[inext][iy] = (1 << ix) - 1;                           // rule (2)
        icurr ^= 1; inext ^= 1;
    }
    return arr[icurr][y];
}
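A quick hedged cross-check (my addition; it assumes the recursive function from the question, renamed here to Mrec, and one of the G versions above are in the same file):
#include <cstdio>

int main()
{
    for (int h = 0; h <= 8; ++h)
        for (int t = 0; t <= 8; ++t)
            if (Mrec(h, t) != G(t, h)) // M(H,T) == G(T,H) by construction
                printf("mismatch at H=%d T=%d\n", h, t);
    printf("check done\n");
    return 0;
}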
I have to convert this recursive algorithm into an iterative one:
int alg(int A[], int x, int y, int k){
int val = 0;
if (x <= y){
int z = (x+y)/2;
if(A[z] == k){
val = 1;
}
int a = alg(A,x,z-1,k);
int b;
if(a > val){
b = alg(A,z+1,y,k);
}else{
b = a + val;
}
val += a + b;
}
return val;
}
I tried with a while loop, but I can't figure out how to compute the "a" and "b" variables, so I did this:
int algIterative(int A[], int x, int y, int k){
int val = 0;
while(x <= y){
int z = (x+y)/2;
if(A[z] == k){
val = 1;
}
y = z-1;
}
}
But actually I couldn't figure out what this algorithm does.
My questions are:
What does this algorithm do?
How can I convert it to iterative?
Do I need to use stacks?
Any help will be appreciated.
I am not sure that alg computes anything useful.
It processes the part of the array A between the indexes x and y, and computes a kind of counter.
If the interval is empty, the returned value (val) is 0. Otherwise, if the middle element of this subarray equals k, val is set to 1. Then the values for the left and right subarrays are added and the total is returned. So in a way, it counts the number of k's in the array.
But if the count on the left side turns out not to be larger than val (i.e. a = 0 when val = 0, or a = 0 or 1 when val = 1), the right side is not evaluated at all; instead, b is set to the value on the left plus val.
Removing the recursion might be possible without a stack: if you look at the sequence of subintervals that are traversed, you can reconstruct it from the binary representation of the array length. The result of the function is then the accumulation of partial results collected along a postorder traversal.
If the postorder can be turned into an inorder traversal, this reduces to a single linear pass over A. But this is a little technical.
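To answer the stack question directly: yes, if you keep this exact control flow, you can simulate the recursion with an explicit stack of frames. A hedged sketch (the frame layout and names are mine):
#include <stack>

// Each frame records how far the corresponding call has progressed:
// stage 0 = body not started, 1 = waiting for the left call, 2 = waiting for the right call.
struct Frame {
    int x, y;
    int stage;
    int val, a;
};

int algWithStack(int A[], int x, int y, int k){
    std::stack<Frame> st;
    st.push(Frame{x, y, 0, 0, 0});
    int ret = 0; // value "returned" by the most recently popped frame
    while (!st.empty()){
        Frame &f = st.top();
        if (f.stage == 0){
            if (f.x > f.y) { ret = 0; st.pop(); continue; } // base case: empty interval
            int z = (f.x + f.y) / 2;
            f.val = (A[z] == k) ? 1 : 0;
            f.stage = 1;
            st.push(Frame{f.x, z - 1, 0, 0, 0}); // left recursive call
        } else if (f.stage == 1){
            f.a = ret; // result of the left call
            int z = (f.x + f.y) / 2;
            if (f.a > f.val){
                f.stage = 2;
                st.push(Frame{z + 1, f.y, 0, 0, 0}); // right recursive call
            } else {
                ret = f.val + f.a + (f.a + f.val); // b = a + val; val += a + b
                st.pop();
            }
        } else {
            ret = f.val + f.a + ret; // ret held the result of the right call (b)
            st.pop();
        }
    }
    return ret;
}
But as the next answer shows, a bottom-up table avoids the explicit stack entirely.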
A simple way could be something like this, with the aid of a two-dimensional array:
int algDP(int[] A, int k) {
    int n = A.length;
    int[][] dp = new int[n][n];
    for (int i = n - 1; i >= 0; i--) {
        for (int j = i; j < n; j++) {
            // This part is almost the same as the recursive version.
            int z = (i + j) / 2;
            int val = 0;
            if (A[z] == k) {
                val = 1;
            }
            int a = z > i ? dp[i][z - 1] : 0;
            int b;
            if (a > val) {
                b = (z + 1 <= j) ? dp[z + 1][j] : 0;
            } else {
                b = a + val;
            }
            val += a + b;
            dp[i][j] = val;
        }
    }
    return dp[0][n - 1];
}
Explanation:
Notice that i is decreasing and j is increasing, so when dp[i][j] is calculated, the values dp[i][z - 1] (with z - 1 < j) and dp[z + 1][j] (with z + 1 > i) have already been populated.
I tried this Codility test: MinAbsSum.
https://codility.com/programmers/lessons/17-dynamic_programming/min_abs_sum/
I solved the problem by searching the whole tree of possibilities. The results were OK, but my solution failed due to a timeout for large inputs; in other words, the time complexity was not as good as expected. Since it explores two branches per element, it is exponential in the input size. But this coding test is in the "Dynamic Programming" section, so there must be some way to improve it. I tried summing the whole set first and then using that information, but there is always something missing in my solution. Does anybody have an idea how to improve my solution using DP?
#include <cstdlib>   // abs
#include <algorithm> // min
#include <vector>
using namespace std;
int sum(vector<int>& A, size_t i, int s)
{
if (i == A.size())
return s;
int tmpl = s + A[i];
int tmpr = s - A[i];
return min (abs(sum(A, i+1, tmpl)), abs(sum(A, i+1, tmpr)));
}
int solution(vector<int> &A) {
return sum(A, 0, 0);
}
I could not solve it. But here's the official answer.
Quoting it:
Notice that the range of numbers is quite small (maximum 100). Hence,
there must be a lot of duplicated numbers. Let count[i] denote the
number of occurrences of the value i. We can process all occurrences
of the same value at once. First we calculate values count[i] Then we
create array dp such that:
dp[j] = −1 if we cannot get the sum j,
dp[j] >= 0 if we can get sum j.
Initially, dp[j] = -1 for all of j (except dp[0] = 0). Then we scan
through all the values a appearing in A; we consider all a such
that count[a]>0. For every such a we update dp that dp[j] denotes
how many values a remain (maximally) after achieving sum j. Note
that if the previous value at dp[j] >= 0 then we can set dp[j] =
count[a] as no value a is needed to obtain the sum j. Otherwise we
must obtain sum j-a first and then use a number a to get sum j. In
such a situation dp[j] = dp[j-a]-1. Using this algorithm, we can
mark all the sum values and choose the best one (closest to half of S,
the sum of abs of A).
def MinAbsSum(A):
N = len(A)
M = 0
for i in range(N):
A[i] = abs(A[i])
M = max(A[i], M)
S = sum(A)
count = [0] * (M + 1)
for i in range(N):
count[A[i]] += 1
dp = [-1] * (S + 1)
dp[0] = 0
for a in range(1, M + 1):
if count[a] > 0:
for j in range(S):
if dp[j] >= 0:
dp[j] = count[a]
elif (j >= a and dp[j - a] > 0):
dp[j] = dp[j - a] - 1
result = S
for i in range(S // 2 + 1):
if dp[i] >= 0:
result = min(result, S - 2 * i)
return result
(note that since the final iteration only considers sums up until S // 2 + 1, we can save some space and time by only creating a DP Cache up until that value as well)
The Java answer provided by fladam returns wrong result for input [2, 3, 2, 2, 3], although it gets 100% score.
Java Solution
import java.util.Arrays;
public class MinAbsSum{
static int[] dp;
public static void main(String args[]) {
int[] array = {1, 5, 2, -2};
System.out.println(findMinAbsSum(array));
}
public static int findMinAbsSum(int[] A) {
int arrayLength = A.length;
int M = 0;
for (int i = 0; i < arrayLength; i++) {
A[i] = Math.abs(A[i]);
M = Math.max(A[i], M);
}
int S = sum(A);
dp = new int[S + 1];
int[] count = new int[M + 1];
for (int i = 0; i < arrayLength; i++) {
count[A[i]] += 1;
}
Arrays.fill(dp, -1);
dp[0] = 0;
for (int i = 1; i < M + 1; i++) {
if (count[i] > 0) {
for(int j = 0; j < S; j++) {
if (dp[j] >= 0) {
dp[j] = count[i];
} else if (j >= i && dp[j - i] > 0) {
dp[j] = dp[j - i] - 1;
}
}
}
}
int result = S;
for (int i = 0; i < Math.floor(S / 2) + 1; i++) {
if (dp[i] >= 0) {
result = Math.min(result, S - 2 * i);
}
}
return result;
}
public static int sum(int[] array) {
int sum = 0;
for(int i : array) {
sum += i;
}
return sum;
}
}
I invented another solution, better than the previous one. I do not use recursion any more.
This solution works OK (all logical tests passed), and also passed some of the performance tests, but not all. How else can I improve it?
#include <cstdlib> // abs
#include <vector>
#include <set>
using namespace std;
int solution(vector<int> &A) {
if (A.size() == 0) return 0;
set<int> sums, tmpSums;
sums.insert(abs(A[0]));
for (auto it = begin(A) + 1; it != end(A); ++it)
{
for (auto s : sums)
{
tmpSums.insert(abs(s + abs(*it)));
tmpSums.insert(abs(s - abs(*it)));
}
sums = tmpSums;
tmpSums.clear();
}
return *sums.begin();
}
This solution (in Java) scored 100% for both (correctness and performance)
public int solution(int[] a){
if (a.length == 0) return 0;
if (a.length == 1) return a[0];
int sum = 0;
for (int i=0;i<a.length;i++){
sum += Math.abs(a[i]);
}
int[] indices = new int[a.length];
indices[0] = 0;
int half = sum/2;
int localSum = Math.abs(a[0]);
int minLocalSum = Integer.MAX_VALUE;
int placeIndex = 1;
for (int i=1;i<a.length;i++){
if (localSum<half){
if (Math.abs(2*minLocalSum-sum) > Math.abs(2*localSum - sum))
minLocalSum = localSum;
localSum += Math.abs(a[i]);
indices[placeIndex++] = i;
}else{
if (localSum == half)
return Math.abs(2*half - sum);
if (Math.abs(2*minLocalSum-sum) > Math.abs(2*localSum - sum))
minLocalSum = localSum;
if (placeIndex > 1) {
localSum -= Math.abs(a[indices[placeIndex--]]);
i = indices[placeIndex];
}
}
}
return (Math.abs(2*minLocalSum - sum));
}
This solution treats all elements as if they were positive numbers, and it tries to get as close as it can to half the sum of all elements (in that case we know the sum of the remaining elements will be the same distance from the half, so the absolute sum will be the minimum possible).
It does so by starting with the first element and successively adding others to the "local" sum (recording the indices of the elements used) until it reaches a sum x >= sumAll/2. If that x equals sumAll/2, we have an optimal solution. If not, we step back in the indices array and continue picking other elements from where the last iteration at that position ended. The result is the "local" sum with abs((sumAll - sum) - sum) closest to 0.
fixed solution:
public static int solution(int[] a){
if (a.length == 0) return 0;
if (a.length == 1) return a[0];
int sum = 0;
for (int i=0;i<a.length;i++) {
a[i] = Math.abs(a[i]);
sum += a[i];
}
Arrays.sort(a);
int[] arr = a;
int[] arrRev = new int[arr.length];
int minRes = Integer.MAX_VALUE;
for (int t=0;t<=4;t++) {
arr = fold(arr);
int res1 = findSum(arr, sum);
if (res1 < minRes) minRes = res1;
rev(arr, arrRev);
int res2 = findSum(arrRev, sum);
if (res2 < minRes) minRes = res2;
arrRev = fold(arrRev);
int res3 = findSum(arrRev, sum);
if (res3 < minRes) minRes = res3;
}
return minRes;
}
private static void rev(int[] arr, int[] arrRev){
for (int i = 0; i < arrRev.length; i++) {
arrRev[i] = arr[arr.length - 1 - i];
}
}
private static int[] fold(int[] a){
int[] arr = new int[a.length];
for (int i=0;a.length/2+i/2 < a.length && a.length/2-i/2-1 >= 0;i+=2){
arr[i] = a[a.length/2+i/2];
arr[i+1] = a[a.length/2-i/2-1];
}
if (a.length % 2 > 0) arr[a.length-1] = a[a.length-1];
else{
arr[a.length-2] = a[0];
arr[a.length-1] = a[a.length-1];
}
return arr;
}
private static int findSum(int[] arr, int sum){
int[] indices = new int[arr.length];
indices[0] = 0;
double half = Double.valueOf(sum)/2;
int localSum = Math.abs(arr[0]);
int minLocalSum = Integer.MAX_VALUE;
int placeIndex = 1;
for (int i=1;i<arr.length;i++){
if (localSum == half)
return 2*localSum - sum;
if (Math.abs(2*minLocalSum-sum) > Math.abs(2*localSum - sum))
minLocalSum = localSum;
if (localSum<half){
localSum += Math.abs(arr[i]);
indices[placeIndex++] = i;
}else{
if (placeIndex > 1) {
localSum -= Math.abs(arr[indices[--placeIndex]]);
i = indices[placeIndex];
}
}
}
return Math.abs(2*minLocalSum - sum);
}
The following is a rendering of the official answer in C++ (scoring 100% in task, correctness, and performance):
#include <cmath>
#include <algorithm>
#include <numeric>
#include <vector>
using namespace std;
int solution(vector<int> &A) {
// write your code in C++14 (g++ 6.2.0)
const int N = A.size();
int M = 0;
for (int i=0; i<N; i++) {
A[i] = abs(A[i]);
M = max(M, A[i]);
}
int S = accumulate(A.begin(), A.end(), 0);
vector<int> counts(M+1, 0);
for (int i=0; i<N; i++) {
counts[A[i]]++;
}
vector<int> dp(S+1, -1);
dp[0] = 0;
for (int a=1; a<M+1; a++) {
if (counts[a] > 0) {
for (int j=0; j<S; j++) {
if (dp[j] >= 0) {
dp[j] = counts[a];
} else if ((j >= a) && (dp[j-a] > 0)) {
dp[j] = dp[j-a]-1;
}
}
}
}
int result = S;
for (int i =0; i<(S/2+1); i++) {
if (dp[i] >= 0) {
result = min(result, S-2*i);
}
}
return result;
}
You are almost 90% of the way to the actual solution. It seems you understand recursion very well. Now you should apply dynamic programming to your program.
Dynamic programming is essentially memoization added to recursion, so that we do not calculate the same subproblems again and again. If the same subproblem is encountered, we return the previously calculated and memoized value. Memoization can be done with the help of a 2D array, say dp[][], where the first state represents the current index in the array and the second state represents the summation.
For this problem specifically, instead of making calls to both states from each state, you can sometimes greedily decide to skip one call.
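A minimal sketch of that 2D memoization (my names, not tested against Codility; the running sum is shifted by S, the total of the absolute values, so it can be used as an array index; memory is O(N*S), so this only fits small inputs):
#include <cstdlib>
#include <vector>
#include <algorithm>
using namespace std;

// dp[i][s + S] caches the best absolute sum reachable from index i with running sum s.
int minAbs(const vector<int> &a, size_t i, int s, int S, vector<vector<int> > &dp)
{
    if (i == a.size()) return abs(s);
    int &memo = dp[i][s + S]; // shift s from [-S, S] into [0, 2S]
    if (memo >= 0) return memo;
    return memo = min(minAbs(a, i + 1, s + a[i], S, dp),   // give a[i] a plus sign
                      minAbs(a, i + 1, s - a[i], S, dp));  // give a[i] a minus sign
}

int solutionMemo(vector<int> &A)
{
    vector<int> a;
    int S = 0;
    for (size_t i = 0; i < A.size(); ++i) { a.push_back(abs(A[i])); S += a.back(); }
    if (a.empty()) return 0;
    vector<vector<int> > dp(a.size(), vector<int>(2 * S + 1, -1));
    return minAbs(a, 0, 0, S, dp);
}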
I would like to provide the algorithm and then my implementation in C++. The idea is more or less the same as the official Codility solution, with some constant optimisation added.
Calculate the maximum absolute element of the inputs.
Calculate the absolute sum of the inputs.
Count the number of occurrence of each number in the inputs. Store the results in a vector hash.
Go through each input.
For each input, go through all possible sums of any number of inputs. It is a slight constant optimisation to go only up to half of the possible sums.
For each sum that has been made before, set the occurrence count of the current input.
Check for each potential sum equal to or greater than the current input whether this input has already been used before. Update the values at the current sum accordingly. We do not need to check for potential sums less than the current input in this iteration, since it is evident that it has not been used before.
The above nested loop will fill in each possible sum with a value greater than -1.
Go through this possible sum hash again to look for the closest sum to half that is possible to make. Eventually, the min abs sum will be the difference of this from the half multiplied by two as the difference will be added up in both groups as the difference from the median.
The runtime complexity of this algorithm is O(N * max(abs(A)) ^ 2), or simply O(N * M ^ 2): the outer loop iterates M times and the inner loop iterates over the possible sums, of which there are at most N * M in the worst case, so it is O(M * N * M).
The space complexity of this solution is O(N * M), because we allocate a hash of M items for the counts and a hash of S items for the sums, where S = N * M in the worst case.
#include <cstdlib>   // abs
#include <vector>
#include <algorithm> // max
using namespace std;

int solution(vector<int> &A)
{
int M = 0, S = 0;
for (const int e : A) { M = max(abs(e), M); S += abs(e); }
vector<int> counts(M + 1, 0);
for (const int e : A) { ++counts[abs(e)]; }
vector<int> sums(S + 1, -1);
sums[0] = 0;
for (int ci = 1; ci < counts.size(); ++ci) {
if (!counts[ci]) continue;
for (int si = 0; si < S / 2 + 1; ++si) {
if (sums[si] >= 0) sums[si] = counts[ci];
else if (si >= ci and sums[si - ci] > 0) sums[si] = sums[si - ci] - 1;
}
}
int min_abs_sum = S;
for (int i = S / 2; i >= 0; --i) if (sums[i] >= 0) return S - 2 * i;
return min_abs_sum;
}
Let me add my 50 cents on how to come up with the 100% score solution.
For me it was hard to understand the ultimate solution proposed earlier in this thread.
So I started with a warm-up solution scoring 63%, because it is O(N*N*M) and because it doesn't use the fact that M is quite a small value and that there are many duplicates in big arrays.
Here the key part is to understand how the array isSumPossible is filled and interpreted.
How to fill the array isSumPossible using the numbers in the input array:
If isSumPossible[sum] >= 0, i.e. sum is already possible even without the current number, then we set its value to 1: one count of the current number is left unused for this sum and goes into our "reserve", so we can use it later for greater sums:
if (isSumPossible[sum] >= 0) {
isSumPossible[sum] = 1;
}
If isSumPossible[sum] < 0, i.e. sum is not yet possible with all the input numbers considered previously, then let's check whether the smaller sum sum - number is already possible and whether we have our current number in "reserve" (isSumPossible[sum - number] == 1); if so, we do the following:
else if (sum >= number && isSumPossible[sum - number] == 1) {
isSumPossible[sum] = 0;
}
Here isSumPossible[sum] = 0 means that we have used number in composing sum, so it is now considered possible (>= 0), but we have no number left in "reserve" because we've used it (= 0).
How to interpret the filled array isSumPossible after considering all the numbers in the input array:
if isSumPossible[sum] >= 0, then sum is possible, i.e. it can be reached by summing some numbers of the given array;
if isSumPossible[sum] < 0, then sum can't be reached by summing any numbers of the given array.
The simpler thing to understand is why we search sums only in the interval [0, maxSum/2]: if we find a possible sum close to maxSum/2 (the ideal case being sum = maxSum/2), then the remaining numbers in the input array add up to maxSum - sum; giving that second group negative signs, the two groups annihilate, and in the ideal case maxSum/2 + (-1)*maxSum/2 = 0.
But 0 is the best-case solution and is not always reachable, so we should seek the minimal delta = (maxSum - sum) - sum, i.e. drive delta -> 0; that's why we have this:
int result = Integer.MAX_VALUE;
for (int sum = 0; sum < maxSum / 2 + 1; sum++) {
if (isSumPossible[sum] >= 0) {
result = Math.min(result, (maxSum - sum) - sum);
}
}
warm-up solution
public int solution(int[] A) {
if (A == null || A.length == 0) {
return 0;
}
if (A.length == 1) {
return A[0];
}
int maxSum = 0;
for (int i = 0; i < A.length; i++) {
A[i] = Math.abs(A[i]);
maxSum += A[i];
}
int[] isSumPossible = new int[maxSum + 1];
Arrays.fill(isSumPossible, -1);
isSumPossible[0] = 0;
for (int number : A) {
for (int sum = 0; sum < maxSum / 2 + 1; sum++) {
if (isSumPossible[sum] >= 0) {
isSumPossible[sum] = 1;
} else if (sum >= number && isSumPossible[sum - number] == 1) {
isSumPossible[sum] = 0;
}
}
}
int result = Integer.MAX_VALUE;
for (int sum = 0; sum < maxSum / 2 + 1; sum++) {
if (isSumPossible[sum] >= 0) {
result = Math.min(result, maxSum - 2 * sum);
}
}
return result;
}
And after this we can optimize it, using the fact that there are many duplicate numbers in big arrays, and we come up with the 100% score solution. It is O(M*(N*M)), because maxSum = N*M in the worst case.
public int solution(int[] A) {
if (A == null || A.length == 0) {
return 0;
}
if (A.length == 1) {
return A[0];
}
int maxNumber = 0;
int maxSum = 0;
for (int i = 0; i < A.length; i++) {
A[i] = Math.abs(A[i]);
maxNumber = Math.max(maxNumber, A[i]);
maxSum += A[i];
}
int[] count = new int[maxNumber + 1];
for (int i = 0; i < A.length; i++) {
count[A[i]]++;
}
int[] isSumPossible = new int[maxSum + 1];
Arrays.fill(isSumPossible, -1);
isSumPossible[0] = 0;
for (int number = 0; number < maxNumber + 1; number++) {
if (count[number] > 0) {
for (int sum = 0; sum < maxSum / 2 + 1; sum++) {
if (isSumPossible[sum] >= 0) {
isSumPossible[sum] = count[number];
} else if (sum >= number && isSumPossible[sum - number] > 0) {
isSumPossible[sum] = isSumPossible[sum - number] - 1;
}
}
}
}
int result = Integer.MAX_VALUE;
for (int sum = 0; sum < maxSum / 2 + 1; sum++) {
if (isSumPossible[sum] >= 0) {
result = Math.min(result, maxSum - 2 * sum);
}
}
return result;
}
I hope I've made it at least a little clearer.
Kotlin solution
Time complexity: O(N * max(abs(A))**2)
Score: 100%
import kotlin.math.*
fun solution(A: IntArray): Int {
val N = A.size
var M = 0
for (i in 0 until N) {
A[i] = abs(A[i])
M = max(M, A[i])
}
val S = A.sum()
val counts = MutableList(M + 1) { 0 }
for (i in 0 until N) {
counts[A[i]]++
}
val dp = MutableList(S + 1) { -1 }
dp[0] = 0
for (a in 1 until M + 1) {
if (counts[a] > 0) {
for (j in 0 until S) {
if (dp[j] >= 0) {
dp[j] = counts[a]
} else if (j >= a && dp[j - a] > 0) {
dp[j] = dp[j - a] - 1
}
}
}
}
var result = S
for (i in 0 until (S / 2 + 1)) {
if (dp[i] >= 0) {
result = minOf(result, S - 2 * i)
}
}
return result
}
I am trying to implement my own square root function which gives only the integral part of the square root, e.g. the square root of 3 is 1.
I saw the method here and tried to implement it:
int mySqrt(int x)
{
int n = x;
x = pow(2, ceil(log(n) / log(2)) / 2);
int y=0;
while (y < x)
{
y = (x + n / x) / 2;
x = y;
}
return x;
}
The above method fails for input 8. Also, I don't get why it should work.
Also, I tried the method here
int mySqrt(int x)
{
if (x == 0) return 0;
int x0 = pow(2, (log(x) / log(2))/2) ;
int y = x0;
int diff = 10;
while (diff>0)
{
x0 = (x0 + x / x0) / 2; diff = y - x0;
y = x0;
if (diff<0) diff = diff * (-1);
}
return x0;
}
In this second version, for input 3 the loop continues indefinitely (x0 toggles between 1 and 2).
I am aware that both are essentially versions of Newton's method, but I can't figure out why they fail in certain cases and how I could make them work for all cases. I think I have the correct logic in the implementation; I debugged my code, but still I can't find a way to make it work.
This one works for me:
#include <stdint.h> // uintmax_t

uintmax_t zsqrt(uintmax_t x)
{
if(x==0) return 0;
uintmax_t yn = x; // The 'next' estimate
uintmax_t y = 0; // The result
uintmax_t yp; // The previous estimate
do{
yp = y;
y = yn;
yn = (y + x/y) >> 1; // Newton step
}while(yn ^ yp); // (yn != yp) shortcut for dumb compilers
return y;
}
returns floor(sqrt(x))
Instead of testing for termination against a single previous estimate, test against two.
When I was writing this, I noticed the estimate would sometimes oscillate: if the exact result is a fraction, the algorithm can only jump between the two nearest integers. So terminating when the next estimate equals the one from two steps back prevents an infinite loop.
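A quick sanity check for the function above (my addition):
#include <assert.h>

int main(void)
{
    assert(zsqrt(0) == 0);
    assert(zsqrt(3) == 1);  // the input that made the questioner's second loop oscillate
    assert(zsqrt(8) == 2);  // the input that broke the first attempt
    assert(zsqrt(15) == 3);
    assert(zsqrt(16) == 4);
    return 0;
}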
Try this
int n, i; // n is the input number
i = 0;
while (i <= n)
{
    if ((i * i) == n)
    {
        cout << "The number has exact root : " << i << endl;
        break; // stop once the exact root is found
    }
    else if ((i * i) > n)
    {
        cout << "The integer part is " << (i - 1) << endl;
        break; // stop at the first i whose square exceeds n
    }
    i++;
}
Hope this helps.
You can try these C sqrt implementations:
// return the number that was multiplied by itself to reach N.
unsigned square_root_1(const unsigned num) {
unsigned a, b, c, d;
for (b = a = num, c = 1; a >>= 1; ++c);
for (c = 1 << (c & -2); c; c >>= 2) {
d = a + c;
a >>= 1;
if (b >= d)
b -= d, a += c;
}
return a;
}
// return the number that was multiplied by itself to reach N.
unsigned square_root_2(unsigned n){
unsigned a = n > 0, b;
if (n > 3)
for (a = n >> 1, b = (a + n / a) >> 1; b < a; a = b, b = (a + n / a) >> 1);
return a ;
}
Example of usage:
#include <assert.h>
int main(void){
unsigned num, res ;
num = 1847902954, res = square_root_1(num), assert(res == 42987);
num = 2, res = square_root_2(num), assert(res == 1);
num = 0, res = square_root_2(num), assert(res == 0);
}
Source
This is a problem I have been struggling with for a week, coming back just to give up after wasted hours...
I am supposed to find the coefficients for the following Laguerre polynomials:
P0(x) = 1
P1(x) = 1 - x
Pn(x) = ((2n - 1 - x) / n) * P(n-1) - ((n - 1) / n) * P(n-2)
I believe there is an error in my implementation, because for some reason the coefficients I get seem way too big. This is the output this program generates:
a1 = -190.234
a2 = -295.833
a3 = 378.283
a4 = -939.537
a5 = 774.861
a6 = -400.612
Description of code (given below):
If you scroll down the code a little, to the part where I declare the array, you'll find the given x's and y's.
The function polynomial just fills an array with the values of said polynomials for a certain x. It's a recursive function. I believe it works well, because I have checked the output values.
The gauss function finds the coefficients by performing Gaussian elimination on the output array. I think this is where the problems begin. I am wondering whether there's a mistake in this code, or perhaps my method of verifying the results is bad? I am trying to verify them like this:
-190.234 * 1.5^5 - 295.833 * 1.5^4 ... - 400.612 = -3017.817625 =/= 2
Code:
#include "stdafx.h"
#include <conio.h>
#include <iostream>
#include <iomanip>
#include <math.h>
using namespace std;
double polynomial(int i, int j, double **tab)
{
double n = i;
double **array = tab;
double x = array[j][0];
if (i == 0) {
return 1;
} else if (i == 1) {
return 1 - x;
} else {
double minusone = polynomial(i - 1, j, array);
double minustwo = polynomial(i - 2, j, array);
double result = (((2.0 * n) - 1 - x) / n) * minusone - ((n - 1.0) / n) * minustwo;
return result;
}
}
int gauss(int n, double tab[6][7], double results[7])
{
double multiplier, divider;
for (int m = 0; m <= n; m++)
{
for (int i = m + 1; i <= n; i++)
{
multiplier = tab[i][m];
divider = tab[m][m];
if (divider == 0) {
return 1;
}
for (int j = m; j <= n; j++)
{
if (i == n) {
break;
}
tab[i][j] = (tab[m][j] * multiplier / divider) - tab[i][j];
}
for (int j = m; j <= n; j++) {
tab[i - 1][j] = tab[i - 1][j] / divider;
}
}
}
double s = 0;
results[n - 1] = tab[n - 1][n];
int y = 0;
for (int i = n-2; i >= 0; i--)
{
s = 0;
y++;
for (int x = 0; x < n; x++)
{
s = s + (tab[i][n - 1 - x] * results[n-(x + 1)]);
if (y == x + 1) {
break;
}
}
results[i] = tab[i][n] - s;
}
}
int _tmain(int argc, _TCHAR* argv[])
{
int num;
double **array;
array = new double*[5];
for (int i = 0; i <= 5; i++)
{
array[i] = new double[2];
}
//i 0 1 2 3 4 5
array[0][0] = 1.5; //xi 1.5 2 2.5 3.5 3.8 4.1
array[0][1] = 2; //yi 2 5 -1 0.5 3 7
array[1][0] = 2;
array[1][1] = 5;
array[2][0] = 2.5;
array[2][1] = -1;
array[3][0] = 3.5;
array[3][1] = 0.5;
array[4][0] = 3.8;
array[4][1] = 3;
array[5][0] = 4.1;
array[5][1] = 7;
double W[6][7]; //n + 1
for (int i = 0; i <= 5; i++)
{
for (int j = 0; j <= 5; j++)
{
W[i][j] = polynomial(j, i, array);
}
W[i][6] = array[i][1];
}
for (int i = 0; i <= 5; i++)
{
for (int j = 0; j <= 6; j++)
{
cout << W[i][j] << "\t";
}
cout << endl;
}
double results[6];
gauss(6, W, results);
for (int i = 0; i < 6; i++) {
cout << "a" << i + 1 << " = " << results[i] << endl;
}
_getch();
return 0;
}
I believe your interpretation of the recursive polynomial generation either needs revising or is a bit too clever for me.
given P[0][5] = {1,0,0,0,0,...}; P[1][5]={1,-1,0,0,0,...};
then P[2] is a*P[0] + convolution(P[1], { c, d });
where a = -((n - 1) / n),
c = (2n - 1)/n and d = -1/n
This can be generalized: P[n] == a*P[n-2] + conv(P[n-1], { c, d });
In every step there is a polynomial multiplication by (c + d*x), which increases the degree by one (just by one...), and an addition of P[n-2] multiplied by the scalar a.
Then most likely the interpolation factor x is in range [0..1].
(Convolution just means that you should implement polynomial multiplication, which luckily is easy...)
[a,b,c,d]
* [e,f]
------------------
af,bf,cf,df +
ae,be,ce,de, 0 +
--------------------------
(= coefficients of the final polynomial)
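In code, that convolution is just two nested loops (a sketch; coefficients are stored lowest degree first, and the names are mine):
#include <vector>

std::vector<double> convolve(const std::vector<double> &p, const std::vector<double> &q)
{
    std::vector<double> r(p.size() + q.size() - 1, 0.0);
    for (size_t i = 0; i < p.size(); ++i)
        for (size_t j = 0; j < q.size(); ++j)
            r[i + j] += p[i] * q[j]; // x^i * x^j contributes to x^(i+j)
    return r;
}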
The definition of P1(x) = x - 1 is not implemented as stated. You have 1 - x in the computation.
I did not look any further.
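One more note on verification (a hedged sketch using the recurrence from the question): the coefficients coming out of the elimination multiply the Laguerre basis polynomials P0..P5, not the monomials 1, x, ..., x^5, so a check at x = 1.5 should evaluate a1*P0(1.5) + a2*P1(1.5) + ... + a6*P5(1.5), for example:
double fitted(const double coeff[6], double x)
{
    double p0 = 1.0;     // P0(x)
    double p1 = 1.0 - x; // P1(x)
    double value = coeff[0] * p0 + coeff[1] * p1;
    for (int n = 2; n < 6; ++n)
    {
        // Pn(x) = ((2n - 1 - x) / n) * P(n-1)(x) - ((n - 1) / n) * P(n-2)(x)
        double pn = ((2.0 * n - 1 - x) / n) * p1 - ((n - 1.0) / n) * p0;
        value += coeff[n] * pn;
        p0 = p1;
        p1 = pn;
    }
    return value; // should reproduce y = 2 at x = 1.5 if the coefficients are right
}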
I'm trying to implement the Miller-Rabin primality test according to the description in FIPS 186-3 C.3.1. No matter what I do, I cannot get it to work. The instructions are pretty specific, and I don't think I missed anything, and yet I'm getting true for non-prime values.
What did I do wrong?
#include <cstdint> // uint64_t
#include <cstdlib> // srand, rand
#include <ctime>   // time

template <typename R, typename S, typename T>
T POW(R base, S exponent, const T mod){
T result = 1;
while (exponent){
if (exponent & 1)
result = (result * base) % mod;
exponent >>= 1;
base = (base * base) % mod;
}
return result;
}
// used uint64_t to prevent overflow, but only testing with small numbers for now
bool MillerRabin_FIPS186(uint64_t w, unsigned int iterations = 50){
srand(time(0));
unsigned int a = 0;
uint64_t W = w - 1; // dont want to keep calculating w - 1
uint64_t m = W;
while (!(m & 1)){
m >>= 1;
a++;
}
// skipped getting wlen
// when i had this function using my custom arbitrary precision integer class,
// and could get len(w), getting it and using it in an actual RBG
// made no difference
for(unsigned int i = 0; i < iterations; i++){
uint64_t b = (rand() % (W - 3)) + 2; // 2 <= b <= w - 2
uint64_t z = POW(b, m, w);
if ((z == 1) || (z == W))
continue;
else
for(unsigned int j = 1; j < a; j++){
z = POW(z, 2, w);
if (z == W)
continue;
if (z == 1)
return 0;// Composite
}
}
return 1;// Probably Prime
}
This:
std::cout << MillerRabin_FIPS186(33) << std::endl;
std::cout << MillerRabin_FIPS186(35) << std::endl;
std::cout << MillerRabin_FIPS186(37) << std::endl;
std::cout << MillerRabin_FIPS186(39) << std::endl;
std::cout << MillerRabin_FIPS186(45) << std::endl;
std::cout << MillerRabin_FIPS186(49) << std::endl;
is giving me:
0
1
1
1
0
1
The only difference between your implementation and Wikipedia's is that you forgot the second return composite statement. You should have a return 0 at the end of the loop.
Edit: As pointed out by Daniel, there is a second difference. The continue is continuing the inner loop, rather than the outer loop like it's supposed to.
for(unsigned int i = 0; i < iterations; i++){
uint64_t b = (rand() % (W - 3)) + 2; // 2 <= b <= w - 2
uint64_t z = POW(b, m, w);
if ((z == 1) || (z == W))
continue;
else{
int continueOuter = 0;
for(unsigned int j = 1; j < a; j++){
z = POW(z, 2, w);
if (z == W){
continueOuter = 1;
break;
}
if (z == 1)
return 0;// Composite
}
if (continueOuter) {continue;}
}
return 0; //This is the line you're missing.
}
return 1;// Probably Prime
Also, if the input is even, it will always return probably prime since a is 0. You should add an extra check at the start for that.
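For example, a sketch of that guard (2 is the only even prime):
// at the very top of MillerRabin_FIPS186:
if (w == 2)
    return 1;              // 2 is prime
if (w < 2 || !(w & 1))
    return 0;              // 0, 1 and all other even numbers are not prime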
In the inner loop,
for(unsigned int j = 1; j < a; j++){
z = POW(z, 2, w);
if (z == W)
continue;
if (z == 1)
return 0;// Composite
}
you should break; instead of continue; when z == W. By continuing, in the next iteration of that loop, if there is one, z will become 1 and the candidate may be wrongly declared composite. Here, that happens for 17, 41, 73, 89 and 97 among the primes less than 100.
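With that change, the inner loop reads:
for(unsigned int j = 1; j < a; j++){
    z = POW(z, 2, w);
    if (z == W)
        break;      // this witness is consistent with w being prime; stop squaring
    if (z == 1)
        return 0;// Composite
}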