Minimum Coin Change Problem (Top-Down Approach) - C++

I have coded a top-down approach to solve the famous minimum coin change problem, as shown in the code below. But the code runs into a segmentation fault when the money is close to 44000. I have three varieties of coins: {1, 4, 5}. I don't know what is going on. I suspect I am running out of stack memory, but 44000 seems like a small value. So I tested it on an online IDE, and there it seems to work perfectly. I am running my code on NetBeans 8.2 (on a laptop with 8 GB RAM). Please help me.
Following is a snippet of my function:
//A top-down approach
int change_tpd(int m, vector<int>& coins, vector<int>& dp)
{
    if (dp[m] != -1)
        return dp[m];
    else if (m == 0)
        dp[m] = 0;
    else
    {
        int x = INT_MAX;
        for (int i = 0; i < coins.size(); ++i)
        {
            if (m - coins[i] >= 0)
                x = min(x, change_tpd(m - coins[i], coins, dp));
        }
        dp[m] = 1 + x;
    }
    return dp[m];
}

Maybe you could reduce the depth of the search tree by doing something like this:
Say that you have a set C = {c1, ..., cN} of N coin values sorted in decreasing order, with respective counts X = {x1, ..., xN}.
We are trying to minimize Sum_{1<=i<=N} xi subject to Sum_{1<=i<=N} ci*xi = V.
Given that we are trying to minimize the sum of the xi, any solution of the form S = ... + ci*xi + ... + cj*xj + ... with xj >= ci and i < j is dominated by the solution S' = ... + ci*(xi + cj) + ... + cj*(xj - ci) + ...: both have the same value, but S' uses ci - cj fewer coins.
By extension, a dominant solution satisfies xj <= ci - 1 for every 1 <= i < j <= N, and even more restrictively xj <= ci/gcd(ci,cj) - 1 for every 1 <= i < j <= N.
We obtain that way an upper-bound vector U for X, with xi <= ui for every 1 <= i <= N.
In U, u1 is unbounded and all the other values are bounded; as a result we can easily compute the maximum value Z = Sum_{2<=i<=N} ci*ui that a dominant solution can reach without using any coin c1.
By extension, any instance with V > Z can be reduced to the instance V' = V - ip((V-Z+c1-1)/c1)*c1, with ip(r) the integer part of r, by increasing the result by ip((V-Z+c1-1)/c1).
In your instance, C = {5, 4, 1}: u2 = 4 and u3 = 3, so Z = 4*4 + 1*3 = 19.
V = 44000 > Z, so V' = 44000 - ip((44000-19+4)/5)*5 = 44000 - 8797*5 = 15, and the result is incremented by 8797.
This also makes the DP array way smaller.
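For concreteness, here is a minimal sketch of that reduction for this instance (my own illustration; the helper names are made up). The small bottom-up DP also sidesteps the deep recursion that was overflowing the stack in the original top-down code:

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// Plain bottom-up DP; safe here because v stays tiny after the reduction.
int change_small(int v, const std::vector<int>& coins)
{
    std::vector<int> dp(v + 1, INT_MAX);
    dp[0] = 0;
    for (int m = 1; m <= v; ++m)
        for (int c : coins)
            if (c <= m && dp[m - c] != INT_MAX)
                dp[m] = std::min(dp[m], dp[m - c] + 1);
    return dp[v];
}

int change_reduced(int V, const std::vector<int>& coins) // coins sorted in decreasing order
{
    const int c1 = coins[0]; // largest coin, here 5
    const int Z = 19;        // max value a dominant solution reaches without c1
    int extra = 0;
    if (V > Z) {
        extra = (V - Z + c1 - 1) / c1; // ip((V - Z + c1 - 1)/c1)
        V -= extra * c1;               // now V <= Z, so the DP array is tiny
    }
    return change_small(V, coins) + extra;
}

int main()
{
    std::vector<int> coins{5, 4, 1};
    std::cout << change_reduced(44000, coins) << '\n'; // prints 8800 (= 44000/5)
}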

What's the time complexity of for (int i = 2; i < n; i = i*i)?

What would be the time complexity of the following loop?
for (int i = 2; i < n; i = i * i) {
    ++a;
}
While practicing runtime complexities, I came across this code and can't find the answer. I thought it would be sqrt(n), but that doesn't seem correct, since the loop produces the sequence 2, 4, 16, 256, ....
To understand the answer you must understand that the inverse of squaring is not the square root; the logarithm is.
This loop multiplies i by itself (i.e. it squares i, doubling the exponent each pass) and stops only when i >= n; therefore the number of iterations is O(log(log(n))) (logs to base 2, to be precise, because i=2 at initialization).
To illustrate this: for n = 256 the loop runs for i = 2, 4, 16, i.e. 3 steps, and log2(log2(256)) = 3, whereas sqrt(256) = 16 and log2(256) = 8 both overcount. (The original answer showed this comparison as a table image, which is not reproduced here.)
Each iteration squares i. Hence, if A(k) denotes the value of i after the k-th step, it can be written recursively like the following (suppose n is a power of 2):
A(k) = A(k-1)^2, with A(1) = 2
Now, you can expand it to find a pattern:
A(k) = A(k-2)^4 = A(k-3)^8 = ... = A(k-(k-1))^(2^(k-1)) = 2^(2^(k-1))
Hence, the loop iterates k steps, where k satisfies n = 2^(2^(k-1)), i.e. k = 1 + log2(log2(n)). Therefore, this loop iterates Theta(log(log(n))).
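If in doubt, a quick throwaway check confirms the Theta(log log n) count empirically:

#include <cmath>
#include <cstdio>

int main()
{
    // Count the loop's iterations for a few n and compare with log2(log2(n)).
    for (long long n : {16LL, 256LL, 65536LL, 1LL << 32}) {
        int steps = 0;
        for (long long i = 2; i < n; i = i * i)
            ++steps;
        std::printf("n=%-12lld steps=%d  log2(log2(n))=%.1f\n",
                    n, steps, std::log2(std::log2((double)n)));
    }
}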

Return a random object from a list based on properties

This is quite a strange issue for me because I can't visualize my problem correctly. Just so that you know, I'm not really asking for code, just for an idea to write an appropriate algorithm that would generate some weather based on its probability of occurring.
Here's what I want to achieve:
Let's say I have a WeatherClass, with a parameter called "Probability". I want to have different weather instances with their own probability of "happening".
enum Probability {
    Never = -1,
    Low = 0,
    Normal = 1,
    Always = 2
};

std::vector<WeatherClass> WeatherContainer;

WeatherClass Sunny = WeatherClass();
Sunny.Probability = Probability::Normal;
WeatherClass Rainy = WeatherClass();
Rainy.Probability = Probability::Low;
WeatherClass Cloudy = WeatherClass();
Cloudy.Probability = Probability::Normal;

WeatherContainer.push_back(Sunny);
WeatherContainer.push_back(Rainy);
WeatherContainer.push_back(Cloudy);
Now, my question is: what is the cleverest way to return some weather based on its own probability of happening?
I don't know why, but I can't figure this out. My first guess would be to have some kind of "luck" variable and compare it with the probability of each element, or something similar.
Any hint or advice would be really helpful.
Generally speaking, assume you have an integer sequence of numbers representing a linear increase in probability (starting from 1, not 0!):
1, 2, 3, 4, 5, 6, ..., n
Writing pn for some specific integer (a weather in your scheme, say "6") and Sn for the sum of all the enum integers, a linear probability can easily be defined as:
pn/Sn
This of course means the weather associated with "1" is least likely, and the one with "n" is most likely. Other schemes are possible, such as exponential; you just need to normalize properly. Also, in case you forgot your math:
Sn = (1+n)*n/2
Now you need to roll from this probability. One option, disregarding efficiency, to help you think about this:
Make a giant set, where each weather (or integer) appears as many times as the associated integer. 1 appears once, ..., n appears n times. This list is of size Sn by definition. Now use the random library:
int choice = rand() % Sn; // index between 0 and Sn-1 - the chosen probability indicator
You could of course randomize the list as well for extra randomness.
An example: in our array we have probmap = {1,2,2,3,3,3}. If choice == 4, then probmap[4] == 3. Suppose 3 corresponds to Sunny; then we have our result! There are of course ways to make this better, e.g. choosing different probability functions, but I think this is a good start.
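A minimal sketch of this bag idea (my own illustration; the weather ids 1..3 and the variable names are made up):

#include <cstdlib>
#include <ctime>
#include <vector>

int main()
{
    std::srand((unsigned)std::time(nullptr));
    // Weather ids 1..n, where id k appears k times: probmap = {1, 2, 2, 3, 3, 3}
    std::vector<int> probmap;
    for (int k = 1; k <= 3; ++k)
        for (int copies = 0; copies < k; ++copies)
            probmap.push_back(k);
    int choice = std::rand() % (int)probmap.size(); // index in [0, Sn-1]
    int weather = probmap[choice];                  // higher ids come up more often
    (void)weather; // look the id up in WeatherContainer here
}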
You can generate a random number between 0 and 3, subtract 1, cast it to Probability and search your vector for a matching entry.
auto result = rand();
result %= 4;
--result;
auto prob = (Probability)result;

auto index = -1;
for (auto I = 0; I < WeatherContainer.size(); ++I)
    if (WeatherContainer[I].Probability == prob)
    {
        index = I;
        break;
    }

if (index != -1)
{
    // Do your thing
}

Generate N random numbers within a range with a constant sum

I want to generate N random numbers drawn from a specific distribution (e.g. uniform) between [a,b] which sum to a constant C. I have tried a couple of solutions I could think of myself, and some proposed on similar threads, but most of them either work only for a limited form of the problem or I can't prove that the outcome still follows the desired distribution.
What I have tried:
Generate N random numbers, divide all of them by their sum, and multiply by the desired constant. This seems to work, but the result does not respect the rule that the numbers should be within [a,b].
Generate N-1 random numbers, add 0 and the desired constant C, and sort them. Then calculate the differences between each two consecutive numbers; the differences are the result. This again sums to C but has the same problem as the last method (the range can be bigger than [a,b]).
I also tried to generate random numbers while always keeping track of min and max in a way that keeps the desired sum and range, and came up with this code:
bool generate(function<int(int,int)> randomGenerator, int min, int max, int len, int sum, std::vector<int>& output)
{
    /**
     * Not possible to produce such a sequence
     */
    if (min * len > sum)
        return false;
    if (max * len < sum)
        return false;

    int curSum = 0;
    int left = sum - curSum;
    int leftIndexes = len - 1;
    int curMax = left - leftIndexes * min;
    int curMin = left - leftIndexes * max;

    for (int i = 0; i < len; i++) {
        int num = randomGenerator((curMin < min) ? min : curMin,
                                  (curMax > max) ? max : curMax);
        output.push_back(num);
        curSum += num;
        left = sum - curSum;
        leftIndexes--;
        curMax = left - leftIndexes * min;
        curMin = left - leftIndexes * max;
    }
    return true;
}
This seems to work, but the results are sometimes very skewed and I don't think they follow the original distribution (e.g. uniform). E.g.:
//10 numbers within [1:10] which sum to 50:
generate(uniform,1,10,10,50,output);
//result:
2,7,2,5,2,10,5,8,4,5 => sum=50
//This looks reasonable for uniform, but let's change to
//10 numbers within [1:25] which sum to 50:
generate(uniform,1,25,10,50,output);
//result:
24,12,6,2,1,1,1,1,1,1 => sum= 50
Notice how many ones exist in the output. This might sound reasonable because the range is larger. But they really don't look like a uniform distribution.
I am not even sure if it is possible to achieve what I want; maybe the constraints make the problem unsolvable.
In case you want the sample to follow a uniform distribution, the problem reduces to generating N random numbers with sum = 1. This, in turn, is a special case of the Dirichlet distribution, but it can also be computed more easily using the exponential distribution. Here is how:
1. Take a uniform sample v1, ..., vN with every vi between 0 and 1.
2. For all i, 1 <= i <= N, define ui := -ln(vi) (notice that ui > 0).
3. Normalize the ui as pi := ui/s, where s is the sum u1 + ... + uN.
4. The p1, ..., pN are uniformly distributed (on the simplex of dimension N-1) and their sum is 1.
5. You can now multiply these pi by the constant C you want and translate them by adding some other constant A, like this: qi := A + pi*C.
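A compact sketch of these five steps (my own code; the constants A and C are already specialized to a and b-a as in EDIT 3 below):

#include <cmath>
#include <random>
#include <vector>

// Returns n values in [a, b], uniform on the simplex;
// their sum is fixed at n*a + (b - a) by construction.
std::vector<double> random_fixed_sum(int n, double a, double b, unsigned seed)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::vector<double> u(n);
    double s = 0.0;
    for (double& ui : u) {
        double v = uni(gen);
        ui = -std::log(v > 0.0 ? v : 1e-300); // u_i := -ln v_i, guarding v == 0
        s += ui;
    }
    for (double& ui : u)
        ui = a + (ui / s) * (b - a);          // q_i := A + p_i * C
    return u;
}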
EDIT 3
In order to address some issues raised in the comments, let me add the following:
To ensure that the final random sequence falls in the interval [a,b], choose the constants A and C above as A := a and C := b-a, i.e., take qi = a + pi*(b-a). Since each pi is in the range (0,1), all qi will be in the range [a,b].
One cannot take the (negative) logarithm -ln(vi) if vi happens to be 0, because ln() is not defined at 0. The probability of such an event is extremely low. However, to ensure that no error is signaled, the generation of v1, ..., vN in item 1 above must treat any occurrence of 0 in a special way: consider -ln(0) as +infinity (remember: ln(x) -> -infinity when x -> 0). Then the sum s = +infinity, which means that this pi = 1 and all other pj = 0. Without this convention the sequence (0, ..., 1, ..., 0) would never be generated (many thanks to @Severin Pappadeux for this interesting remark).
As explained in the 4th comment attached to the question by @Neil Slater, it is logically impossible to fulfill all the requirements of the original framing. Therefore any solution must relax the constraints to a proper subset of the original ones. Other comments by @Behrooz seem to confirm that this would suffice in this case.
EDIT 2
One more issue has been raised in the comments:
Why doesn't rescaling a uniform sample suffice?
In other words, why should I bother to take negative logarithms?
The reason is that if we just rescale, the resulting sample won't distribute uniformly across the segment (0,1) (or [a,b] for the final sample).
To visualize this, let's think in 2D, i.e., consider the case N=2. A uniform sample (v1,v2) corresponds to a random point in the square with origin (0,0) and corner (1,1). Now, when we normalize such a point by dividing it by the sum s = v1+v2, we are projecting the point onto the diagonal, i.e., the line x + y = 1. (The original answer illustrated this with a picture, not reproduced here: radial lines near the principal diagonal from (0,0) to (1,1), drawn in green, are longer than those near the axes, drawn in orange.)
Because the longer lines carry more of the square's area, the projections tend to accumulate around the center of the projection line, where the scaled sample lives. This shows that a simple scaling won't produce a uniform sample on that diagonal. On the other hand, it can be proven mathematically that the negative logarithms do produce the desired uniformity. So, instead of copy-pasting a mathematical proof, I would invite everyone to implement both algorithms and check that the resulting plots behave as this answer describes.
(Note: here is a blog post on this interesting subject with an application to the Oil & Gas industry)
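Taking up that invitation, here is a throwaway sketch for the 2D case: it histograms the first coordinate under both methods. With the negative logarithms the histogram comes out flat (for N=2, p1 is exactly uniform), while plain rescaling visibly piles up around 1/2:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> uni(1e-12, 1.0);
    const int BINS = 10, SAMPLES = 200000;
    int rescale[BINS] = {0}, neglog[BINS] = {0};
    for (int t = 0; t < SAMPLES; ++t) {
        double v1 = uni(gen), v2 = uni(gen);
        double p_scale = v1 / (v1 + v2);                               // plain rescaling
        double p_log = -std::log(v1) / (-std::log(v1) - std::log(v2)); // exponential spacings
        ++rescale[std::min((int)(p_scale * BINS), BINS - 1)];
        ++neglog[std::min((int)(p_log * BINS), BINS - 1)];
    }
    for (int b = 0; b < BINS; ++b)
        std::printf("bin %d: rescale=%6d  neglog=%6d\n", b, rescale[b], neglog[b]);
}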
Let's try to simplify the problem.
By subtracting the lower bound, we can reduce it to finding N numbers in [0, b-a] such that their sum is C - N*a.
Renaming the parameters, we can look for N numbers in [0,m] whose sum is S.
Now the problem is akin to partitioning a segment of length S into N sub-segments whose lengths lie in [0,m].
I think the problem is simply not solvable as stated:
if S=1, N=1000 and m is anything above 0, the only possible repartition is one 1 and 999 zeroes, which is nothing like a random spread.
There is a correlation between N, m and S, and even picking random values will not make it disappear.
For the most uniform repartition, the lengths of the sub-segments will follow a Gaussian curve with a mean value of S/N.
If you tweak your random numbers differently, you will end up with some other bias, but in the end you will never have both a uniform [a,b] repartition and a total of C, unless the upper bound b of your interval happens to be 2C/N - a (equivalently, the mean (a+b)/2 equals C/N).
For my answer I'll assume that we want a uniform distribution.
Since we have a uniform distribution, every tuple summing to C has the same probability of occurring. For example, for a = 2, b = 4, C = 12, N = 5 we have 15 possible tuples. Of them, 10 start with 2, 4 start with 3, and 1 starts with 4. This gives the idea of selecting a random number from 1 to 15 in order to choose the first element: from 1 to 10 we select 2, from 11 to 14 we select 3, and for 15 we select 4. Then we continue recursively.
#include <time.h>
#include <random>
#include <cstdio>

std::default_random_engine generator(time(0));
int a = 2, b = 4, n = 5, c = 12, numbers[5];

// Calculate how many combinations of n numbers have sum c
int calc_combinations(int n, int c) {
    if (n == 1) return (c >= a) && (c <= b);
    int sum = 0;
    for (int i = a; i <= b; i++) sum += calc_combinations(n - 1, c - i);
    return sum;
}

// Chooses a random array of n elements having sum c
void choose(int n, int c, int *numbers) {
    if (n == 1) { numbers[0] = c; return; }
    int combinations = calc_combinations(n, c);
    std::uniform_int_distribution<int> distribution(0, combinations - 1);
    int s = distribution(generator);
    int sum = 0;
    for (int i = a; i <= b; i++) {
        if ((sum += calc_combinations(n - 1, c - i)) > s) {
            numbers[0] = i;
            choose(n - 1, c - i, numbers + 1);
            return;
        }
    }
}

int main() {
    choose(n, c, numbers);
    for (int i = 0; i < n; i++) std::printf("%d\n", numbers[i]); // print the chosen tuple
}
Possible outcome:
2
2
3
2
3
This algorithm won't scale well for large N because of overflows in the calculation of the number of combinations (unless we use a big-integer library), the time needed for this calculation, and the need for arbitrarily large random numbers.
Well, for N = 10000, can't we have one number in there that is not random?
Maybe generate the sequence until sum > C - max is reached, and then just append the one number needed to bring the total to C.
One such value in 10000 is more like very small noise in the system.
Although this is an old topic, I think I have an idea. Say we want N random numbers whose sum is C, each random number between a and b. To solve the problem, we create N holes and prepare C balls. Each time, we ask a hole, "Do you want another ball?". If no, we pass to the next hole; otherwise, we put a ball into that hole. Each hole has a cap value of b - a; if a hole reaches its cap, we always pass to the next hole.
Example:
3 random numbers between 0 and 2 whose sum is 5.
Simulation result:
1st run: -+-
2nd run: ++-
3rd run: ---
4th run: +*+
final: 2, 2, 1
- : refuses the ball
+ : accepts the ball
* : full, always pass
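A rough sketch of this idea in code (my own interpretation: for a > 0 each hole is first given the baseline a and only the C - N*a extra balls are distributed; feasibility, N*a <= C <= N*b, is assumed):

#include <random>
#include <vector>

std::vector<int> balls_into_holes(int n, int a, int b, int c, unsigned seed)
{
    std::mt19937 gen(seed);
    std::bernoulli_distribution wants_ball(0.5); // "Do you want another ball?"
    std::vector<int> holes(n, 0);
    int balls = c - n * a;                       // extra balls above the baseline a
    const int cap = b - a;                       // per-hole cap
    while (balls > 0) {
        for (int i = 0; i < n && balls > 0; ++i) {
            if (holes[i] == cap)                 // full hole: always pass
                continue;
            if (wants_ball(gen)) {
                ++holes[i];
                --balls;
            }
        }
    }
    for (int& h : holes)
        h += a;                                  // shift back into [a, b]
    return holes;
}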

Finding the remainder of a large multiplication in C++

I would like to ask for some help concerning the following problem. I need to create a function that will multiply two integers and extract the remainder of this multiplication divided by a certain number (in short, (x*y)%A).
I am using unsigned long long int for this problem, but A = 15! in this case, and both x and y have been reduced modulo A beforehand. Thus x*y can still be greater than 2^64 - 1, therefore overflowing.
I did not want to use external libraries. Could anyone help me design a short algorithm to solve this problem?
Thanks in advance.
If you already have x and y reduced mod A, why not use that? Something like the following: if
x = int_x*A + mod_x
y = int_y*A + mod_y
then
(x*y)%A = ((int_x*A + mod_x)*(int_y*A + mod_y))%A = (mod_x*mod_y)%A,
since every other term of the expansion carries a factor of A. And mod_x*mod_y should be much smaller, right?
EDIT:
If you are trying to find the modulus with respect to a large number like 10e11, I guess you would have to use another method. While not really efficient, something like this (pseudocode) would work:

MAX_INT = largest representable integer
larger = max(mod_x, mod_y)              // the larger factor
smaller = min(mod_x, mod_y)             // the smaller factor (min, not max)
largest_part = floor(MAX_INT / smaller) // biggest chunk whose product with smaller cannot overflow
if largest_part > larger:
    // no risk of overflow: use the normal routine
else:
    parts = []
    while larger >= largest_part:
        parts.append(largest_part)
        larger -= largest_part
    parts.append(larger)                // the leftover chunk
    // now multiply each part by smaller, reduce each product mod A,
    // and add the partial results together mod A
If you understand this code and the setup, you should be able to figure out the rest
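Since A = 15! is roughly 2^40.25, mod_x*mod_y can indeed exceed 2^64, so some such splitting is required. As an alternative sketch (a standard trick, not the method above): the binary "Russian peasant" mulmod never forms the full product and only assumes A < 2^63 so the additions cannot wrap. (On GCC/Clang, ((unsigned __int128)x * y) % A is simpler still.)

#include <cstdint>

// Computes (x * y) % m without overflow, assuming m < 2^63.
uint64_t mulmod(uint64_t x, uint64_t y, uint64_t m)
{
    uint64_t result = 0;
    x %= m;
    while (y > 0) {
        if (y & 1)                     // add x once per set bit of y
            result = (result + x) % m;
        x = (x + x) % m;               // double x modulo m
        y >>= 1;
    }
    return result;
}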

Algorithm for finding the maximum number of non-overlapping lines on the x axis

I'm not exactly sure how to ask this, but I'll try to be as specific as possible.
Imagine a Tetris screen with only rectangles, of different shapes, falling to the bottom.
I want to compute the maximum number of rectangles that I can fit one next to the other without any of them overlapping. I've called them lines in the title because I'm actually only interested in each rectangle's length when computing, i.e. the segment parallel to the x axis that it is falling towards.
So basically I have a custom type with a start and end, both integers between 0 and 100. Say we have a list of these rectangles ranging from 1 to n. rectangle_n.start (unless it's the rectangle closest to the origin) has to be > rectangle_(n-1).end so that they will never overlap.
I'm reading the rectangle coordinates (both are x axis coordinates) from a file with random numbers.
As an example:
consider this list of rectangle type objects
rectangle_list {start, end} = {{1,2}, {3,5}, {4,7}, {9,12}}
We can observe that the 3rd object's start coordinate, 4, is less than the previous rectangle's end coordinate, which is 5. So in sorting this list, I would have to remove either the 2nd or the 3rd object so that they don't overlap.
I'm not sure if there is a name for this kind of problem, so I didn't know how else to describe it. I'm interested in an algorithm that can be applied to a list of such objects and would sort them out accordingly.
I've tagged this with C++ because the code I'm writing is C++, but any language would do for the algorithm.
You are essentially solving the following problem. Suppose we have n intervals {[x_1,y_1),[x_2,y_2),...,[x_n,y_n)} with x_1<=x_2<=...<=x_n. We want to find a maximal subset of these intervals such that there are no overlaps between any intervals in the subset.
The naive solution is dynamic programming. It is guaranteed to find the best solution. Let f(i), 0<=i<=n, be the size of the maximal subset up to interval [x_i,y_i). We have the equation (in LaTeX):
f(i)=\max_{0<=j<i}{f(j)+d(i,j)}
where d(i,j)=1 if and only if [x_i,y_i) and [x_j,y_j) have no overlap; otherwise d(i,j) is zero. You can iteratively compute f(i), starting from f(0)=0. f(n) gives the size of the maximal subset. To get the actual subset, you need to keep a separate array s(i)=\argmax_{0<=j<i}{f(j)+d(i,j)} and backtrack along it to recover the 'path'.
This is an O(n^2) algorithm: you need to compute each f(i), and computing f(i) takes i tests. I think there should be an O(n log n) algorithm, but I am not so sure.
EDIT: an implementation in Lua:
function find_max(list)
    local ret, f, b = {}, {}, {}
    f[0], b[0] = 0, 0
    table.sort(list, function(a, b) return a[1] < b[1] end)
    -- dynamic programming
    for i, x in ipairs(list) do
        local max, max_j = 0, -1
        x = list[i]
        for j = 0, i - 1 do
            local e = j > 0 and list[j][2] or 0
            local score = e <= x[1] and 1 or 0
            if f[j] + score > max then
                max, max_j = f[j] + score, j
            end
        end
        f[i], b[i] = max, max_j
    end
    -- backtrack
    local max, max_i = 0, -1
    for i = 1, #list do
        if f[i] > max then -- don't use >= here
            max, max_i = f[i], i
        end
    end
    local i, ret = max_i, {}
    while true do
        table.insert(ret, list[i])
        i = b[i]
        if i == 0 then break end
    end
    return ret
end

local l = find_max({{1,2}, {4,7}, {3,5}, {8,11}, {9,12}})
for _, x in ipairs(l) do
    print(x[1], x[2])
end
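For the record, the O(n log n) algorithm suspected above does exist: it is the classic greedy for activity selection. Sort by right endpoint and keep every interval that starts at or after the end of the last interval kept; a sketch, using the same half-open [start, end) convention:

#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

std::vector<std::pair<int, int>> max_non_overlapping(std::vector<std::pair<int, int>> v)
{
    std::sort(v.begin(), v.end(), [](const std::pair<int, int>& a,
                                     const std::pair<int, int>& b) {
        return a.second < b.second; // earliest right endpoint first
    });
    std::vector<std::pair<int, int>> kept;
    int last_end = INT_MIN;
    for (const auto& iv : v)
        if (iv.first >= last_end) { // no overlap with the last kept interval
            kept.push_back(iv);
            last_end = iv.second;
        }
    return kept;
}
// max_non_overlapping({{1,2}, {3,5}, {4,7}, {9,12}}) keeps {1,2}, {3,5}, {9,12}.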
The name of this problem is bin packing; it is usually considered a hard problem, but it can be computed reasonably well for a small number of bins.
Here is a video explaining common approaches to this problem.
EDIT: By hard problem, I mean that some kind of brute force has to be employed. You will have to evaluate a lot of solutions and reject most of them, so usually you need some kind of evaluation mechanism. You need to be able to compare solutions, such as "this solution packs 4 rectangles with an area of 15" is better than "this solution packs 3 rectangles with an area of 16".
I can't think of a shortcut, so you may have to enumerate the power set in descending order of size and stop on the first match.
The straightforward way to do this is to enumerate combinations of decreasing size. You could do something like this in C++11:
template <typename I>
std::set<Span> find_largest_non_overlapping_subset(I start, I finish) {
    std::set<Span> result;
    // Start from the full size (hence the +1 before the post-decrement) and
    // try ever smaller combination sizes until one without overlaps is found.
    for (size_t n = std::distance(start, finish) + 1; n-- && result.empty();) {
        enumerate_combinations(start, finish, n, [&](I begin, I end) {
            if (!has_overlaps(begin, end)) {
                result.insert(begin, end);
                return false; // stop enumerating: a maximal subset was found
            }
            return true;      // keep enumerating
        });
    }
    return result;
}
The implementation of enumerate_combinations is left as an exercise. I assume you already have has_overlaps.
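For completeness, here is one possible shape for that helper (my own guess at its contract, not the answer's actual code): it visits every n-element combination and stops early when the callback returns false. Note that this sketch hands the callback iterators into a scratch vector, so the lambda above would take that iterator type rather than I:

#include <cstddef>
#include <iterator>
#include <vector>

template <typename I, typename T, typename F>
bool enumerate_rec(I it, I finish, size_t n, std::vector<T>& combo, F& visit)
{
    if (n == 0)                               // combination complete: hand it over
        return visit(combo.begin(), combo.end());
    for (; it != finish; ++it) {
        combo.push_back(*it);
        bool keep_going = enumerate_rec(std::next(it), finish, n - 1, combo, visit);
        combo.pop_back();
        if (!keep_going)
            return false;                     // the caller asked to stop
    }
    return true;
}

template <typename I, typename F>
bool enumerate_combinations(I start, I finish, size_t n, F visit)
{
    std::vector<typename std::iterator_traits<I>::value_type> combo;
    combo.reserve(n);
    return enumerate_rec(start, finish, n, combo, visit);
}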