How to solve the 0-1 knapsack problem by iteration? [duplicate] - c++

Closed 11 years ago.
Possible Duplicate:
Design patterns for converting recursive algorithms to iterative ones
I can only figure out a recursive way; how do I transform it into iteration?
For simplicity, my recursive version only finds the result value.
#include <iostream>
// 0-1 bag
using namespace std;

int weight[5] = {50, 30, 45, 25, 5};
int value[5]  = {200, 180, 225, 200, 50};

int valueAfterTake(int objectNO, int bagSpaceLeft)
{
    int take, notTake;
    if (objectNO < 0)
    {
        return 0;
    }
    else
    {
        take    = value[objectNO] + valueAfterTake(objectNO - 1, bagSpaceLeft - weight[objectNO]);
        notTake = valueAfterTake(objectNO - 1, bagSpaceLeft);
    }
    if (weight[objectNO] > bagSpaceLeft)
    {
        return notTake;
    }
    if (take > notTake)
    {
        return take;
    }
    else
    {
        return notTake;
    }
}

int main()
{
    cout << valueAfterTake(4, 100) << endl;
    return 0;
}

Given what you really want, I think this is a duplicate of Design patterns for converting recursive algorithms to iterative ones.

From the algorithm in 0-1 knapsack problem you can put everything in a table indexed by i and w.
Make a two-dimensional table filled with a NO_VALUE constant (-1 or something like that).
Then, when you need the value of m[i, w], work out which table entries it depends on, check whether they have already been computed (by comparing against NO_VALUE), and compute them if they haven't.
This trades some space for much less execution time, because you never compute the same value twice.
edit:
In addition, from there you can keep looking for patterns, like only ever using one row or one diagonal, and cut everything you don't need out of the table.
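The same table can also be filled bottom-up with plain loops, which directly answers the "by iteration" part of the question. A minimal sketch using the arrays from the question (the names dp and CAPACITY are mine, and the table is compressed to a single row, as hinted at above):

#include <algorithm>
#include <iostream>

int weight[5] = {50, 30, 45, 25, 5};
int value[5]  = {200, 180, 225, 200, 50};
const int CAPACITY = 100;

int main()
{
    int dp[CAPACITY + 1] = {0};   // dp[w] = best value with capacity w, using the items seen so far

    for (int i = 0; i < 5; ++i)                       // consider items one by one
        for (int w = CAPACITY; w >= weight[i]; --w)   // go downwards so each item is used at most once
            dp[w] = std::max(dp[w], dp[w - weight[i]] + value[i]);

    std::cout << dp[CAPACITY] << std::endl;           // same answer as valueAfterTake(4, 100)
    return 0;
}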

Related

Chain of doughnut - codechef

I have been trying to solve this problem for the last two days, but I am not getting the correct results.
The accepted solutions sort the chain lengths first; I didn't understand why they do that.
Only the first task passes. For the second task the answer is wrong, and for the third the time limit is exceeded.
Here is my code:
#include <iostream>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long n = 0;
        int f = 0, c = 0, cuts = 0;
        cin >> n >> c;
        int toJoint = c - 1;
        int A[c];
        for (int i = 0; i < c; i++)
            cin >> A[i];
        if (c > 2) {
            for (int i = 0; i < c; i++) {
                if (A[i] == 1) {
                    f++;
                    cuts++;
                    toJoint -= 2;
                    if (toJoint <= 1) break;
                }
            }
            if (toJoint > 0) {
                if (f == 0) cout << toJoint << endl;
                else cout << (cuts + toJoint) << endl;
            }
            else cout << cuts << endl;
        }
        else if (c == 1) cout << 0 << endl;
        else cout << ++cuts << endl;
    }
    return 0;
}
You have the following operations, each of which frees a donut that can then be used to link two chains together:
(1) Cut a chain (length >= 3) in the middle (0 fewer chains)
(2) Cut a chain (length >= 2) at the end (1 fewer chain)
(3) Cut a single donut (2 fewer chains)
An optimal solution never needs to use (1), thus the objective is to make sure that as many operations as possible are (3)s, the rest being (2)s. The obvious best way to do this is to repeatedly cut a donut from the end of the smallest chain and use it to stick together the biggest two chains. This is the reason for sorting the chains. Even so, it might be faster to make the lengths into a heap, and only extract the minimum element as many times as we need to.
Now to the question: your algorithm only uses operation (3) on single donuts, but doesn't try to make more single donuts by cutting donuts from the end of the smallest chain. And so, as Jarod42 points out with a counterexample, it isn't optimal.
I should also point out that your use of VLAs
int A[c];
is a non-standard extension. To be strictly conforming, you should use std::vector instead.
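For instance, a drop-in replacement for the declaration and the read loop (a sketch, reusing your variable names; it needs #include <vector> at the top):

std::vector<int> A(c);          // instead of the VLA  int A[c];
for (int i = 0; i < c; i++)
    std::cin >> A[i];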
For completeness, here's an example:
// A holds the chain lengths (read into a std::vector as above);
// M is the number of chains remaining.
std::sort(A.begin(), A.end());
int smallest_index = 0;
int cuts = 0;
while (M > 1)
{
    int smallest = A[smallest_index];
    if (smallest <= M - 2)
    {
        // Obliterate the smallest chain, using all its donuts to link other chains
        smallest_index++;
        M -= smallest + 1;
        cuts += smallest;
    }
    else
    {
        // Cut M - 2 donuts from the smallest chain - linking the other chains into one.
        // Now there are two chains, requiring one more cut to link
        cuts += M - 1;
        break;
    }
}
return cuts;
(disclaimer: only tested on the sample data, may fail in corner-cases or not work at all.)

Need suggestion to improve speed for word break (dynamic programming)

The problem is: Given a string s and a dictionary of words dict, determine if s can be segmented into a space-separated sequence of one or more dictionary words.
For example, given
s = "hithere",
dict = ["hi", "there"].
Return true because "hithere" can be segmented as "hi there".
My implementation is as below. This code is ok for normal cases. However, it suffers a lot for input like:
s = "aaaaaaaaaaaaaaaaaaaaaaab", dict = {"aa", "aaaaaa", "aaaaaaaa"}.
I want to memoize the processed substrings; however, I cannot get it right. Any suggestions on how to improve? Thanks a lot!
class Solution {
public:
    bool wordBreak(string s, unordered_set<string>& wordDict) {
        int len = s.size();
        if (len < 1) return true;
        for (int i(0); i < len; i++) {
            string tmp = s.substr(0, i + 1);
            if ((wordDict.find(tmp) != wordDict.end())
                && (wordBreak(s.substr(i + 1), wordDict)))
                return true;
        }
        return false;
    }
};
It's logically a two-step process. Find all dictionary words within the input, consider the found positions (begin/end pairs), and then see if those words cover the whole input.
So you'd get for your example
aa: {0,2}, {1,3}, {2,4}, ... {20,22}
aaaaaa: {0,6}, {1,7}, ... {16,22}
aaaaaaaa: {0,8}, {1,9} ... {14,22}
This is a graph, with nodes 0-23 and a bunch of edges. But node 23 (the end of the string, just past the b) is entirely unreachable: it has no incoming edge. This is now a simple graph-theory problem.
Finding all places where dictionary words occur is pretty easy if your dictionary is organized as a trie. But even an std::map is usable, thanks to its equal_range method. You have what appears to be an O(N*N) nested loop for begin and end positions, with O(log N) lookup of each word. But you can quickly determine whether s.substr(begin, end) is still a viable prefix, and which dictionary words remain with that prefix (a sketch of that check is below).
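For instance, with a plain sorted container the viability check can look like this (a sketch; viable_prefix is my name, and it uses lower_bound rather than equal_range, but the idea is the same):

#include <set>
#include <string>

// true if at least one dictionary word starts with `prefix`:
// the smallest word not less than `prefix` is the only candidate to check
bool viable_prefix(const std::set<std::string>& dict, const std::string& prefix)
{
    auto it = dict.lower_bound(prefix);
    return it != dict.end() && it->compare(0, prefix.size(), prefix) == 0;
}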
Also note that you can build the graph lazily. Starting at begin=0 you find edges {0,2}, {0,6} and {0,8} (and no others). You can now search nodes 2, 6 and 8. You even have a good algorithm - A* - that suggests you try node 8 first (reachable in just 1 edge). Thus, you'll find nodes {8,10}, {8,14} and {8,16} etc. As you see, you'll never need to build the part of the graph that contains {1,3}, as it's simply unreachable.
Using graph theory, it's easy to see why your brute-force method breaks down. You arrive at node 8 (aaaaaaaa.aaaaaaaaaaaaaab) repeatedly, and each time search the subgraph from there on.
A further optimization is to run a bidirectional A*. This would give you a very fast solution. In the backward half of the first step, you look for edges leading into node 23, i.e. dictionary words ending in b. As none exist, you immediately know that node 23 is isolated.
In your code, you are not using dynamic programming because you are not remembering the subproblems that you have already solved.
You can enable this remembering, for example, by storing the results based on the starting position of the string s within the original string, or even based on its length (because anyway the strings you are working with are suffixes of the original string, and therefore its length uniquely identifies it). Then, in the beginning of your wordBreak function, just check whether such length has already been processed and, if it has, do not rerun the computations, just return the stored value. Otherwise, run computations and store the result.
Note also that your approach with unordered_set will not allow you to obtain the fastest solution. The fastest solution that I can think of is O(N^2) by storing all the words in a trie (not in a map!) and following this trie as you walk along the given string. This achieves O(1) per loop iteration not counting the recursion call.
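For what it's worth, here is a minimal sketch of the memoization described above, keyed by the start position within the original string (equivalently, by the suffix length); the names memo and wordBreakFrom are mine, not from the thread:

#include <string>
#include <unordered_set>
#include <vector>
using namespace std;

// memo[start]: -1 = unknown, 0 = suffix cannot be segmented, 1 = it can
bool wordBreakFrom(const string& s, size_t start,
                   const unordered_set<string>& dict, vector<int>& memo)
{
    if (start == s.size()) return true;
    if (memo[start] != -1) return memo[start];
    for (size_t len = 1; start + len <= s.size(); ++len) {
        if (dict.count(s.substr(start, len)) &&
            wordBreakFrom(s, start + len, dict, memo))
            return memo[start] = 1;
    }
    return memo[start] = 0;
}

bool wordBreak(const string& s, const unordered_set<string>& dict)
{
    vector<int> memo(s.size(), -1);
    return wordBreakFrom(s, 0, dict, memo);
}

Each suffix is now solved at most once, which tames inputs like the all-a string above.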
Thanks for all the comments. I changed my previous solution to the implementation below. At this point, I didn't explore to optimize on the dictionary, but those insights are very valuable and are very much appreciated.
For the current implementation, do you think it can be further improved? Thanks!
class Solution {
public:
    bool wordBreak(string s, unordered_set<string>& wordDict) {
        int len = s.size();
        if (len < 1) return true;
        if (wordDict.size() == 0) return false;
        vector<bool> dq(len + 1, false);
        dq[0] = true;
        for (int i(0); i < len; i++) {              // start point
            if (dq[i]) {
                for (int j(1); j <= len - i; j++) { // length of substring, 1:len
                    if (!dq[i + j]) {
                        auto pos = wordDict.find(s.substr(i, j));
                        dq[i + j] = dq[i + j] || (pos != wordDict.end());
                    }
                }
            }
            if (dq[len]) return true;
        }
        return false;
    }
};
Try the following:
class Solution {
public:
    bool wordBreak(string s, unordered_set<string>& wordDict)
    {
        for (auto w : wordDict)
        {
            auto pos = s.find(w);
            if (pos != string::npos)
            {
                if (wordBreak(s.substr(0, pos), wordDict) &&
                    wordBreak(s.substr(pos + w.size()), wordDict))
                    return true;
            }
        }
        return false;
    }
};
Essentially, once you find a match, remove the matching part from the input string and continue testing on the smaller remaining input.

Time complexity issues with multimap

I created a program that finds the median of a list of numbers. The list of numbers is dynamic in that numbers can be removed and inserted (duplicate numbers can be entered) and during this time, the new median is re-evaluated and printed out.
I created this program using a multimap because
1) the benefit of it already being sorted,
2) easy insertion, deletion, searching (since multimap implements binary search)
3) duplicate entries are allowed.
The constraints for the number of entries + deletions (represented as N) are: 0 < N <= 100,000.
The program I wrote works and prints out the correct median, but it isn't fast enough. I know that unordered_multimap is faster than multimap, but the problem with unordered_multimap is that I would have to sort it. I have to sort it because, to find the median, you need a sorted list. So my question is, would it be practical to use an unordered_multimap and then quicksort the entries, or would that just be ridiculous? Would it be faster to just use a vector, quicksort the vector, and use a binary search? Or maybe I am forgetting some fabulous solution out there that I haven't even thought of.
Though I'm not new to C++, I will admit that my skills with time complexity are somewhat mediocre.
The more I look at my own question, the more I'm beginning to think that just using a vector with quicksort and binary search would be better since the data structures basically already implement vectors.
If you have only a few updates, use an unsorted std::vector plus the std::nth_element algorithm, which is O(N). You don't need a full sort, which is O(N log N).
live demo of nth_element:
#include <algorithm>
#include <iterator>
#include <iostream>
#include <ostream>
#include <vector>
using namespace std;

template<typename RandomAccessIterator>
RandomAccessIterator median(RandomAccessIterator first, RandomAccessIterator last)
{
    RandomAccessIterator m = first + distance(first, last) / 2; // handle even middle if needed
    nth_element(first, m, last);
    return m;
}

int main()
{
    vector<int> values = {5, 1, 2, 4, 3};
    cout << *median(begin(values), end(values)) << endl;
}
Output is:
3
If you have many updates and only remove from the middle, use two heaps as comocomocomocomo suggests. If you used a fibonacci_heap, you would also get O(N) removal from an arbitrary position (if you don't have a handle to it).
If you have many updates and need O(log N) removal from arbitrary places, then use two multisets as ipc suggests.
If your purpose is to keep track of the median on the fly, as elements are inserted/removed, you should use a min-heap and a max-heap. Each one would contain one half of the elements... There was a related question a couple of days ago: How to implement a Median-heap
Though, if you need to search for specific values in order to remove elements, you still need some kind of map.
You said that it is slow. Are you iterating from the beginning of the map to the (N/2)'th element every time you need the median? You don't need to. You can keep track of the median by maintaining an iterator pointing to it at all times and a counter of the number of elements less than that one. Every time you insert/remove, compare the new/old element with the median and update both iterator and counter.
Another way of seeing it is as two multimaps containing half the elements each. One holds the elements less than the median (or equal) and the other holds those greater. The heaps do this more efficiently, but they don't support searches.
If you only need the median a few times you can use the "select" algorithm. It is described in Sedgewick's book. It takes O(n) time on average. It is similar to quick sort but it does not sort completely. It just partitions the array with random pivots until, eventually, it gets to "select" on one side the smaller m elements (m=(n+1)/2). Then you search for the greatest of those m elements, and this is the median.
Here is how you could implement that in O(log N) per update:
#include <cassert>
#include <set>

template <typename T>
class median_set {
public:
    std::multiset<T> below, above;

    // O(log N)
    void rebalance()
    {
        int diff = static_cast<int>(above.size()) - static_cast<int>(below.size());
        if (diff > 0) {
            below.insert(*above.begin());
            above.erase(above.begin());
        } else if (diff < -1) {
            above.insert(*below.rbegin());
            below.erase(below.find(*below.rbegin()));
        }
    }

public:
    // O(1)
    bool empty() const { return below.empty() && above.empty(); }

    // O(1)
    T const& median() const
    {
        assert(!empty());
        return *below.rbegin();
    }

    // O(log N)
    void insert(T const& value)
    {
        if (!empty() && value > median())
            above.insert(value);
        else
            below.insert(value);
        rebalance();
    }

    // O(log N) - assumes `value` is actually present
    void erase(T const& value)
    {
        if (value > median())
            above.erase(above.find(value));
        else
            below.erase(below.find(value));
        rebalance();
    }
};
(Work in action with tests)
The idea is the following:
Keep track of the values above and below the median in two sets
If a new value is added, add it to the corresponding set. Always ensure that the set below has exactly 0 or 1 more elements than the other.
If a value is removed, remove it from the set and make sure that the condition still holds.
You can't use priority_queues because they won't let you remove an arbitrary item.
Can anyone help me work out, in detail, the space and time complexity of my following C# program?
// Passing the integer array, to find the extreme element of that array
public int extreme(int[] A)
{
    int N = A.Length;
    if (N == 0)
    {
        return -1;
    }
    else
    {
        int average = CalculateAverage(A);
        return FindExtremes(A, average);
    }
}

// Calculate the average of integerArray
private int CalculateAverage(int[] integerArray)
{
    int sum = 0;
    foreach (int value in integerArray)
    {
        sum += value;
    }
    return Convert.ToInt32(sum / integerArray.Length);
}

// Find the extreme element of integerArray
private int FindExtremes(int[] integerArray, int average)
{
    int Index = -1;
    int ExtremeElement = integerArray[0];
    for (int i = 0; i < integerArray.Length; i++)
    {
        int absolute = Math.Abs(integerArray[i] - average);
        if (absolute > ExtremeElement)
        {
            ExtremeElement = integerArray[i];
            Index = i;
        }
    }
    return Index;
}
You are almost certainly better off using a vector. Possibly maintaining an auxiliary vector of indexes to be removed between median calculations so you can delete them in batches. New additions can also be put into an auxiliary vector, sorted, then merged in.
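For instance, the "merge additions in batches" half of that idea might look like this (a sketch; the names data, pending and median_with_pending are mine; batched removals would be handled similarly before the merge):

#include <algorithm>
#include <cstddef>
#include <vector>

// keep `data` sorted; buffer insertions in `pending` and fold them in
// just before the next median query
int median_with_pending(std::vector<int>& data, std::vector<int>& pending)
{
    std::sort(pending.begin(), pending.end());
    const auto old_end = static_cast<std::ptrdiff_t>(data.size());
    data.insert(data.end(), pending.begin(), pending.end());
    std::inplace_merge(data.begin(), data.begin() + old_end, data.end());
    pending.clear();
    return data[data.size() / 2];   // assumes non-empty; upper median for even sizes
}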

Optimizing C code [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Suppose we have an array of numbers, say {1,2,3}, and we want to equalize the numbers in the least number of turns possible, where a "turn" is defined as follows:
In a turn, you need to fix the value of one of the elements as is, and increment every other number by 1.
Considering the example already mentioned, A={1,2,3}, the goal is to equalize the elements. What I've already done is formulate the logic: the way to use the minimum number of turns is to hold the maximum number in each turn.
Iteration 1: Hold A[2]=3. Array at end of iteration => {2,3,3}
Iteration 2: Hold A[2]=3. Array at end of iteration => {3,4,3}
Iteration 3: Hold A[1]=4. Array at end of iteration => {4,4,4}
So,number of turns taken = 3
The code I've written is as follows:
#include <iostream>
#include <stdio.h>

int findMax(int *a, int n)
{
    int i, max;
    max = 1;
    for (i = 2; i <= n; i++)
    {
        if (a[i] > a[max])
        {
            max = i;
        }
    }
    return max;
}

int equality(int *a, int n)
{
    int i;
    for (i = 1; i < n; i++)
    {
        if (a[i] != a[i + 1]) return 0;
    }
    return 1;
}

int main()
{
    int a[100], i, count, t, posn_max, n, ip = 0;
    scanf("%d", &t);
    while (ip < t)
    {
        count = 0;
        scanf("%d", &n);
        for (i = 1; i <= n; i++)
        {
            scanf("%d", &a[i]);
        }
        while (equality(a, n) == 0)
        {
            posn_max = findMax(a, n);
            for (i = 1; i <= n; i++)
            {
                if (i != posn_max)
                {
                    a[i] = a[i] + 1;
                }
            }
            count++;
        }
        printf("%d\n", count);
        ip++;
    }
    return 0;
}
This gives me the correct answer I need alright. But I want to optimize it further.
My time limit is 1.0 s, but the judge tells me my code takes 1.01 s. Can anyone help me out?
As far as I can see, I've already used scanf/printf instead of cin/cout to optimize the input/output part. But what else should I be doing better?
In your algorithm, you are increasing all numbers except for the maximum.
If you do it the other way around, decreasing the maximum and leaving the rest of the numbers, the result should be the same (but with far fewer memory/array operations)!
To make it even faster, you can get rid of the array operations completely (as Ivaylo Strandjev also suggests): find the minimum number; by the idea above (decreasing instead of increasing), the number of decrements needed to bring every number down to that minimum is exactly the number of turns. So, after finding the minimum, you need one loop to calculate the answer.
Take your example of {1,2,3}
The minimum is 1
Number of turns: (1-1)+(2-1)+(3-1) = 0 + 1 + 2 = 3
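A sketch of that counting-only approach, keeping the original I/O structure (no per-turn array updates at all; variable names are mine):

#include <stdio.h>

int main()
{
    int t;
    scanf("%d", &t);
    while (t--)
    {
        int n, a[100];
        scanf("%d", &n);
        for (int i = 0; i < n; i++)
            scanf("%d", &a[i]);

        int min = a[0];
        for (int i = 1; i < n; i++)      /* pass 1: find the minimum */
            if (a[i] < min) min = a[i];

        long long turns = 0;
        for (int i = 0; i < n; i++)      /* pass 2: each element needs (a[i] - min) turns */
            turns += a[i] - min;

        printf("%lld\n", turns);
    }
    return 0;
}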
If you are really clever, it is possible to calculate the number of turns directly when inputting the numbers and keeping track of the current minimum number... Try it! ;)
You only care about the count, not about the actual actions you need to perform. So instead of performing the moves one by one, try to find a way to count the number of moves without performing them. The code you wrote will not pass within the time limit no matter how well you optimize it. The maximum-element observation you've made will help you along the way.
Besides the other comments, if I get this right and your code is just a little bit too slow, here are two optimizations which should help you.
First, you can combine equality() and findMax() and only scan once through the array instead of your current worst case (twice).
Second, you can split the "increase" loop into two parts (below and above the max position). This will remove the effort to check the position in the loop.
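For illustration, a sketch of those two changes applied to the original code (same 1-based indexing; this only shaves constant factors, the algorithm itself is unchanged):

/* one scan both finds the position of the maximum and checks for equality */
int findMaxAndCheckEqual(int *a, int n, int *all_equal)
{
    int max = 1;
    *all_equal = 1;
    for (int i = 2; i <= n; i++)
    {
        if (a[i] != a[1]) *all_equal = 0;
        if (a[i] > a[max]) max = i;
    }
    return max;
}

/* ...and inside main, the increment loop split around posn_max: */
for (i = 1; i < posn_max; i++) a[i] = a[i] + 1;
for (i = posn_max + 1; i <= n; i++) a[i] = a[i] + 1;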
1) Try unrolling the loops
2) Can you use SIMD instructions? That would really speed this code up
I would do the printf in a separate thread, since it's an I/O operation and much slower than your calculations.
It also does not demand complicated management (e.g. a producer-consumer queue), since you only pass the ordered counts from 0 up to the last one.
Here's the pseudo-code:
volatile int m_count = 0;
volatile bool isExit = false;

void ParallelPrint()
{
    int currCount = 0;
    while (!isExit)
    {
        while (currCount < m_count)
        {
            currCount++;
            printf("%d\n", currCount);
        }
        Sleep(0); // just a context switch
    }
}
Start the thread before the scanf("%d",&t); (I guess this initialization time is not counted), and stop it by setting isExit = true; before returning from main().

Test that checks that a method returns the full range of numbers

I wrote a method that returns a random number between two given numbers. Here is its header:
int NumRange(int low,int high);
I want to check that the method really returns every value in the range between those two numbers,
so I wrote a test (below), but in my opinion it's too complicated. Maybe there is another way to check it, or maybe mine is the best. :)
TEST_F(RandomGeneratorTest, fill_all_the_range)
{
    // In a set there are no duplicates
    std::set<int> results;
    int i;
    for (i = 0; i < 1000; i++)
        results.insert(NumRange(0, 9));
    EXPECT_EQ(10, results.size());
}
Thank you, and sorry about my poor English.
EDIT:
Following a question, here is another test that I wrote to complete the test scenario.
TEST_F(RandomGeneratorTest, only_in_the_range)
{
    int i, result;
    for (i = 0; i < 1000; i++)
    {
        result = NumRange(0, 9);
        EXPECT_TRUE((result <= 9) && (result >= 0));
    }
}
EDIT 2:
Based on @cyborg's answer (thank you), I made a histogram test (below). But it's very complicated, and a test, in my opinion, should be very simple, so my question is still open: I am looking for a simple way to check this.
TEST_F(RandomGeneratorTest, fill_all_of_the_histogram_range)
{
    int results[10] = {0};
    int i, approxRes;
    for (i = 0; i < 100000000; i++)
        results[NumRange(0, 9)] += 1;
    for (i = 0; i < 10; i++)
    {
        approxRes = results[i] / 10000;
        EXPECT_TRUE((approxRes <= 1001) && (approxRes >= 999));
    }
}
You could test for a uniform distribution instead of for a complete coverage. To do this, compute a histogram and check that every bin gets more or less the expected amount.
Edit:
If you want more serious tests, see this answer: https://stackoverflow.com/a/1477505/907578
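For example, a rough sketch of such a test: a chi-squared goodness-of-fit check against the uniform distribution (16.92 is the 95% quantile of the chi-squared distribution with 9 degrees of freedom; like any statistical test it can occasionally fail even for a correct generator, so treat it as a smoke test):

TEST_F(RandomGeneratorTest, chi_squared_uniformity)
{
    const int kBins = 10;
    const int kSamples = 100000;
    int counts[kBins] = {0};
    for (int i = 0; i < kSamples; i++)
        counts[NumRange(0, 9)] += 1;

    const double expected = static_cast<double>(kSamples) / kBins;
    double chi2 = 0.0;
    for (int i = 0; i < kBins; i++)
    {
        double diff = counts[i] - expected;
        chi2 += diff * diff / expected;
    }
    EXPECT_LT(chi2, 16.92);  // 9 degrees of freedom, 5% significance level
}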