Subset Problem -- Any Materials? - c++

Yes this is a homework/lab assignment.
I am interested in coming up with/finding an algorithm (one I can comprehend :P) that uses "backtracking" to solve the subset sum problem.
Anyone have some helpful resources? I've spent the last hour or so Googling without much luck finding something I think I could actually use. xD
Thanks SO!

Put the data in a vector.
Then write a routine that takes 3 arguments: the vector, an index, and a running sum.
Call this routine with the arguments: the vector, 0, 0.
The routine should do the following tasks:
check whether we have reached the end of the vector (index == size). If so, we can return immediately.
call itself with the arguments: the vector, index+1, sum+vector[index]
(in this case we add the element at the index to the sum and continue with the rest of the vector)
call itself with the arguments: the vector, index+1, sum
(in this case we don't add the element at the index to the sum, but still continue)
I deliberately left out 2 parts of this algorithm:
first, you should check the sum at some point, e.g. when you reach the end of the vector. If it equals the target sum, then you have found a correct subset.
second, you should also pass along knowledge of which elements you used, so that when the sum matches you can print out the subset. Consider using a std::set (or a vector of indices) for this.
Alternatively, you can use the return value of the function to signal whether a correct subset has already been found.
The complexity of the algorithm is O(2^N), so it will be very slow for big sets.
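Putting those pieces together, here is a minimal C++ sketch of the scheme above; the target value, the chosen-element bookkeeping, and all names are my own additions, not part of the original answer:

#include <iostream>
#include <vector>

// Backtracking over "take it / leave it" decisions for each element.
// `chosen` holds the elements picked so far; this is the bookkeeping
// the answer above leaves as an exercise.
bool subsetSum(const std::vector<int>& v, std::size_t index, int sum,
               int target, std::vector<int>& chosen)
{
    if (index == v.size()) {            // reached the end of the vector
        if (sum != target) return false;
        for (int x : chosen) std::cout << x << ' ';   // found a subset
        std::cout << '\n';
        return true;
    }
    chosen.push_back(v[index]);         // branch 1: take v[index]
    if (subsetSum(v, index + 1, sum + v[index], target, chosen))
        return true;
    chosen.pop_back();                  // undo the choice (backtrack)
    // branch 2: skip v[index]
    return subsetSum(v, index + 1, sum, target, chosen);
}

int main() {
    std::vector<int> v = {3, 34, 4, 12, 5, 2};
    std::vector<int> chosen;
    if (!subsetSum(v, 0, 0, 9, chosen))   // prints "4 5"
        std::cout << "no subset found\n";
}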
Have fun.

Related

Find the number of ways to partition the array

I want the number of ways to divide an array of positive integers into contiguous parts such that the maximum value of each part is greater than or equal to the maximum value of the part to its right.
For example,
6 4 1 2 1 can be divided into:
[[6,4,1,2,1]] [[6][4,1,2,1]] [[6,4][1,2,1]] [[6,4,1][2,1]] [[6,4,1,2][1]] [[6][4,1][2,1]] [[6][4][1,2,1]] [[6][4][1,2][1]] [[6][4,1,2][1]] [[6,4,1][2][1]] [[6][4,1][2][1]] [[6,4][1,2][1]]
which are total 12 ways of partitioning.
I tried a recursive approach but it fails because it exceeds the time limit; it also does not always give the correct output.
In another approach, I took the array, sorted it in decreasing order, and then for each element checked whether it lies to the right in the original array; if it does, I added its partitions to its previous numbers too.
I want an approach to solve this, any implementation or pseudocode or just an idea to do this would be appreciable.
I designed a simple recursive algorithm. I will try to explain it on your example:
First, check whether [6] is a possible/valid first part of a partition.
It is valid because the maximum element of [6] is greater than or equal to the maximum value of the remaining part [4,1,2,1].
Since it is a valid partition, we can use recursive part of the algorithm.
concatenate([6],algorithm([4,1,2,1]))
now the partitions
[[6][4,1,2,1]], [[6][4,1][2,1]], [[6][4,1][2][1]], [[6][4][1,2,1]], [[6][4][1,2][1]], [[6][4,1,2][1]]
are in our current solution set.
Next, check whether [6,4] is a possible/valid first part of a partition.
Continue like this until reaching the whole array [6,4,1,2,1].
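Here is a minimal C++ sketch of that recursion, just counting the partitions rather than materializing them; the function name and the counting-only simplification are mine:

#include <algorithm>
#include <iostream>
#include <vector>

// Try every valid first part a[lo..k] whose maximum is >= the maximum
// of the remaining suffix, then recurse on the remainder. Exponential,
// so fine only for small inputs.
long long countPartitions(const std::vector<int>& a, std::size_t lo)
{
    if (lo == a.size()) return 1;       // empty remainder: one way (stop)
    long long ways = 0;
    int leftMax = 0;
    for (std::size_t k = lo; k < a.size(); ++k) {
        leftMax = std::max(leftMax, a[k]);
        int rightMax = 0;               // maximum of the remaining suffix
        for (std::size_t j = k + 1; j < a.size(); ++j)
            rightMax = std::max(rightMax, a[j]);
        if (leftMax >= rightMax)
            ways += countPartitions(a, k + 1);
    }
    return ways;
}

int main() {
    std::vector<int> a = {6, 4, 1, 2, 1};
    std::cout << countPartitions(a, 0) << '\n';   // prints 12
}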

Time Complexity of Fibonacci series in Bottom-Up approach (DP)

Algorithm in Bottom up approach
a[0] = 0, a[1] = 1

integer fibo(n)
    if a[n] == null
        a[n] = fibo(n-1) + fibo(n-2)
    return a[n]
How does this algorithm have a time complexity of O(N)?
For 5 it makes 8 calls.
[Figure: passes of the Fibonacci series in the bottom-up approach]
fibo(5) makes 8 calls going from top to bottom and, in my view, another 8 returning from bottom to top, so 8+8=16 calls in total. So how the time complexity is O(N) is unclear to me.
I found many similar questions answered here, but none of them relate to my question.
Some of these are:
Time Complexity of Fibonacci Series
Time Complexity of Fibonacci Algorithm
Any help would be appreciated. Thanks!
There are a couple of quick things to mention before answering your question about the time complexity. The reason for this is that the time complexity at least partially depends on these answers.
First, there seems to be a bug in your program as posted: you have an array 'a' for the base conditions (Fibonacci numbers 0 and 1) and some array 'm' which is set in the fibo function but never used again. More importantly, when you reach n=1 or n=0, you return the value of m[n], which is entirely unknown. So, I'm going to assume the algorithm is rewritten as follows:
a[0] = 0, a[1] = 1

integer fibo(n)
    if a[n] == null
        a[n] = fibo(n-1) + fibo(n-2)
    return a[n]
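In C++ that might look like the following sketch; the fixed table size and the use of -1 in place of "null" are my assumptions:

#include <iostream>
#include <vector>

const int MAXN = 100;                    // assumed big enough for our n
std::vector<long long> a(MAXN, -1);      // -1 plays the role of "null"

long long fibo(int n)
{
    if (a[n] == -1)                      // not computed yet: recurse once
        a[n] = fibo(n - 1) + fibo(n - 2);
    return a[n];                         // memoized value from here on
}

int main() {
    a[0] = 0; a[1] = 1;                  // base conditions
    std::cout << fibo(5) << '\n';        // prints 5
}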
Okay, second problem. Let's assume that a is always defined as at least n+1 integers; there needs to be enough room for the incoming data. This is important because C++ will let you write past the end of the array. It's out of bounds and wrong, but C++ doesn't give you those sorts of protections; it is up to you as the programmer to verify boundary conditions like that. (I'm assuming C++ because this is tagged with c++. The code looks more like Python, which has its wrap-around indices, which are problematic on their own.)
Third, let's assume that you don't start with a new array 'a' for each run of the algorithm. This is important because if a stores already-calculated values then you will save time on calculation by not having to re-evaluate those values. That time savings is a great thing even if it won't affect how I calculate time complexity.
Great. Let's get started with your question. Let's use the image below to answer it. When you start the algorithm at n you are going to make two recursive calls for fibo(n-1) and fibo(n-2) BUT they do not happen simultaneously. Instead the first call for fibo(n-1) takes place and must be 100% complete before the second call for fibo(n-2) begins. That call is represented by the green line from n-1 on the nth line to the n-1th line.
Now, those green lines apply to each recursion down the line until you reach the fibo(1) call. That call terminates early because a[n] is NOT null. Finally the second call for fibo(0) is executed and it also terminates early because a[n] is not null. Okay, so much for the first set of recursive calls.
As each recursive call returns, the second call (represented by the orange broken line) is made, but a[n] is no longer null, so that call terminates early and the call returns up to the next layer.
So, let's count the number of calls. From n to 1 is n-1 recursive calls. At the end there is one additional call to fibo(0) so that is n recursive calls. Then on the way up there are n-2 additional calls which terminate early. So, altogether we have 2n-2 calls which is O(n).
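If you want to see that count concretely, here is a sketch that adds a call counter (the counter is my addition); it prints 9 for n=5, i.e. the one top-level call plus the 2n-2 = 8 recursive calls counted above:

#include <iostream>
#include <vector>

const int MAXN = 50;
std::vector<long long> a(MAXN, -1);
int calls = 0;                           // counts every entry into fibo

long long fibo(int n)
{
    ++calls;
    if (a[n] == -1)
        a[n] = fibo(n - 1) + fibo(n - 2);
    return a[n];
}

int main() {
    a[0] = 0; a[1] = 1;
    fibo(5);
    std::cout << calls << '\n';          // 9 = 1 top-level + (2*5 - 2)
}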
Of course, if you call fibo(k) and then fibo(k+x) you will only need to do the first 2x calls because everything from fibo(k) down is already known. It is a considerable savings after the initial investment. Any questions?
Regarding why O(2n) = O(n): that is a good follow-up. Big-O complexity rules say that we are interested in the order of magnitude when comparing efficiency. So, suppose that you were looking at n = 1000. Then n = 1000 and 2n = 2000, but n^2 = 1,000,000. 2n is more or less the same as n, but compared with n^2 there is a huge difference. Similarly, n+1 = 1001 isn't much different from n. So, in general, the leading term, the most important value in the equation, is what matters. We aren't really interested in lower-order terms, and we aren't really interested in constant coefficients, because they don't really affect the outcome.
If you still have questions, see this site for some additional information.
https://justin.abrah.ms/computer-science/big-o-notation-explained.html

Comparison of two common comparison algorithms and their Big O help please

Today my professor gave us 2 take-home questions as practice for the upcoming array unit in C, and I am wondering what algorithm each of these 2 problems resembles and what its Big O is. I am not coming here just expecting answers; I have ALREADY solved them, but I am not confident in my answers, so I will post them under each question. If I am wrong, please correct me and explain my error in thinking.
Question 1:
Suppose we go through an array's (box) elements (folders) one at a time, starting at the first element and comparing it with the next. If they are the same, the comparison ends; if they are not equal, we move on to comparing the next two elements, [2] and [3]. This process repeats and stops once the last two elements are compared. Note that the array IS already sorted by last name, and we are looking for the same first name! Example: [Harper Steven, Hawking John, Ingleton Steven]
My believed answer:
I believe it is O(n) because it's just going over the elements of the array, comparing array[0] to array[1], then array[2] to array[3], etc. This process is linear and continues until the last two are compared. It's definitely not O(log n) because we aren't multiplying or dividing by 2.
Final Question:
Suppose we have a box of folders, each containing info on one person. If we want to look for people with the same first name, we could start by placing a sticker on the first folder in the box and then going through the folders after it in order until we find a person with the same name. If we find a folder with the same name, we move that folder next to the folder with the sticker. Once we find ONE case where two people have the same name, we stop and go to sleep because we're lazy. If the first search fails, however, we simply remove the sticker, place it on the next folder, and continue as before. We repeat this process until the sticker is on the last folder, in the scenario where no two people have the same name.
This array is NOT sorted, and we compare the sticker folder folder[0] with each of the following folder[i] elements.
My answer:
I feel like this can't be O(n); maybe it's O(n^2), where it kinda feels like we keep repeating a process whose total work is proportional to the square of the input size (number of folders). I could be wrong here though >.>
You're right on both questions… but it would help to explain things a bit more rigorously. I don't know what the standards of your class are; you probably don't need an actual proof, but showing more detailed reasoning than "we aren't multiplying or dividing by two" never hurts. So…
In the first question, there's clearly nothing happening here but comparisons, so that's what we have to count.
And the worst case is obviously that you have to go through the whole array.
So, in that case, you have to compare a[0] == a[1], then a[1] == a[2], …, a[N-1] == a[N]. For each of N-1 elements, there's 1 comparison. That's N-1 steps, which is obviously O(N).
The fact that the array is sorted turns out to be irrelevant here. (Of course since they're not sorted by your search key—that is, they're sorted by last name, but you're comparing by first name—that was already pretty obvious.)
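For concreteness, here is a minimal C++ sketch of that linear pass (the adjacent-pair formulation from the answer; the function and parameter names are illustrative):

#include <string>
#include <vector>

// Compare each adjacent pair of first names once: N-1 comparisons
// for N entries, which is O(N).
bool hasAdjacentMatch(const std::vector<std::string>& firstNames)
{
    for (std::size_t i = 0; i + 1 < firstNames.size(); ++i)
        if (firstNames[i] == firstNames[i + 1])
            return true;                 // stop at the first match
    return false;                        // worst case: scanned everything
}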
In the second question, there are two things happening here: comparisons, and then moves.
For the comparisons, the worst case is that you have to do all N searches because there are no matches. As you say, we start with a[0] vs. a[1], …, a[N]; then a[1] vs. a[2], …, a[N], etc. So, N-1 comparisons, then N-2, and so on down to 0. So the total number of comparisons is sum(0…N-1), which is N*(N-1)/2, or N^2/2 - N/2, which is O(N^2).
For the moves, the worst case is that you find a match between a[0] and a[N]. In that case, you have to swap a[N] with a[N-1], then a[N-1] with a[N-2], and so on until you've swapped a[2] with a[1]. So, that's N-1 swaps, which is O(N), which you can ignore because you've already got an O(N^2) term.
As a side note, I'm not sure from your description whether you're talking about an array from a[0…N], or an array of length N, so a[0…N-1], so there could be an off-by-one error in both of the above. But it should be pretty easy to prove to yourself that it doesn't make a difference.
Scenario 2, a method of finding two matching items of arbitrary value, is indeed "quadratic". Each pass, looking for a match of one candidate against all the rest of the elements, is O(n), but you repeat that n times. The value of n drops as you go, so a detailed count of comparisons would be closer to n+(n-1)+(n-2)+…+1, which is (n+1)×(n/2), or ½(n²+n); but all we care about is the overall shape of the curve, so don't worry about the lower-order terms or the coefficients. It's O(n²).
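And a matching sketch of the quadratic "sticker" search (again, names are mine):

#include <string>
#include <vector>

// For each "sticker" folder i, scan every folder after it. Worst case
// (no match anywhere): (N-1)+(N-2)+...+1 = N(N-1)/2 comparisons, O(N^2).
bool hasAnyMatch(const std::vector<std::string>& firstNames)
{
    for (std::size_t i = 0; i < firstNames.size(); ++i)      // sticker here
        for (std::size_t j = i + 1; j < firstNames.size(); ++j)
            if (firstNames[i] == firstNames[j])
                return true;             // stop and go to sleep
    return false;
}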

Find that unique element from the 10^5 array size [duplicate]

What would be the best algorithm for finding a number that occurs only once in a list in which all other numbers occur exactly twice?
So, in a list of integers (let's take it as an array), each integer appears exactly twice except one. What is the best algorithm to find that one?
The fastest (O(n)) and most memory efficient (O(1)) way is with the XOR operation.
In C:
#include <stdio.h>
int main(void)
{
    int arr[] = {3, 2, 5, 2, 1, 5, 3};
    int num = 0, i;
    for (i = 0; i < 7; i++)
        num ^= arr[i];
    printf("%i\n", num);
}
This prints "1", which is the only one that occurs once.
This works because the first time you hit a number it marks the num variable with itself, and the second time it unmarks num with itself (more or less). The only one that remains unmarked is your non-duplicate.
By the way, you can expand on this idea to very quickly find two unique numbers among a list of duplicates.
Let's call the unique numbers a and b. First take the XOR of everything, as Kyle suggested. What we get is a^b. We know a^b != 0, since a != b. Choose any 1 bit of a^b, and use that as a mask -- in more detail: choose x as a power of 2 so that x & (a^b) is nonzero.
Now split the list into two sublists -- one sublist contains all numbers y with y&x == 0, and the rest go in the other sublist. By the way we chose x, we know that a and b are in different buckets. We also know that each pair of duplicates is still in the same bucket. So we can now apply ye olde "XOR-em-all" trick to each bucket independently, and discover what a and b are completely.
Bam.
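A C++ sketch of that two-unique-numbers extension (the function and variable names are my own):

#include <iostream>
#include <utility>
#include <vector>

// Everything appears twice except a and b. XOR of all values gives a^b;
// any set bit of a^b splits the list into two buckets that separate a
// from b while keeping each duplicate pair together.
std::pair<int, int> findTwoUniques(const std::vector<int>& v)
{
    int all = 0;
    for (int y : v) all ^= y;            // all == a ^ b, nonzero since a != b
    int x = all & (-all);                // lowest set bit of a^b: our mask
    int a = 0, b = 0;
    for (int y : v) {
        if (y & x) a ^= y;               // bucket with the bit set
        else       b ^= y;               // bucket with the bit clear
    }
    return {a, b};
}

int main() {
    auto [a, b] = findTwoUniques({3, 2, 5, 2, 9, 5, 3, 7});
    std::cout << a << ' ' << b << '\n';  // prints 7 and 9 (in some order)
}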
O(N) time, O(N) memory
HT = hash table

HT.clear()
go over the list in order
for each item you see:
    if HT.contains(item) -> HT.remove(item)
    else -> HT.add(item)
at the end, the single item left in the HT is the item you are looking for.
Note (credit @Jared Updike): this scheme will find all items that occur an odd number of times.
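In C++, a sketch of the same idea with std::unordered_set (assuming the input really does contain exactly one singleton):

#include <unordered_set>
#include <vector>

int findUnique(const std::vector<int>& list)
{
    std::unordered_set<int> ht;               // the HT from the pseudocode
    for (int item : list) {
        if (ht.count(item)) ht.erase(item);   // second sighting: remove
        else                ht.insert(item);  // first sighting: add
    }
    return *ht.begin();                       // the one item left over
}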
comment: I don't see how people can vote up solutions that give you O(N log N) performance. In which universe is that "better"?
I am even more shocked that the accepted answer is an O(N log N) solution...
I do agree, however, that if memory is required to be constant, then O(N log N) would be (so far) the best solution.
Kyle's solution would obviously not catch situations where the data set does not follow the rules. If all numbers were in pairs, the algorithm would give a result of zero, the exact same value as if zero were the only value occurring once.
If there were multiple values occurring once, or triples, the result would be erroneous as well.
Testing the data set might well end up with a more costly algorithm, either in memory or in time.
Csmba's solution does detect some erroneous data (none, or more than one, single-occurrence value), but not other cases (quadruples). Regarding his solution, depending on the implementation of HT, either memory and/or time is more than O(n).
If we cannot be sure about the correctness of the input set, sorting and counting, or using a hash table that counts occurrences with the integer itself as the hash key, would both be feasible.
I would say that using a sorting algorithm and then going through the sorted list to find the number is a good way to do it.
And now the problem is finding "the best" sorting algorithm. There are a lot of sorting algorithms, each of them with its strong and weak points, so this is quite a complicated question. The Wikipedia entry seems like a nice source of info on that.
Implementation in Ruby:
a = [1, 2, 3, 4, 123, 1, 2, .........]
t = a.length - 1
for i in 0..t
  s = a.index(a[i]) + 1
  b = a[s..t]
  w = b.include? a[i]
  if w == false
    puts a[i]
  end
end
You need to specify what you mean by "best" - to some, speed is all that matters and would qualify an answer as "best" - for others, they might forgive a few hundred milliseconds if the solution was more readable.
"Best" is subjective unless you are more specific.
That said:
Iterate through the numbers; for each number, search the list for that number, and when you reach a number whose search returns only one result, you are done.
Seems like the best you could do is to iterate through the list and, for every item, add it to a list of "seen" items, or remove it from the "seen" list if it's already there; at the end, your list of "seen" items will contain the singular element. This is O(n) in time and O(n) in space in the worst case (it will be much better if the list is sorted).
The fact that they're integers doesn't really factor in, since there's nothing special you can do with adding them up... is there?
Question
I don't understand why the selected answer is "best" by any standard. O(N*lgN) > O(N), and it changes the list (or else creates a copy of it, which is still more expensive in space and time). Am I missing something?
Depends on how large/small/diverse the numbers are though. A radix sort might be applicable which would reduce the sorting time of the O(N log N) solution by a large degree.
The sorting method and the XOR method have the same time complexity. The XOR method is only O(n) if you assume that bitwise XOR of two strings is a constant time operation. This is equivalent to saying that the size of the integers in the array is bounded by a constant. In that case you can use Radix sort to sort the array in O(n).
If the numbers are not bounded, then bitwise XOR takes time O(k) where k is the length of the bit string, and the XOR method takes O(nk). Now again Radix sort will sort the array in time O(nk).
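For reference, here is a compact C++ sketch of an LSD radix sort on unsigned 32-bit values (byte-at-a-time counting sort; all details here are my own choices, not from the answer above):

#include <array>
#include <cstdint>
#include <vector>

// Four counting-sort passes, one per byte, least significant first.
// Each pass is O(n), so the whole sort is O(n * k) with k = 4 here.
void radixSort(std::vector<std::uint32_t>& a)
{
    std::vector<std::uint32_t> tmp(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::array<std::size_t, 257> count{};              // zeroed buckets
        for (auto v : a) ++count[((v >> shift) & 0xFF) + 1];
        for (int b = 0; b < 256; ++b) count[b + 1] += count[b];
        for (auto v : a) tmp[count[(v >> shift) & 0xFF]++] = v;   // stable
        a.swap(tmp);
    }
}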
You could simply put the elements of the set into a hash until you find a collision. In Ruby, this is a one-liner.
def find_dupe(array)
  h = {}
  array.detect { |e| h[e] || (h[e] = true; false) }
end
So, find_dupe([1,2,3,4,5,1]) would return 1.
This is actually a common "trick" interview question, though. It is normally posed about a list of consecutive integers with one duplicate. In this case the interviewer is often looking for you to use the Gaussian sum of n integers, e.g. n*(n+1)/2 subtracted from the actual sum. The textbook answer is something like this:
def find_dupe_for_consecutive_integers(array)
  n = array.size - 1   # subtract one from array.size because of the dupe
  array.sum - n * (n + 1) / 2
end

Find top log(n) or top sqrt(n) values in an array of integers

Do you understand what this question means?
Find the top log(n) or top sqrt(n) values in an array of integers in less than linear time.
If you don't, here is the question: http://www.careercup.com/question?id=9337669.
Could you please help me understand this question, and then maybe get it solved? (Although once I understand it, I might get it solved too.)
Thanks for your time.
For a non-sorted array the complexity is linear, but it is possible to improve performance by observing that log and sqrt are both monotonically increasing functions; hence max(log(x), ...) equals log(max(x, ...)), and the same holds for sqrt.
So just find the maximum (linearly) and calculate its log and sqrt.
Assuming the array is not sorted, this problem is Omega(n), since you need to read all elements [finding the max is an Omega(n) problem in a non-sorted array, and this problem is not easier than finding the max]. So there is no sublinear solution for it.
There is an O(n) [linear] solution, using a selection algorithm:
1. Find the log(n)-th biggest element. // or the sqrt(n)-th biggest element...
2. Scan the array and return all elements bigger than or equal to it.
(*) This pseudocode is not correct if the array contains dupes, but trimming the dupes in a second step is fairly easy.
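A C++ sketch of that recipe using std::nth_element, which is a selection algorithm (typically introselect) with linear average time; the choice of k and all names are my own, and dupes are not trimmed:

#include <algorithm>
#include <cmath>
#include <functional>
#include <iostream>
#include <vector>

// Return the k = log2(n) largest values, unordered.
std::vector<int> topLogN(std::vector<int> a)
{
    std::size_t k = static_cast<std::size_t>(std::log2(a.size()));
    if (k == 0) k = 1;
    // Partition so the k largest elements occupy a[0..k-1].
    std::nth_element(a.begin(), a.begin() + k, a.end(), std::greater<int>());
    a.resize(k);
    return a;
}

int main() {
    std::vector<int> v = {7, 1, 9, 4, 12, 3, 8, 5};
    for (int x : topLogN(v)) std::cout << x << ' ';   // 12, 9, 8 in some order
    std::cout << '\n';
}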