I found code for LIS (longest increasing subsequence) in a book, but I am not quite able to work out the proof of correctness. Can someone help me out with that? All the code does is insert the new element into the set and, if the new element is not the maximum, delete the element next to it.
set<int> s;
set<int>::iterator it;
for (int i = 0; i < n; i++)
{
    s.insert(arr[i]);
    it = s.find(arr[i]);
    it++;
    if (it != s.end())
        s.erase(it);
}
cout << s.size() << endl;
n is the size of the sequence and arr is the sequence. I don't think this code will work if we don't have to find "strictly" increasing subsequences. Can we modify the code to find increasing subsequences in which equality is allowed?
EDIT: the algorithm works only when the input elements are distinct.
There are several solutions to LIS.
The most typical is the O(N^2) algorithm using dynamic programming, where for every index i you calculate the "longest increasing subsequence ending at index i".
You can speed this up to O(N log N) using clever data structures or binary search.
Your code bypasses this and only calculates the length of the LIS.
Consider the input "1 3 4 5 6 7 2": the contents of the set at the end will be "1 2 4 5 6 7", which is not the LIS, but the length is correct.
The proof goes by induction on i, as follows:
After the i-th iteration, the j-th smallest element of the set is the smallest possible end of an increasing subsequence of length j within the first i elements of the array.
Consider the input "1 3 2". After the second iteration we have the set "1 3", so 1 is the smallest possible end of an increasing subsequence of length 1 and 3 is the smallest possible end of an increasing subsequence of length 2.
After the third iteration we have the set "1 2", where now 2 is the smallest possible end of an increasing subsequence of length 2.
I hope you can do the induction step by yourself :)
The proof is relatively straightforward: consider set s as a sorted list. We can prove it with a loop invariant. After each iteration of the algorithm, s[k] contains the smallest element of arr that ends an ascending subsequence of length k in the sub-array from zero to the last element of arr that we have considered so far. We can prove this by induction:
After the first iteration, this statement is true, because s will contain exactly one element, which is a trivial ascending sequence of one element.
Each iteration can change the set in one of two ways: it could expand it by one in cases when arr[i] is the largest element found so far, or replace an existing element with arr[i], which is smaller than the element that has been there before.
When an extension of the set occurs, it happens because the current element arr[i] can extend the longest ascending subsequence found so far. When a replacement happens at position k (the position of arr[i] in the sorted set), it happens because arr[i] ends an ascending subsequence of length k and is smaller than the old s[k] that used to end the previous "best" ascending subsequence of length k.
With this invariant in hand, it's easy to see that s contains as many elements as the longest ascending subsequence of arr after the entire arr has been exhausted.
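The same invariant is perhaps easiest to see with a sorted vector and std::lower_bound instead of a set. Here is a minimal sketch of that formulation (my own code, not the book's): tails[k-1] plays the role of the k-th smallest element of the set, and overwriting the first element >= arr[i] corresponds to erasing the successor of the inserted element.
#include <algorithm>
#include <iostream>
#include <vector>

// tails[k-1] = smallest possible end of an increasing subsequence of length k
int lis_length(const std::vector<int>& arr)
{
    std::vector<int> tails;
    for (int x : arr) {
        auto it = std::lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end())
            tails.push_back(x);   // x extends the longest subsequence seen so far
        else
            *it = x;              // x is a smaller end for a subsequence of this length
    }
    return (int)tails.size();
}

int main()
{
    std::cout << lis_length({1, 3, 4, 5, 6, 7, 2}) << std::endl;  // prints 6
}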
This code is an O(n log n) solution for LIS, but if you want to find a non-strictly increasing subsequence the implementation has a problem, because std::set doesn't allow duplicate elements. Here is code that works.
#include <iostream>
#include <set>
#include <algorithm>
using namespace std;

int main()
{
    int arr[] = {4, 4, 5, 7, 6};
    int n = 5;
    multiset<int> s;
    multiset<int>::iterator it;
    for (int i = 0; i < n; i++)
    {
        s.insert(arr[i]);
        // use the member upper_bound: std::upper_bound on multiset iterators
        // is linear and would break the O(n log n) bound
        it = s.upper_bound(arr[i]);
        if (it != s.end())
            s.erase(it);
    }
    cout << s.size() << endl;
    return 0;
}
Problem Statement:
For A(n): a0, a1, …, a(n-1) we need to find the LIS:
find the longest subsequence of A(n) such that ai < aj whenever i < j.
For example: 10, 11, 12, 9, 8, 7, 5, 6
The LIS is 10, 11, 12.
This is an O(N^2) solution based on DP.
1 Finding subproblems
Consider D(i): the length of the LIS of (a0 to ai) that includes ai as its last element.
2 Recurrence relation
D(i) = 1 + max{ D(j) : j < i and aj < ai } (or just 1 if no such j exists)
3 Base case
D(0) = 1;
Check out this link for the code:
https://innosamcodes.wordpress.com/2013/07/06/longest-increasing-subsequence/
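The linked post has the author's own code; the following is just a minimal sketch of the recurrence above (my code, not taken from the link):
#include <algorithm>
#include <iostream>
#include <vector>

// D[i] = length of the LIS that ends at index i
int lis_dp(const std::vector<int>& a)
{
    const int n = (int)a.size();
    if (n == 0) return 0;
    std::vector<int> D(n, 1);                      // base case: every element alone
    for (int i = 1; i < n; ++i)
        for (int j = 0; j < i; ++j)
            if (a[j] < a[i])
                D[i] = std::max(D[i], D[j] + 1);   // the recurrence
    return *std::max_element(D.begin(), D.end());
}

int main()
{
    std::cout << lis_dp({10, 11, 12, 9, 8, 7, 5, 6}) << std::endl;  // prints 3
}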
Related
Question
I have two arrays of integers A[] and B[]. Array B[] is fixed; I need to find the permutation of A[] which is lexicographically smaller than B[] and nearest to B[]. Here is what I mean:
for i in (0 <= i < n)
abs(B[i]-A[i]) is minimum and A[] should be smaller than B[] lexicographically.
For Example:
A[]={1,3,5,6,7}
B[]={7,3,2,4,6}
So, the possible nearest permutation of A[] to B[] is
A[]={7,3,1,6,5}
My Approach
Try all permutations of A[] and compare each with B[]. But the time complexity would be O(n! * n).
So is there any way to optimize this?
EDIT
n can be as large as 10^5
First, build an ordered map of the counts of the distinct elements of A.
Then, iterate forward through array indices (0 to n−1), "withdrawing" elements from this map. At each point, there are three possibilities:
If i < n-1, and it's possible to choose A[i] == B[i], do so and continue iterating forward.
Otherwise, if it's possible to choose A[i] < B[i], choose the greatest possible value for A[i] < B[i]. Then proceed by choosing the largest available values for all subsequent array indices. (At this point you no longer need to worry about maintaining A[i] <= B[i], because we're already after an index where A[i] < B[i].) Return the result.
Otherwise, we need to backtrack to the last index where it was possible to choose A[i] < B[i], then use the approach in the previous bullet-point.
Note that, despite the need for backtracking, the very worst case here is three passes: one forward pass using the logic in the first bullet-point, one backward pass in backtracking to find the last index where A[i] < B[i] was possible, and then a final forward pass using the logic in the second bullet-point.
Because of the overhead of maintaining the ordered map, this requires O(n log m) time and O(m) extra space, where n is the total number of elements of A and m is the number of distinct elements. (Since m ≤ n, we can also express this as O(n log n) time and O(n) extra space.)
Note that if there's no solution, then the backtracking step will reach all the way to i == -1. You'll probably want to raise an exception if that happens.
Edited to add (2019-02-01):
In a now-deleted answer, גלעד ברקן summarizes the goal this way:
To be lexicographically smaller, the array must have an initial optional section from left to right where A[i] = B[i] that ends with an element A[j] < B[j]. To be closest to B, we want to maximise the length of that section, and then maximise the remaining part of the array.
So, with that summary in mind, another approach is to do two separate loops, where the first loop determines the length of the initial section, and the second loop actually populates A. This is equivalent to the above approach, but may make for cleaner code. So:
Build an ordered map of the counts of the distinct elements of A.
Initialize initial_section_length := -1.
Iterate through the array indices 0 to n−1, "withdrawing" elements from this map. For each index:
If it's possible to choose an as-yet-unused element of A that's less than the current element of B, set initial_section_length equal to the current array index. (Otherwise, don't.)
If it's not possible to choose an as-yet-unused element of A that's equal to the current element of B, break out of this loop. (Otherwise, continue looping.)
If initial_section_length == -1, then there's no solution; raise an exception.
Repeat step #1: re-build the ordered map.
Iterate through the array indices from 0 to initial_section_length-1, "withdrawing" elements from the map. For each index, choose an as-yet-unused element of A that's equal to the current element of B. (The existence of such an element is ensured by the first loop.)
For array index initial_section_length, choose the greatest as-yet-unused element of A that's less than the current element of B (and "withdraw" it from the map). (The existence of such an element is ensured by the first loop.)
Iterate through the array indices from initial_section_length+1 to n−1, continuing to "withdraw" elements from the map. For each index, choose the greatest element of A that hasn't been used yet.
This approach has the same time and space complexities as the backtracking-based approach.
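For concreteness, here is a minimal sketch of that two-loop approach (the function name, the std::map used as the "ordered map of counts", and the withdraw helper are mine, not from the answer):
#include <iterator>
#include <map>
#include <stdexcept>
#include <vector>

std::vector<int> closest_smaller_permutation(const std::vector<int>& A,
                                             const std::vector<int>& B)
{
    const int n = (int)A.size();
    std::map<int, int> counts;                     // ordered map of element counts of A
    for (int x : A) ++counts[x];

    // First loop: find the longest prefix that can match B exactly while still
    // leaving some index where an element smaller than B[i] is available.
    int initial_section_length = -1;
    {
        std::map<int, int> avail = counts;
        for (int i = 0; i < n; ++i) {
            if (!avail.empty() && avail.begin()->first < B[i])
                initial_section_length = i;        // a smaller element is still available here
            auto it = avail.find(B[i]);            // can we keep matching B[i] exactly?
            if (it == avail.end()) break;
            if (--it->second == 0) avail.erase(it);
        }
    }
    if (initial_section_length == -1)
        throw std::runtime_error("no lexicographically smaller permutation exists");

    // Second loop: actually build the answer.
    std::vector<int> result(n);
    std::map<int, int> avail = counts;
    auto withdraw = [&](std::map<int, int>::iterator it) {
        int v = it->first;
        if (--it->second == 0) avail.erase(it);
        return v;
    };
    for (int i = 0; i < initial_section_length; ++i)
        result[i] = withdraw(avail.find(B[i]));    // guaranteed to exist by the first loop
    {
        auto it = avail.lower_bound(B[initial_section_length]);
        --it;                                      // greatest element < B[i]; exists by construction
        result[initial_section_length] = withdraw(it);
    }
    for (int i = initial_section_length + 1; i < n; ++i)
        result[i] = withdraw(std::prev(avail.end()));  // greatest remaining element
    return result;
}
For A = {1,3,5,6,7} and B = {7,3,2,4,6} this returns {7,3,1,6,5}, as in the question.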
There are n! permutations of A[n] (fewer if there are repeated elements).
Use binary search over the range 0..n!-1 to determine the k-th lexicographic permutation of A[] (algorithms for this are easy to find) which is the closest one below B[].
Perhaps in C++ you can exploit std::lower_bound
Based on the discussion in the comment section to your question, you seek an array made up entirely of elements of the vector A that is -- in lexicographic ordering -- closest to the vector B.
For this scenario, the algorithm becomes quite straightforward. The idea is the same as already mentioned in the answer of @ruakh (although his answer refers to an earlier, more complicated version of your question -- which is still displayed in the OP -- and is therefore itself more involved):
Sort A in descending order.
Loop over B and select the largest element of A that is smaller than or equal to B[i] (i.e. the closest from below). Remove that element from the list.
If no element in A is smaller than or equal to B[i], pick the largest remaining element.
Here is the basic implementation:
#include <string>
#include <vector>
#include <algorithm>
#include <iostream>   // for std::cout in main below

auto get_closest_array(std::vector<int> A, std::vector<int> const& B)
{
    std::sort(std::begin(A), std::end(A), std::greater<>{});
    auto select_closest_and_remove = [&](int i)
    {
        // A is sorted in descending order, so this finds the largest element <= i
        auto it = std::find_if(std::begin(A), std::end(A), [&](auto x) { return x <= i; });
        if (it == std::end(A))
        {
            // no element <= i: fall back to the largest remaining element
            it = std::max_element(std::begin(A), std::end(A));
        }
        auto ret = *it;
        A.erase(it);
        return ret;
    };
    std::vector<int> ret(B.size());
    for (int i = 0; i < (int)B.size(); ++i)
    {
        ret[i] = select_closest_and_remove(B[i]);
    }
    return ret;
}
Applied to the problem in the OP one gets:
int main()
{
    std::vector<int> A = {1, 3, 5, 6, 7};
    std::vector<int> B = {7, 3, 2, 4, 6};
    auto C = get_closest_array(A, B);
    for (auto i : C)
    {
        std::cout << i << " ";
    }
    std::cout << std::endl;
}
and it displays
7 3 1 6 5
which seems to be the desired result.
I made a simple bubble sort program; the code works, but I do not know if it's correct.
What I understand about the bubble sorting algorithm is that it checks an element and the other element beside it.
#include <iostream>
#include <array>
#include <cstdlib>   // for system()
using namespace std;

int main()
{
    int a, b, c, d, e, smaller = 0, bigger = 0;
    cin >> a >> b >> c >> d >> e;
    int test1[5] = { a, b, c, d, e };
    for (int test2 = 0; test2 != 5; ++test2)
    {
        for (int cntr1 = 0, cntr2 = 1; cntr2 != 5; ++cntr1, ++cntr2)
        {
            if (test1[cntr1] > test1[cntr2]) /*if first is bigger than second*/
            {
                bigger = test1[cntr1];
                smaller = test1[cntr2];
                test1[cntr1] = smaller;
                test1[cntr2] = bigger;
            }
        }
    }
    for (auto test69 : test1)
    {
        cout << test69 << endl;
    }
    system("pause");
}
It is a bubblesort implementation. It just is a very basic one.
Two improvements:
the outer-loop iteration may be one shorter each time, since you're guaranteed that the last element of the previous iteration will be the largest.
when no swap is done during an iteration, you're finished. (which is part of the definition of bubblesort in wikipedia)
Some comments:
use better variable names (test2?)
use the size of the container or the range, don't hardcode 5.
using std::swap() to swap variables leads to simpler code.
Here is a more generic example using (random-access) iterators, with my suggested improvements and comments, and here is a version with the improvement proposed by Yves Daoust (iterate only up to the last swap), with debug prints.
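The linked examples are not reproduced in this post; the following is a small sketch in the same spirit (my code, not the author's): an iterator-based bubble sort that shrinks the range each pass and stops early when a pass performs no swap.
#include <algorithm>   // std::iter_swap
#include <iostream>
#include <vector>

template <typename RandomIt>
void bubble_sort(RandomIt first, RandomIt last)
{
    if (first == last) return;
    bool swapped = true;
    while (swapped && last - first > 1) {
        swapped = false;
        for (RandomIt it = first; it + 1 != last; ++it) {
            if (*(it + 1) < *it) {
                std::iter_swap(it, it + 1);
                swapped = true;
            }
        }
        --last;   // the largest element of this pass is now in place
    }
}

int main()
{
    std::vector<int> v{5, 1, 4, 2, 8};
    bubble_sort(v.begin(), v.end());
    for (int x : v) std::cout << x << ' ';   // prints: 1 2 4 5 8
    std::cout << '\n';
}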
The correctness of your algorithm can be explained as follows.
In the first pass (inner loop), the comparison T[i] > T[i+1] with a possible swap makes sure that the largest of T[i], T[i+1] is on the right. Repeating for all pairs from left to right makes sure that in the end T[N-1] holds the largest element. (The fact that the array is only modified by swaps ensures that no element is lost or duplicated.)
In the second pass, by the same reasoning, the largest of the N-1 first elements goes to T[N-2], and it stays there because T[N-1] is larger.
More generally, in the Kth pass, the largest of the N-K+1 first elements goes to T[N-K], stays there, and the next elements are left unchanged (because they are already increasing).
Thus, after N passes, all elements are in place.
This hints a simple optimization: all elements following the last swap in a pass are in place (otherwise the swap wouldn't be the last). So you can record the position of the last swap and perform the next pass up to that location only.
Though this change doesn't seem to improve a lot, it can reduce the number of passes. Indeed by this procedure, the number of passes equals the largest displacement, i.e. the number of steps an element has to take to get to its proper place (elements too much on the right only move one position at a time).
In some configurations, this number can be small. For instance, sorting an already sorted array takes a single pass, and sorting an array with all elements swapped in pairs takes two. This is an improvement from O(N²) to O(N) !
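A small sketch of that refinement (the variable names are mine, not from the answer): remember where the last swap happened and let the next pass go only that far.
#include <iostream>
#include <utility>   // std::swap

void bubble_sort_last_swap(int T[], int n)
{
    int bound = n;                     // T[bound..n-1] is already in place
    while (bound > 1) {
        int last_swap = 0;
        for (int i = 0; i + 1 < bound; ++i) {
            if (T[i] > T[i + 1]) {
                std::swap(T[i], T[i + 1]);
                last_swap = i + 1;     // everything beyond this index is in place
            }
        }
        bound = last_swap;             // the next pass stops here
    }
}

int main()
{
    int T[] = {2, 3, 4, 5, 1};
    bubble_sort_last_swap(T, 5);
    for (int x : T) std::cout << x << ' ';   // prints: 1 2 3 4 5
    std::cout << '\n';
}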
Yes. Your code works just like Bubble Sort.
Input: 3 5 1 8 2
Output after each iteration:
3 1 5 2 8
1 3 2 5 8
1 2 3 5 8
1 2 3 5 8
1 2 3 5 8
Actually, in the inner loop, we don't need to go all the way to the end of the array from the second iteration onwards, because the largest element from the previous iteration is already at the end. But that doesn't improve the time complexity much. So, you are good to go.
Small Informal Proof:
The idea behind your sorting algorithm is that you go through the array of values (left to right). Let's call it a pass. During the pass, pairs of values are checked and swapped to be in the correct order (higher on the right).
During the first pass the maximum value will be reached. When reached, the max will be higher than the value next to it, so they will be swapped. This means that the max becomes part of the next pair in the pass. This repeats until the pass is completed and the max moves to the right end of the array.
During the second pass the same is true for the second-highest value in the array. The only difference is that it will not be swapped with the max at the end. Now the two rightmost values are correctly set.
In every subsequent pass one more value is sorted into place on the right.
There are N values and N passes. This means that after N passes all N values will be sorted like:
{Nth largest, (N-1)th largest, ..., 2nd largest, largest}
No it isn't. It is worse. There is no point whatsoever in the variable cntr1. You should be using test1 here, and you should be referring to one of the many canonical implementations of bubblesort rather than trying to make it up for yourself.
For example:
array[] = {3, 9, 10, 12, 1, 4, 7, 2, 6, 5}
First, I need the maximum value = 12, then I need the maximum value among the rest of the array (1, 4, 7, 2, 6, 5), so value = 7, then the maximum value of the rest of the array, 6, then 5. In the end I need the series of these values, which gives back (12, 7, 6, 5).
How to get these numbers?
I have tried the following code, but it seems to go on indefinitely:
I think I'll need a recursive function but how can I do this?
max=0; max2=0;...
for (i = 0; i < array_length; i++) {
    if (matrix[i] >= max)
        max = matrix[i];
    else {
        for (j = i; j < array_length; j++) {
            if (matrix[j] >= max2)
                max2 = matrix[j];
            else {
                ...
                ...for if else for if else
                ...??
            }
        }
    }
}
This is how you would do that in C++11 by using the std::max_element() standard algorithm:
#include <vector>
#include <algorithm>
#include <iostream>

int main()
{
    int arr[] = {3, 5, 4, 12, 1, 4, 7, 2, 6, 5};
    auto m = std::begin(arr);
    while (m != std::end(arr))
    {
        m = std::max_element(m, std::end(arr));
        std::cout << *(m++) << std::endl;
    }
}
Here is a live example.
This is an excellent spot to use the Cartesian tree data structure. A Cartesian tree is a data structure built out of a sequence of elements with these properties:
The Cartesian tree is a binary tree.
The Cartesian tree obeys the heap property: every node in the Cartesian tree is greater than or equal to all its descendants.
An inorder traversal of a Cartesian tree gives back the original sequence.
For example, given the sequence
4 1 0 3 2
The Cartesian tree would be
4
 \
  3
 / \
1   2
 \
  0
Notice that this obeys the heap property, and an inorder walk gives back the sequence 4 1 0 3 2, which was the original sequence.
But here's the key observation: notice that if you start at the root of this Cartesian tree and start walking down to the right, you get back the number 4 (the biggest element in the sequence), then 3 (the biggest element in what comes after that 4), and the number 2 (the biggest element in what comes after the 3). More generally, if you create a Cartesian tree for the sequence, then start at the root and keep walking to the right, you'll get back the sequence of elements that you're looking for!
The beauty of this is that a Cartesian tree can be constructed in time Θ(n), which is very fast, and walking down the spine takes time only O(n). Therefore, the total amount of time required to find the sequence you're looking for is Θ(n). Note that the approach of "find the largest element, then find the largest element in the subarray that appears after that, etc." would run in time Θ(n^2) in the worst case if the input was sorted in descending order, so this solution is much faster.
Hope this helps!
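As a concrete illustration of that key observation (my code, not part of the original answer): the standard stack-based Θ(n) construction of a Cartesian tree keeps exactly the right spine of the tree on its stack, so after processing the whole array the stack, read bottom to top, is the sequence we want.
#include <iostream>
#include <vector>

std::vector<int> right_spine_of_cartesian_tree(const std::vector<int>& a)
{
    std::vector<int> spine;                  // acts as the construction stack
    for (int x : a) {
        // x becomes the new rightmost node; everything on the spine that is
        // smaller than x moves into x's left subtree, so pop it off
        while (!spine.empty() && spine.back() < x)
            spine.pop_back();
        spine.push_back(x);
    }
    return spine;                            // the maximum, then the maximum of what follows it, ...
}

int main()
{
    std::vector<int> a{3, 9, 10, 12, 1, 4, 7, 2, 6, 5};
    for (int x : right_spine_of_cartesian_tree(a))
        std::cout << x << ' ';               // prints: 12 7 6 5
    std::cout << '\n';
}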
If you can modify the array, your code will become simpler. Whenever you find a max, output that and change its value inside the original array to some small number, for example -MAXINT. Once you have output the number of elements in the array, you can stop your iterations.
std::vector<int> output;
for (auto i : array)
{
    // scan output from the back for the first element greater than i,
    // drop everything after that element, then append i
    auto pos = std::find_if(output.rbegin(), output.rend(), [i](int n) { return n > i; }).base();
    output.erase(pos, output.end());
    output.push_back(i);
}
Hopefully you can understand that code. I'm much better at writing algorithms in C++ than describing them in English, but here's an attempt.
Before we start scanning, output is empty. This is the correct state for an empty input.
We start by looking at the first unlooked-at element I of the input array. We scan backwards through output until we find an element G which is greater than I. If we find one, we erase every element after G, because I is the greatest element from G onward in what we've searched so far. If we find none, that means I is the greatest of the elements we've seen so far, so we erase the entire output. Then we append I to output. Repeat until the input array is exhausted.
I am currently trying to teach myself C++ and programming in general. So as a beginner project I'm making a genetic algorithm that creates an optimal AI for a Tic-Tac-Toe game. I am not enrolled in any programming classes, so this is not homework. I'm just really interested in AI.
So I am trying to create a multidimensional array of all permutations, in my case of 9 elements (9! rows). For example, for 3 elements it would be array[6][3] = { {1, 2, 3}, {1, 3, 2}, {2, 3, 1}, {2, 1, 3}, {3, 2, 1}, {3, 1, 2} }. Basically 3!, or 3*2*1, is the number of ways you can arrange 3 numbers in order.
I think that the solution should be simple, yet I'm stuck trying to come up with one. I have tried to swap them, tried to shift them right, increment, etc. The methods that work are the obvious ones and I don't know how to code them.
So if you know how to solve it, that's great. If you can give a coding format, that's better. Any help is appreciated.
Also, I'm coding this in C++.
You can use the next_permutation function from the STL:
http://www.cplusplus.com/reference/algorithm/next_permutation/
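A quick usage sketch (not taken from the linked page): std::next_permutation rearranges the range into the next lexicographically greater permutation and returns false once it wraps around, so starting from sorted order visits all n! permutations.
#include <algorithm>   // std::next_permutation
#include <iostream>
#include <iterator>    // std::begin, std::end

int main()
{
    int a[] = {1, 2, 3};                 // must start sorted to enumerate all 3! orderings
    do {
        for (int x : a) std::cout << x << ' ';
        std::cout << '\n';
    } while (std::next_permutation(std::begin(a), std::end(a)));
}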
I actually wrote an algorithm for this by hand once. Here it is:
bool incr(int z[NUM_INDICES]) {
    int a = NUM_INDICES - 1;
    for (int i = NUM_INDICES - 2; i >= 0; i--)
        if (z[i] > z[i+1]) a--;
        else break;
    if (a == 0) return false;
    int b = 2147483647, c;
    for (int i = a; i <= NUM_INDICES - 1; i++)
        if (z[i] > z[a-1] && z[i] - z[a-1] < b) {
            b = z[i] - z[a-1];
            c = i;
        }
    int temp = z[a-1]; z[a-1] = z[c]; z[c] = temp;
    qsort(z + a, NUM_INDICES - a, sizeof(int), comp);
    return true;
}
This is the increment function (i.e. you have an array like [3,2,4,1], you pass it to this, and it modifies it to [3,4,1,2]). It works off the fact that if the last d elements of the array are in descending order, then the next array (in "alphabetical" order) should satisfy the following conditions: 1) the last d+1 elements are a permutation among themselves; 2) the d+1-th to last element is the next highest element in the last d+1 elements; 3) the last d elements should be in ascending order. You can see this intuitively when you have something like [2,5,3, 8,7,6,4,1]: d = 5 in this case; the 3 turns into the next highest of the last d+1 = 6 elements; and the last d = 5 are arranged in ascending order, so it becomes [2,5,4, 1,3,6,7,8].
The first loop basically determines d. It loops over the array backwards, comparing consecutive elements, to determine the number of elements at the end that are in descending order. At the end of the loop, a becomes the first element that is in the descending order sequence. If a==0, then the whole array is in descending order and nothing more can be done.
The next loop determines what the d+1-th-to-last element should be. We specified that it should be the next highest element in the last d+1 elements, so this loop determines what that is. (Note that z[a-1] is the d+1-th-to-last element.) By the end of that loop, b contains the lowest z[i]-z[a-1] that is positive; that is, z[i] should be greater than z[a-1], but as low as possible (so that z[a-1] becomes the next highest element). c contains the index of the corresponding element. We discard b because we only need the index.
The next three lines swap z[a-1] and z[c], so that the d+1-th-to-last element gets the element next in line, and the other element (z[c]) gets to keep z[a-1]. Finally, we sort the last d elements using qsort (comp must be declared elsewhere; see C++ documentation on qsort).
If you want a hand-crafted function for generating all permutations, you can use:
#include <cstdio>
#define REP(i,n) FOR(i,0,n)
#define FOR(i,a,b) for(int i=a;i<b;i++)
#define GI ({int t;scanf("%d",&t);t;})   // statement expression: GCC extension

int a[22], n;

void swap(int & a, int & b) {
    int t = a; a = b; b = t;
}

void perm(int pos) {
    if (pos == n) {
        REP(i,n) printf("%d ", a[i]);
        printf("\n");
        return;
    }
    FOR(i,pos,n) {
        swap(a[i], a[pos]);   // place each remaining element at position pos in turn
        perm(pos + 1);
        swap(a[pos], a[i]);   // restore the original order before the next choice
    }
    return;
}

int main (int argc, char const* argv[]) {
    n = GI;
    REP(i,n) a[i] = GI;
    perm(0);
    return 0;
}
There is an array of n numbers. One number is repeated n/2 times and the other n/2 numbers are distinct. Find the repeated number. (The best solution is O(n) with exactly n/2+1 comparisons.)
The main problem here is the n/2+1 comparisons bound.
I have two solutions that are O(n), but they take more than n/2+1 comparisons.
1> Divide the numbers of the array into groups of three and compare within each group for equal elements.
E.g. the array is (1 10 3) (4 8 1) (1 1) ... so the number of comparisons required is 7, which is > n/2+1,
i.e. 8/2+1 = 5.
2> Compare a[i] with a[i+1] and a[i+2].
E.g. the array is 8 10 3 4 1 1 1 1:
9 comparisons in total.
I'd appreciate even a little help.
Thank you.
The required space complexity is O(1).
Of course, since all the others are distinct, you only have to compare pairs. If you find one pair with two equal numbers, you have your number.
Let's say you have the numbers indexed like this (it is just about the indexing):
[1,2,3,4,5,6,7,8,9,10]
You then make n/2 + 1 comparisons like this:
(1,2),(3,4),(5,6),(7,8),(9,7),(9,8)
If all compared pairs are distinct, you return the number at index 10.
The point is that when you get to the last 4 remaining numbers (indices 7, 8, 9, 10) you already know that at least two of them are equal, and you have 3 comparisons available for them.
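Here is a small sketch of that pairing scheme for even n >= 4 (the function name is mine); it uses exactly (n-2)/2 + 2 = n/2 + 1 comparisons:
#include <cassert>
#include <vector>

int find_repeated(const std::vector<int>& a)
{
    const int n = (int)a.size();
    assert(n >= 4 && n % 2 == 0);
    // Disjoint pairs over the first n-2 elements: (n-2)/2 comparisons.
    for (int i = 0; i + 1 < n - 2; i += 2)
        if (a[i] == a[i + 1]) return a[i];
    // No pair matched, so the repeated value occupies at most one slot per pair;
    // two more comparisons against a[n-2] settle the remaining four candidates.
    if (a[n - 2] == a[n - 4]) return a[n - 2];
    if (a[n - 2] == a[n - 3]) return a[n - 2];
    return a[n - 1];
}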
You just need to find the number that exists twice in the array.
You just start from the beginning, keeping a hash or something of the numbers you've already seen; when you get to a number that appears a second time, you stop.
Worst-case scenario: you see all the n/2 distinct numbers first, then the repeated number, and only the element after that is a repeat: n/2+2 steps (because the number you're looking for isn't part of the n/2 unique numbers).
I read the part about O(1) space complexity too late, but anyway, here is my solution:
#include <iterator>
#include <unordered_set>

template <typename ForwardIterator>
ForwardIterator find_repeated_element(ForwardIterator begin, ForwardIterator end)
{
    typedef typename std::iterator_traits<ForwardIterator>::value_type value_type;
    std::unordered_set<value_type> visited_elements;
    for (; begin != end; ++begin)
    {
        bool could_insert = visited_elements.insert(*begin).second;
        if (!could_insert) return begin;
    }
    return end;
}

#include <iostream>

int main()
{
    int test[] = {8, 10, 3, 4, 1, 1, 1, 1};
    int* end = test + sizeof test / sizeof *test;
    int* p = find_repeated_element(test, end);
    if (p == end)
    {
        std::cout << "there was no repeated element\n";
    }
    else
    {
        std::cout << "repeated element: " << *p << "\n";
    }
}
Due to the pigeonhole principle, you only need to test the first n/2+2 members of the array: there are only n/2+1 distinct values in total, so the repeated number is certain to appear at least twice among them. Loop through the members, using a hash table to keep track, and stop when you see a member for the second time.
Another O(n) solution (though not with exactly n/2+1 comparisons), but with O(1) extra space:
Because you have n/2 copies of that number, if you imagine the array sorted there are two scenarios for its position:
Either it's the lowest number, so it occupies positions 1 to n/2, or it isn't, and then its block of n/2 copies is guaranteed to cover position n/2+1.
So you can use a selection algorithm and retrieve the 4 elements of ranks n/2-1 through n/2+2.
A selection algorithm finds the element of a given rank k, so that fits.
Then the repeated number has to appear at least twice among those 4 numbers (a simple check).
So total complexity: 4*O(n) + O(1) = O(n)
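A minimal sketch of that idea using std::nth_element, which runs in O(n) on average (the function name is mine, and it assumes even n >= 4):
#include <algorithm>
#include <vector>

int find_repeated_by_selection(std::vector<int> a)   // by value: nth_element reorders the data
{
    const int n = (int)a.size();
    std::vector<int> mid;
    for (int k = n / 2 - 2; k <= n / 2 + 1; ++k) {   // the four ranks around the middle (0-indexed)
        std::nth_element(a.begin(), a.begin() + k, a.end());
        mid.push_back(a[k]);
    }
    for (int i = 0; i + 1 < (int)mid.size(); ++i)    // the repeated value appears twice among these
        for (int j = i + 1; j < (int)mid.size(); ++j)
            if (mid[i] == mid[j]) return mid[i];
    return -1;                                       // not reached for valid input
}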
Regarding the n/2+1 comparison bound and O(1) space complexity, you can (almost) meet the requirements with this approach:
Compare tuples:
a[x] == a[x+1], a[x+2] == a[x+3] ... a[n-1] == a[n]
If no match is found increase step:
a[x] == a[x+2], a[x+1] == a[x+3]
In the worst case this will run in n/2+2 comparisons (but always in O(1) space), e.g. when you have an array like this: [8 1 10 1 3 1 4 1]
qsort() the array, then scan for the first repeat.