Longest Increasing and Decreasing subsequence (Top-Down with memoization) - c++

Question - Given an array of integers A of length N, find the length of the longest subsequence which is first increasing, then decreasing.
Input:[1, 11, 2, 10, 4, 5, 2, 1]
Output: 6
Explanation:[1 2 10 4 2 1] is the longest subsequence.
I wrote a top-down approach. The recursive function takes six arguments: the vector A (containing the sequence), start (the current index), end (the array length), prev (the previously chosen value), large (the maximum value in the current subsequence), and an STL map m used for memoization.
For the backtracking approach I have two cases -
element is excluded - in this case we move to the next element (start+1); prev and large remain the same.
element is included - this has two sub-cases:
a. if the current value (A[start]) is greater than prev and prev == large, then this is the case
of an increasing sequence. The recurrence becomes 1 + LS(start+1, A[start], A[start]), i.e.
prev becomes the current element (A[start]) and the largest element also becomes A[start].
b. if the current value (A[start]) is less than prev and A[start] < large, then
this is the case of a decreasing sequence. The recurrence becomes 1 + LS(start+1, A[start],
large), i.e. prev becomes the current element (A[start]) and the largest element stays the same, i.e.
large.
Base cases -
if the current index is past the end of the array, i.e. start == end, then return 0.
if the sequence decreases and then increases again, then return 0,
i.e. if (current > previous and previous < maximum value) then return 0.
This is not an optimized approach, as map.find() on a string key is itself a costly operation. Can someone suggest an optimized top-down approach with memoization?
int LS(const vector<int> &A, int start, int end, int prev, int large, map<string, int> &m){
    if(start == end){ return 0; }
    if(A[start] > prev && prev < large){
        return 0;
    }
    string key = to_string(start) + '|' + to_string(prev) + '|' + to_string(large);
    if(m.find(key) == m.end()){
        int excl = LS(A, start+1, end, prev, large, m);
        int incl = 0;
        if((A[start] > prev) && (prev == large)){
            incl = 1 + LS(A, start+1, end, A[start], A[start], m);
        }else if((A[start] < prev) && (A[start] < large)){
            incl = 1 + LS(A, start+1, end, A[start], large, m);
        }
        m[key] = max(incl, excl);
    }
    return m[key];
}
int Solution::longestSubsequenceLength(const vector<int> &A) {
    map<string, int> m;
    return LS(A, 0, A.size(), INT_MIN, INT_MIN, m);
}

Not sure about top-down, but it seems we could use the classic LIS algorithm and just approach each element from "both sides", as it were. Here's the example, with each element written as the rightmost element of the increasing pass and as the leftmost element of the decreasing pass as we iterate from both directions. We can see three instances of a valid sequence of length 6 (the doubled peak counts once):
[1, 11, 2, 10, 4, 5, 2, 1]
1 11 11 10 4 2 1
1 2 2 1
1 2 10 10 4 2 1
1 2 4 4 2 1
1 2 4 5 5 2 1
1 2 2 1
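To make that concrete, here is a minimal C++ sketch of the idea (my own illustration, not code from the original posts): compute, for every index, the length of the longest increasing subsequence ending there and of the longest decreasing subsequence starting there, then combine the two, subtracting 1 because the peak is counted in both.

#include <algorithm>
#include <vector>
using namespace std;

// O(N^2) sketch of the "both sides" idea: inc[i] = LIS ending at i,
// dec[i] = longest decreasing subsequence starting at i.
int longestBitonicLength(const vector<int> &A) {
    int n = A.size();
    if (n == 0) return 0;
    vector<int> inc(n, 1), dec(n, 1);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < i; ++j)
            if (A[j] < A[i]) inc[i] = max(inc[i], inc[j] + 1);
    for (int i = n - 1; i >= 0; --i)
        for (int j = n - 1; j > i; --j)
            if (A[j] < A[i]) dec[i] = max(dec[i], dec[j] + 1);
    int best = 0;
    for (int i = 0; i < n; ++i)               // element i is the peak, counted twice
        best = max(best, inc[i] + dec[i] - 1);
    return best;
}

For [1, 11, 2, 10, 4, 5, 2, 1] this returns 6, matching the three length-6 combinations listed above.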

Related

How can we calculate, for every element in an array, the number of elements to the right that are greater than that element?

Given an array A with n values, let X(A) be an array that holds at index i the number of elements which are bigger than A[i] and are to its right in the original array A.
For example, if A is [10,12,8,17,3,24,19], then X(A) is [4,3,3,2,2,0,0].
How can I solve this in O(n log(n)) time and O(n) space complexity?
I can solve this easily in O(n^2) time and O(1) space by using a loop and, for every element, counting how many elements to its right are bigger, but I wasn't successful with those requirements.
I was thinking about using quicksort, which can be done in O(n log(n)) at worst, but I don't see how the sorted array would help here.
Note: regarding quicksort, the algorithm needs some tweaking to ensure O(n log(n)) in the worst case and not only on average.
Quick summary of the problem statement: Given an array A which contains N integers, construct an array X such that for every i, X[i] = the number of elements in A that have an index greater than i and are also greater than A[i].
One way to solve this problem would be to use a binary search tree. Start by iterating from the last to the first element, adding each element to the tree as we iterate. Every time we are at an element e, use the binary search tree to find how many elements already in it are greater than e.
Perhaps your first thought would be to use a std::multiset (not std::set, because we may have duplicate elements!), which is a self-balancing binary search tree that offers O(logN) insertion and O(logN) element finding. This seems like it would work for this algorithm, but it actually wouldn't. The reason is that when you call std::multiset::find(), it returns an iterator to the element in the set. Finding how many elements in the set are actually greater than the element would then take O(N) time, as measuring the distance from the iterator to the end of the set requires incrementing it repeatedly.
To solve this problem, we use an "indexed multiset", which is a slightly modified binary search tree such that we can find the index of an element in the multiset in O(logN) time while still supporting O(logN) insertion. Here's my code demonstrating this data structure:
#include <iostream>
#include <vector>
#include <ext/pb_ds/assoc_container.hpp>
using namespace std;
using namespace __gnu_pbds;

// I know this is kind of messy, but it's the general way to get a C++ indexed
// multiset without using an external library
typedef tree <int, null_type, less_equal <int>, rb_tree_tag,
              tree_order_statistics_node_update> indexed_set;

int main()
{
    int A_size;
    cin >> A_size;
    vector <int> A(A_size);
    for(int i = 0; i < A_size; ++i){
        cin >> A[i];
    }
    // Input Done
    indexed_set nums;
    vector <int> X(A_size);
    for(int i = A_size - 1; i >= 0; --i){
        // order_of_key returns the first index that A[i] would be at in a sorted list
        // with the same elements as nums.
        X[i] = nums.size() - nums.order_of_key(A[i]);
        nums.insert(A[i]);
    }
    for(int item : X){
        cout << item << " ";
    }
    cout << "\n";
    return 0;
}
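For the example from the question, running this with the input 7 followed by 10 12 8 17 3 24 19 should print 4 3 3 2 2 0 0.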
So, overall, the general strategy would be to
Iterate from the last element to the first element.
For every element, check in nums to see how many elements are greater than the current element. (O(logN))
Then, insert the current element and continue to iterate. (O(logN))
Clearly, the total time complexity of this algorithm is O(NlogN) and the space complexity is O(N).
A quick summary of the observations and insights of this method:
INSIGHT: If we iterate from the last to the first element (not the first to the last), the indexed-set will only contain elements to the right of the current element at any given iteration, which is exactly what we want. This saves us time because we don't need to worry about inserting all the elements at the beginning then removing them one by one if we were to iterate from left to right.
OBSERVATION: A std::set wouldn't suffice for the binary search tree in this algorithm because, although it provides O(logN) element finding, calculating the element's position in the set requires O(N) time in the worst case. An indexed-set, however, provides this "position-finding" operation in O(logN) time, as well as O(logN) insertion.
Telescope first mentioned (in the comments) that you can use a binary tree to achieve this. However, you can also do it with the following alternative approach:
Use an AVL tree;
Each node should store its element and the number of elements in its right sub-tree;
Iterate the array from the end to the beginning;
Add each element to the tree and update the sizes on the nodes accordingly.
While adding, compare the current element against the root; if the element is smaller than the root, then it is smaller than all the elements in the root's right sub-tree. In this case, take the size stored at that node (plus one for the root itself), proceed to the left sub-tree, and apply the same logic. Add the final count to the corresponding position in the array X;
If it is not smaller than the root, then increase the size of the root and proceed to the right sub-tree, applying the same logic there.
The time complexity is that of N insertions into the tree, hence O(n log(n)). The space complexity is naturally O(N).
Visualization :
A : [10,12,8,17,3,24,19];
X(A) [? ,? ,? ,? ,? ,? ,?]
Right Tree Node Size : S [?,?,?,?,?,?,?]
Inserting 19:
No elements in the right sub-tree therefore:
size of 19 = 0;
X(A) [? ,? ,? ,? ,? ,? ,0]
S [?, ?, ?, ?, ?, ?, 0]
Inserting 24:
24 is greater than the root (i.e., 19), so we increase the size of the root and proceed to the right sub-tree.
Size of 24 = 0
X(A) [? ,? ,? ,? ,? ,0 ,0]
S [?, ?, ?, ?, ?, 0, 1]
Inserting 3:
3 is smaller than the root (i.e., 19) and the size of the root is 1, therefore there are 2 elements bigger than 3 (the root and its right sub-tree). Let us go to the left;
Size of 3 = 0
X(A) [? ,? ,? ,? ,2 ,0 ,0]
S [? , ?, ?, ?, 0, 0, 1]
Inserting 17:
17 is smaller than the root (i.e., 19) and the size of the root is 1, therefore there are 2 elements bigger than 17 (the root and its right sub-tree). We go to the left; 17 is bigger than the sub-tree root (i.e., 3), so we increase the size of node 3 from 0 to 1 and go to its right sub-tree.
Size of 17 = 0
X(A) [? ,? ,? ,2 ,2 ,0 ,0]
S [? ,? ,? ,0 ,1 ,0 ,1]
Inserting 8:
8 is smaller than the root (i.e., 19) and the size of the root is 1, therefore there are 2 elements bigger than 8 (the root and its right sub-tree). We go to the left; 8 is bigger than the sub-tree root (i.e., 3), so we increase the size of node 3 from 1 to 2 and go to its right sub-tree. 8 is also smaller than the root of that sub-tree (i.e., 17), so 8 is smaller than three elements so far. We go to the left.
Size of 8 = 0
X(A) [? ,? ,3 ,2 ,2 ,0 ,0]
S [? ,? ,0 ,0 ,2 ,0 ,1]
With the insertion of node 8 a rotation is performed to balance the tree.
During the rotation the sizes are also updated: the size of node 8 changes from 0 to 1 and the size of node 3 from 2 to 0: S [? ,? ,1 ,0 ,0 ,0 ,1]
Inserting 12:
12 is smaller than the root (i.e., 19) and the size of the root is 1, therefore there are 2 elements bigger than 12 (the root and its right sub-tree). We go to the left; 12 is bigger than the sub-tree root (i.e., 8), so we increase the size of node 8 from 1 to 2 and go to its right sub-tree. 12 is also smaller than the root of that sub-tree (i.e., 17), so 12 is smaller than three elements so far. We go to the left.
Size of 12 = 0
X(A) [? ,3 ,3 ,2 ,2 ,0 ,0]
S [? ,0 ,2 ,0 ,0 ,0 ,1]
With the insertion of node 12 a double rotation is performed to balance the tree.
During the rotation the sizes are also updated: S [? ,0 ,1 ,2 ,0 ,0 ,1]
Inserting 10:
10 is smaller than the root (i.e., 17) and the size of the root is 2, therefore there are 3 elements bigger than 10 (the root and its right sub-tree). We go to the left; 10 is bigger than the sub-tree root (i.e., 8), so we increase the size of node 8 from 1 to 2 and go to its right sub-tree. 10 is also smaller than the root of that sub-tree (i.e., 12), so 10 is smaller than 4 elements so far. We go to the left.
Size of 10 = 0
X(A) [4 ,3 ,3 ,2 ,2 ,0 ,0]
S [0 ,0 ,2 ,2 ,0 ,0 ,1]
A possible C implementation (the AVL code was adapted from source):
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int key;
    struct Node *left;
    struct Node *right;
    int height;
    int size;   /* number of nodes in this node's right sub-tree */
};

int height(struct Node *N) {
    return (N == NULL) ? 0 : N->height;
}

int max(int a, int b) {
    return (a > b) ? a : b;
}

struct Node *newNode(int key) {
    struct Node *node = (struct Node *)malloc(sizeof(struct Node));
    node->key = key;
    node->left = NULL;
    node->right = NULL;
    node->height = 1;
    node->size = 0;
    return node;
}

struct Node *rightRotate(struct Node *y) {
    struct Node *x = y->left;
    struct Node *T2 = x->right;
    x->right = y;
    y->left = T2;
    y->height = max(height(y->left), height(y->right)) + 1;
    x->height = max(height(x->left), height(x->right)) + 1;
    /* y keeps its right sub-tree, so y->size is unchanged;
       x's new right sub-tree is y plus everything below it:
       its old right sub-tree (T2) plus y plus y's right sub-tree */
    x->size = x->size + y->size + 1;
    return x;
}

struct Node *leftRotate(struct Node *x) {
    struct Node *y = x->right;
    struct Node *T2 = y->left;
    y->left = x;
    x->right = T2;
    x->height = max(height(x->left), height(x->right)) + 1;
    y->height = max(height(y->left), height(y->right)) + 1;
    /* y keeps its right sub-tree, so y->size is unchanged;
       x's new right sub-tree is T2, i.e. its old count minus y and y's right sub-tree */
    x->size = x->size - y->size - 1;
    return y;
}

int getBalance(struct Node *N) {
    return (N == NULL) ? 0 : height(N->left) - height(N->right);
}

struct Node *insert(struct Node *node, int key, int *size) {
    if (node == NULL)
        return newNode(key);
    if (key < node->key) {
        /* node itself and everything in its right sub-tree are greater than key */
        *size = *size + node->size + 1;
        node->left = insert(node->left, key, size);
    }
    else if (key > node->key) {
        node->size++;
        node->right = insert(node->right, key, size);
    }
    else
        return node;   /* duplicate keys are ignored (the input here has distinct values) */
    node->height = 1 + max(height(node->left), height(node->right));
    int balance = getBalance(node);
    if (balance > 1 && key < node->left->key)
        return rightRotate(node);
    if (balance < -1 && key > node->right->key)
        return leftRotate(node);
    if (balance > 1 && key > node->left->key) {
        node->left = leftRotate(node->left);
        return rightRotate(node);
    }
    if (balance < -1 && key < node->right->key) {
        node->right = rightRotate(node->right);
        return leftRotate(node);
    }
    return node;
}

int main()
{
    int arraySize = 7;
    struct Node *root = NULL;
    int A[7] = {10, 12, 8, 17, 3, 24, 19};
    int X[7] = {0};
    for (int i = arraySize - 1; i >= 0; i--)
        root = insert(root, A[i], &X[i]);
    for (int i = 0; i < arraySize; i++)
        printf("%d ", X[i]);
    printf("\n");
    return 0;
}
OUTPUT:
4 3 3 2 2 0 0
Something similar to merge sort, where the counting is inserted after processing the right half and before processing the left half of the range, e.g.:
#include <algorithm>
#include <functional>
void count_greater_on_right( int* a, int* x, int begin, int end )
{
if( end - begin <= 2 )
{
if( end - begin == 2 && a[begin] < a[begin+1] )
{
x[begin]+=1; // specific
std::swap( a[begin], a[begin+1] );
}
return;
}
int middle =(begin+end+1)/2;
count_greater_on_right( a, x, middle, end );
// specific
{
for( int i=begin; i!=middle; ++i )
{
x[i]+=std::lower_bound( &a[middle], &a[end], a[i], std::greater<int>() )-&a[middle];
}
}
count_greater_on_right( a, x, begin, middle );
std::inplace_merge( &a[begin], &a[middle], &a[end], std::greater<int>() );
}
The code specific to the task is commented with // specific;
sorting in reverse (descending) order makes it slightly simpler IMHO;
it updates a, so if you need the original sequence, work on a copy.
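A small usage sketch for the function above (my addition, not part of the original answer), driving it on the question's example array:

#include <cstdio>

int main()
{
    int a[] = {10, 12, 8, 17, 3, 24, 19};
    int x[7] = {0};
    count_greater_on_right(a, x, 0, 7); // assumes the function above is in scope
    for(int v : x)
        std::printf("%d ", v);          // expected: 4 3 3 2 2 0 0
    std::printf("\n");
    return 0;
}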
The problem can be solved by dividing the array into sub-ranges and then sorting those sub-ranges. Let's look at the details.
Given array = [10, 12, 8, 17, 3, 24, 19]
Now divide the array into sub-ranges of length 4 and sort each sub-range as shown below:
Sub-range sorted array
.................... ...............
| 8 | 10 | 12 | 17 | | 3 | 19 | 24 |
.................... ...............
2 0 1 3 4 6 5 => index
Let's take the first entry of the sub-range sorted array, which is 8, and try to find the number of elements to its right that are greater than 8.
As you can see above, 8 belongs to the first sub-range, and because the sub-ranges are sorted, the elements within a sub-range are in ascending order but not in their original index order. It means that within the current sub-range we have to compare the index of every element to the right of 8 (in the sorted order) with the index of 8.
The index of 8 is 2, but 10 has index 0, which means 10 is to the left of 8 in the input array;
The index of 12 is also less than the index of 8, which means 12 is to the left of 8 in the input array;
The index of 17 is 3, which is greater than the index of 8, which means 17 is to the right of 8 in the input array and counts as a greater element.
After comparing the index of 8 with the indices of all elements to its right in the current sub-range, the count of greater elements on the right is 1. Let's look at the next sub-range.
After the sub-range of 8 things change completely: we know this next sub-range lies entirely to the right of the sub-range 8 belongs to, so we don't have to compare indices at all; everything here is to the right of 8 and we only have to find how many elements are greater than 8.
Now we compare the first element of the right sub-range with 8. Here the first element is 3, which is less than 8; but if the first element of a right sub-range is greater than the current element, we can add the whole size of that sub-range to the count directly.
Because the first element 3 is less than 8, we find the upper bound of 8 in the right sub-range, which is 19, and all the elements from 19 onward in that sub-range are greater than 8. There are two such elements, 19 and 24, so the count increases by two and becomes count = 3.
Finally, there are 3 elements to the right of 8 that are greater than it.
In a similar way the number of greater elements on the right can be found for every element, and the result array is:
x(A) = [4, 3, 3, 2, 2, 0, 0]
In conclusion, by dividing the input array into sorted sub-ranges, the greater elements on the right can be found with the following steps:
Compare the current element's index with the indices of all elements to its right within its own (sorted) sub-range;
Look at the first element of each sub-range to the right, and
i. if that first element is greater than the current element, then all elements of that sub-range are greater than the current element;
ii. if it is not, find the upper bound of the current element in that sub-range; the elements from the upper bound onward are greater than the current element.
Repeat step 2 for all sub-ranges to the right.
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
using std::cout;
std::vector<std::pair<int, std::size_t>> arrayOfSortedSubRange(std::size_t subRangeSize,
const std::vector<int>& numArr){
std::vector<std::pair<int, std::size_t>> res;
res.reserve(numArr.size());
for(std::size_t i = 0, numArrSize = numArr.size(); i < numArrSize; ++i){
res.emplace_back(numArr[i], i);
}
for(std::vector<std::pair<int, std::size_t>>::iterator it = res.begin(), endIt = res.end(); endIt != it;){
std::vector<std::pair<int, std::size_t>>::iterator rangeEndIt = it + std::min<std::ptrdiff_t>(endIt - it,
subRangeSize);
std::sort(it, rangeEndIt, [](const std::pair<int, std::size_t>& a, const std::pair<int, std::size_t>& b){
return a.first < b.first;});
it = rangeEndIt;
}
return res;
}
std::size_t rightGreterElmentCountOfNumber(int num, std::vector<std::pair<int, std::size_t>>::const_iterator rightSubRangeIt,
std::vector<std::pair<int, std::size_t>>::const_iterator endIt){
std::size_t count = 0;
std::vector<std::pair<int, std::size_t>>::const_iterator subRangEndIt = rightSubRangeIt +
std::min<std::ptrdiff_t>(endIt - rightSubRangeIt, 4);
while(endIt != rightSubRangeIt){
if(rightSubRangeIt->first > num){
count += subRangEndIt - rightSubRangeIt;
}
else{
count += subRangEndIt -
std::upper_bound(rightSubRangeIt, subRangEndIt, num, [](int num,
const std::pair<int, std::size_t>& element){ return num < element.first;});
}
rightSubRangeIt = subRangEndIt;
subRangEndIt += std::min<std::ptrdiff_t>(endIt - subRangEndIt, 4);
}
return count;
}
std::vector<std::size_t> rightGreaterElementCountForLessThanFiveNumbers(const std::vector<int>& numArr){
std::vector<std::size_t> res(numArr.size(), 0);
std::vector<std::size_t>::iterator resIt = res.begin();
for(std::vector<int>::const_iterator it = numArr.cbegin(), lastIt = it + (numArr.size() - 1); lastIt != it;
++it, ++resIt){
*resIt = std::count_if(it + 1, numArr.cend(), [num = *it](int rightNum){return rightNum > num;});
}
return res;
}
std::vector<std::size_t> rightGreaterElementCount(const std::vector<int>& numArr){
if(numArr.size() < 5){
return rightGreaterElementCountForLessThanFiveNumbers(numArr);
}
std::vector<std::size_t> resArr(numArr.size(), 0);
std::vector<std::pair<int, std::size_t>> subRangeSortedArr = arrayOfSortedSubRange(4, numArr);
for(std::vector<std::pair<int, std::size_t>>::const_iterator it = subRangeSortedArr.cbegin(),
endIt = subRangeSortedArr.cend(); endIt != it;){
std::vector<std::pair<int, std::size_t>>::const_iterator rightNextSubRangeIt = it + std::min<std::ptrdiff_t>(
endIt - it, 4);
for(std::vector<std::pair<int, std::size_t>>::const_iterator eleIt = it; rightNextSubRangeIt != eleIt; ++eleIt){
std::size_t count = std::count_if(eleIt, rightNextSubRangeIt, [index = eleIt->second](
const std::pair<int, std::size_t>& element){ return index < element.second;});
if(endIt != rightNextSubRangeIt){
count += rightGreterElmentCountOfNumber(eleIt->first, rightNextSubRangeIt, endIt);
}
resArr[eleIt->second] = count;
}
it += std::min<std::ptrdiff_t>(endIt - it, 4);
}
return resArr;
}
int main(){
std::vector<std::size_t> res = rightGreaterElementCount({10, 12, 8, 17, 3, 24, 19});
cout<< "[10, 12, 8, 17, 3, 24, 19] => [";
std::copy(res.cbegin(), res.cbegin() + (res.size() - 1), std::ostream_iterator<std::size_t>(cout, ", "));
cout<< res.back()<< "]\n";
}
Output:
[10, 12, 8, 17, 3, 24, 19] => [4, 3, 3, 2, 2, 0, 0]

Time complexity of an iterative algorithm

I am trying to find the Time Complexity of this algorithm.
The iterative algorithm produces all the bit-strings within a given Hamming distance from the input bit-string. It generates all increasing sequences 0 <= a[0] < ... < a[dist-1] < strlen(num) and inverts the bits at the corresponding indices.
The vector a is supposed to keep the indices whose bits have to be inverted. So if a contains the current index i, we print 1 instead of 0 and vice versa; otherwise we print the bit as is (see the else-part), as shown below:
// e.g. hamming("0000", 2);
void hamming(const char* num, size_t dist) {
assert(dist > 0);
vector<int> a(dist);
size_t k = 0, n = strlen(num);
a[k] = -1;
while (true)
if (++a[k] >= n)
if (k == 0)
return;
else {
--k;
continue;
}
else
if (k == dist - 1) {
// this is an O(n) operation and will be called
// (n choose dist) times, in total.
print(num, a);
}
else {
a[k+1] = a[k];
++k;
}
}
What is the Time Complexity of this algorithm?
My attempt says:
dist * n + (n choose t) * n + 2
but this seems not to be true, consider the following examples, all with dist = 2:
len = 3, (3 choose 2) = 3 * O(n), 10 while iterations
len = 4, (4 choose 2) = 6 * O(n), 15 while iterations
len = 5, (5 choose 2) = 10 * O(n), 21 while iterations
len = 6, (6 choose 2) = 15 * O(n), 28 while iterations
Here are two representative runs (with the print happening at the start of the loop):
000, len = 3
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
k = 0, total_iter = 5
vector a = 0 3
k = 1, total_iter = 6
vector a = 1 1
Paid O(n)
k = 1, total_iter = 7
vector a = 1 2
k = 0, total_iter = 8
vector a = 1 3
k = 1, total_iter = 9
vector a = 2 2
k = 0, total_iter = 10
vector a = 2 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gsamaras#pythagoras:~/Desktop/generate_bitStrings_HammDistanceT$ ./iter
0000, len = 4
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
Paid O(n)
k = 1, total_iter = 5
vector a = 0 3
k = 0, total_iter = 6
vector a = 0 4
k = 1, total_iter = 7
vector a = 1 1
Paid O(n)
k = 1, total_iter = 8
vector a = 1 2
Paid O(n)
k = 1, total_iter = 9
vector a = 1 3
k = 0, total_iter = 10
vector a = 1 4
k = 1, total_iter = 11
vector a = 2 2
Paid O(n)
k = 1, total_iter = 12
vector a = 2 3
k = 0, total_iter = 13
vector a = 2 4
k = 1, total_iter = 14
vector a = 3 3
k = 0, total_iter = 15
vector a = 3 4
The while loop is somewhat clever and subtle, and it's arguable that it's doing two different things (or even three if you count the initialisation of a). That's what's making your complexity calculations challenging, and it's also less efficient than it could be.
In the abstract, to incrementally compute the next set of indices from the current one, the idea is to find the last position i whose value a[i] is less than n-dist+i, increment a[i], and set the following entries to a[i]+1, a[i]+2, and so on.
For example, if dist=5, n=11 and your indexes are:
0, 3, 5, 9, 10
Then 5 is the last value less than n-dist+i (because n-dist is 6, and 10=6+4, 9=6+3, but 5<6+2).
So we increment 5, and set the subsequent integers to get the set of indexes:
0, 3, 6, 7, 8
Now consider how your code runs, assuming k=4
0, 3, 5, 9, 10
a[k] + 1 is 11, so k becomes 3.
++a[k] is 10, so a[k+1] becomes 10, and k becomes 4.
++a[k] is 11, so k becomes 3.
++a[k] is 11, so k becomes 2.
++a[k] is 6, so a[k+1] becomes 6, and k becomes 3.
++a[k] is 7, so a[k+1] becomes 7, and k becomes 4.
++a[k] is 8, and we continue to call the print function.
This code is correct, but it's not efficient, because k scuttles backwards and forwards as it searches for the highest index that can be incremented without causing an overflow in the higher indices. In fact, if the highest index that can be incremented is j positions from the end, the code uses a non-linear number of iterations of the while loop. You can easily demonstrate this yourself if you trace how many iterations of the while loop occur when n == dist for different values of n. There is exactly one line of output, but you'll see an O(2^n) growth in the number of iterations (in fact, you'll see 2^(n+1)-2 iterations).
This scuttling makes your code needlessly inefficient, and also hard to analyse.
Instead, you can write the code in a more direct way:
void hamming2(const char* num, size_t dist) {
int a[dist];
for (int i = 0; i < dist; i++) {
a[i] = i;
}
size_t n = strlen(num);
while (true) {
print(num, a);
int i;
for (i = dist - 1; i >= 0; i--) {
if (a[i] < n - dist + i) break;
}
if (i < 0) return;
a[i]++;
for (int j = i+1; j<dist; j++) a[j] = a[i] + j - i;
}
}
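As a quick usage note (my addition): assuming the same print helper as in the question, which flips the bits of num at the indices stored in a, hamming2("0000", 2) would produce the 4 choose 2 = 6 strings 1100, 1010, 1001, 0110, 0101, 0011, one per pass through the while loop.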
Now, each time through the while loop produces a new set of indexes. The exact cost per iteration is not straightforward, but since print is O(n), and the remaining code in the while loop is at worst O(dist), the overall cost is O(N_INCR_SEQ(n, dist) * n), where N_INCR_SEQ(n, dist) is the number of increasing sequences of natural numbers < n of length dist, i.e. n choose dist. Someone in the comments provides a link that gives a formula for this.
Notice that, given n, which represents the length, and t, which represents the required distance, the number of increasing, non-negative series of t integers between 1 and n (or, in index form, between 0 and n-1) is indeed n choose t, since we pick t distinct indices.
The problem occurs with your generation of those series:
-First, notice that for example in the case of length 4, you actually go over 5 different indices, 0 to 4.
-Secondly, notice that you are taking into account series with identical indices (in the case of t=2, it's 0 0, 1 1, 2 2 and so on), and generally, you would go through every non-decreasing series instead of through every increasing series.
So for calculating the TC of your program, make sure you take that into account.
Hint: try to make one-to-one correspondence from the universe of those series, to the universe of integer solutions to some equation.
If you need the direct solution, take a look here :
https://math.stackexchange.com/questions/432496/number-of-non-decreasing-sequences-of-length-m
The final solution is (n+t-1) choose (t), but noting the first bullet, in your program it's actually ((n+1)+t-1) choose (t), since you loop with one extra index.
Denote
((n+1)+t-1) choose (t) =: A , n choose t =: B
overall we get O(1) + B*O(n) + (A-B)*O(1)
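A quick sanity check of that formula against the iteration counts listed in the question (this check is mine, not part of the original answer): with t = 2, A = ((n+1)+t-1) choose t = (n+2) choose 2, which gives 10, 15, 21 and 28 for n = 3, 4, 5, 6 - exactly the observed while-loop counts - while B = n choose 2 gives 3, 6, 10 and 15 O(n) print calls.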

Longest Increasing Sub sequence in a range

I have come across a problem where we want to find the length of the longest increasing subsequence in a range:
an array A consisting of N integers,
M queries (Li, Ri);
for each query we want to find the length of the longest increasing subsequence in
the array A[Li], A[Li + 1], ..., A[Ri].
I implemented finding the subsequence using a DP approach:
// mind the REPN, LLD, these are macros I use for programming
// LLD = long long int
// REPN(i, a, b) = for (int i = a; i < b; ++i)
LLD a[n], dp[n];
REPN(i, 0, n)
{
scanf("%lld", &a[i]);
dp[i] = 1;
}
REPN(i, 1, n)
{
REPN(j, 0, i)
{
if(a[i] > a[j])
dp[i] = std::max(dp[j] + 1, dp[i]);
}
}
For example:
Array: 1 3 8 9 7 2 4 5 10 6
dplis: 1 2 3 4 3 2 3 4 5 5
max: 5
But if it was for range Li=2 & Ri=9
Then:
Array: 3 8 9 7 2 4 5 10
dplis: 1 2 3 2 1 2 3 4
max: 4
How can I determine the longest increasing subsequence in a sub-array?
PS: I don't want to recompute the whole dplis array; I want to use the original one, because too much computation will kill the purpose of the question.
One of the approaches was to construct a complete 2D DP array in which row k holds the LIS DP computed from position k onward, for k from 0 to n, but it fails on many cases due to TLE (Time Limit Exceeded):
REPN(k,0,n) {
REPN(i,k+1,n) {
REPN(j,k,i) {
if(a[i]>a[j]) dp[k][i]=std::max(dp[k][j]+1, dp[k][i]);
}
}
}
REPN(i,0,q) {
read(l); read(r);
LLD max=-1;
REPN(i,0,r) {
if(max<dp[l-1][i]) max=dp[l-1][i];
}
printf("%lld\n", max);
}
If you have any new logic/implementation, I will gladly study it in-depth. Cheers.

Mathematically rotate an array of ordered numbers

Suppose you have a set of numbers in a given domain, for example: [-4,4]
Also suppose that this set of numbers is in an array, and in numerical order, like so:
[-4, -3, -2, -1, 0, 1, 2, 3, 4]
Now suppose I would like to create a new zero-point for this set of numbers, like so: (I select -2 to be my new axis, and all elements are shifted accordingly)
Original: [-4, -3, -2, -1, 0, 1, 2, 3, 4]
Zeroed: [-2, -1, 0, 1, 2, 3, 4, -4, -3]
With the new zeroed array, let's say I have a function called:
"int getElementRelativeToZeroPosition(int zeroPos, int valueFromOriginalArray, int startDomain, int endDomain) {...}"
with example usage:
I am given 3 from the original array, and would like to see where it maps to on the zeroed array, with the zero at -2.
getElementRelativeToZeroPosition(-2, 3, -4, 4) = -4
Without having to create any arrays and move elements around for this mapping, how would I mathematically produce the desired result of the function above?
I would proceed this way:
Get index of original zero position
Get index of new zero position (ie. index of -2 in you example)
Get index of searched position (index of 3)
Compute move vector between new and original zero position
Apply move vector to searched position modulo the array size to perform the rotation
Provided your array is zero-based:
index(0) => 4
index(-2) => 2
index(3) => 7
array_size => 9
move_vector => index(0) - index(-2)
=> 4 - 2 => +2
new_pos(3) => (index(3) + move_vector) modulo array_size
=> (7 + 2) mod 9 => 0
value_at(0) => -4
That's it
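Here is a small C++ sketch of that arithmetic (my own illustration, not code from the original answer), using the function signature from the question; the local names such as indexZeroOld and move are just for explanation:

#include <cassert>

// Maps valueFromOriginalArray to the value sitting at its rotated position,
// following the index arithmetic described above.
int getElementRelativeToZeroPosition(int zeroPos, int valueFromOriginalArray,
                                     int startDomain, int endDomain) {
    int size = endDomain - startDomain + 1;                   // array_size, e.g. 9
    int indexZeroOld = 0 - startDomain;                       // index(0), e.g. 4
    int indexZeroNew = zeroPos - startDomain;                 // index(-2), e.g. 2
    int indexValue   = valueFromOriginalArray - startDomain;  // index(3), e.g. 7
    int move = indexZeroOld - indexZeroNew;                   // move vector, e.g. +2
    int newPos = ((indexValue + move) % size + size) % size;  // rotate, stay non-negative
    return startDomain + newPos;                              // value at that position, e.g. -4
}

int main() {
    // Example from the question: zero point -2, query 3, domain [-4, 4] -> -4
    assert(getElementRelativeToZeroPosition(-2, 3, -4, 4) == -4);
    return 0;
}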
Mathematically speaking, if you have an implicit set of integers given by an inclusive range [start, stop], choosing a new zero point is really choosing an index to start at. After you compute this index, you can compute the index of your query point (in the original domain) and find the difference between them to get the offset:
For example:
Given: range [-4, 4], assume zero-indexed array (0,...,8) corresponding to values in the range
length(range) = 4 - (-4) + 1= 9
Choose new 'zero point' of -2.
Index of -2 is -2 - (-4) = -2 + 4 = 2
Query for position of 3:
Index in original range: 3 - (-4) = 3 + 4 = 7
Find offset of 3 in zeroed array:
This is the difference between the indices in the original array
7 - 2 = 5, so the element 3 is five hops away from element -2. Equivalently, it's 5-len(range) = 5 - 9 = -4 hops away. You can take the min(abs(5), abs(-4)) to see which one you'd prefer to take.
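A brief sketch of that offset calculation (my own addition; the function name offsetFromNewZero and the tie-breaking rule are assumptions, since the text only says to compare the two magnitudes):

#include <cstdlib>

// Returns the signed number of hops from the new zero point to the query value,
// choosing whichever of the forward or wrapped-around offset is shorter.
int offsetFromNewZero(int value, int newZero, int start, int stop) {
    int len = stop - start + 1;              // e.g. 9 for [-4, 4]
    int indexValue = value - start;          // e.g. index of 3 is 7
    int indexZero  = newZero - start;        // e.g. index of -2 is 2
    int forward  = indexValue - indexZero;   // e.g. 5 hops forward
    int backward = forward - len;            // e.g. -4 hops the other way
    return (std::abs(forward) <= std::abs(backward)) ? forward : backward;
}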
You can write a doubly linked list, with a head node which points to the beginning:
struct nodeItem
{
    nodeItem* prev = nullptr;
    nodeItem* next = nullptr;
    int value = 0;
};
class Node
{
private:
    nodeItem* head;
public:
    void SetHeadToValue(int value);
    ...
};
The last node's next should point to the first one, so you have a circular list.
To figure out whether you are at the end of the list, you have to check whether the current item is equal to the head node.
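For illustration (my addition, not part of the original answer), walking such a circular list until we are back at the starting item could look like this, assuming the nodeItem struct above with the next pointers linked circularly:

#include <iostream>

// Visits every node of the circular list exactly once, starting from startItem.
void printAll(nodeItem* startItem) {
    if (startItem == nullptr) return;
    nodeItem* cur = startItem;
    do {
        std::cout << cur->value << ' ';
        cur = cur->next;
    } while (cur != startItem);   // stop once we have wrapped around
    std::cout << '\n';
}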

codility MaxDistanceMonotonic, what's wrong with my solution

Question:
A non-empty zero-indexed array A consisting of N integers is given.
A monotonic pair is a pair of integers (P, Q), such that 0 ≤ P ≤ Q < N and A[P] ≤ A[Q].
The goal is to find the monotonic pair whose indices are the furthest apart. More precisely, we should maximize the value Q − P. It is sufficient to find only the distance.
For example, consider array A such that:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
There are eleven monotonic pairs: (0,0), (0, 2), (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (3, 3), (3, 4), (4, 4), (5, 5). The biggest distance is 3, in the pair (1, 4).
Write a function:
int solution(vector<int> &A);
that, given a non-empty zero-indexed array A of N integers, returns the biggest distance within any of the monotonic pairs.
For example, given:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..300,000];
each element of array A is an integer within the range [−1,000,000,000..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
Here is my solution of MaxDistanceMonotonic:
int solution(vector<int> &A) {
    long int result;
    long int max = A.size() - 1;
    long int min = 0;
    while(A.at(max) < A.at(min)){
        max--;
        min++;
    }
    result = max - min;
    while(max < (long int)A.size()){
        while(min >= 0){
            if(A.at(max) >= A.at(min) && max - min > result){
                result = max - min;
            }
            min--;
        }
        max++;
    }
    return result;
}
And my result fails the last test - what's wrong with my answer?
If you have:
0 1 2 3 4 5
31 2 10 11 12 30
Your algorithm outputs 3, but the correct answer is 4 = 5 - 1.
This happens because your min goes to -1 on the first full run of the inner while loop, so the pair (1, 5) will never have the chance to get checked, max starting out at 4 when entering the nested whiles.
Note that the problem description expects O(n) extra storage, while you use O(1). I don't think it's possible to solve the problem with O(1) extra storage and O(n) time.
I suggest you rethink your approach. If you give up, there is an official solution here.