Minimum number of iterations - c++

We are given an array with numbers ranging from 1 to n (no duplicates), where n = size of the array.
We are allowed to do the following operation :
arr[i] = arr[arr[i]-1] , 0 <= i < n
Now, one iteration is considered when we perform above operation on the entire array.
Our task is to find the number of iterations after which we first encounter a previously seen sequence.
Constraints :
a) Array has no duplicates
b) 1 <= arr[i] <= n , 0 <= i < n
c) 1 <= n <= 10^6
Ex 1:
n = 5
arr[] = {5, 4, 2, 1, 3}
After 1st iteration array becomes : {3, 1, 4, 5, 2}
After 2nd iteration array becomes : {4, 3, 5, 2, 1}
After 3rd iteration array becomes : {2, 5, 1, 3, 4}
After 4th iteration array becomes : {5, 4, 2, 1, 3}
In the 4th iteration, the sequence obtained is already seen before
So the expected output is 4.
This question was asked in a job hiring test, so I don't have a link to it.
There were 2 sample test cases given, of which I remember the one shown above. I would really appreciate any help on this question.
P.S.
I was able to code the brute-force solution, wherein I stored all the results in a set and kept advancing to the next permutation, but it gave TLE.
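For concreteness, here is a minimal Python sketch of such a brute force (my own reconstruction, not the original submission); it stores every array state in a set and stops at the first repeat:

def brute_force_iterations(arr):
    seen = {tuple(arr)}
    iterations = 0
    while True:
        # apply arr[i] = arr[arr[i] - 1] to the whole array at once
        arr = [arr[arr[i] - 1] for i in range(len(arr))]
        iterations += 1
        if tuple(arr) in seen:
            return iterations
        seen.add(tuple(arr))

print(brute_force_iterations([5, 4, 2, 1, 3]))  # 4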

First, note that an array of length n containing 1, 2, ..., n with no duplicates is a permutation.
Next, observe that arr[i] := arr[arr[i] - 1] is squaring the permutation.
That is, consider permutations as elements of the symmetric group S_n, where multiplication is composition of permutations.
Then the above operation is arr := arr * arr.
So, in terms of permutations and their composition, the question is as follows:
You are given a permutation p (= arr).
Consider permutations p, p^2, p^4, p^8, p^16, ...
What is the number of distinct elements among them?
Now, to solve it, consider the cycle notation of the permutation.
Every permutation is a product of disjoint cycles.
For example, 6 1 4 3 5 2 is the product of the following cycles: (1 6 2) (3 4) (5).
In other words, every application of this permutation:
moves elements at positions 1, 6, 2 along the cycle;
moves elements at positions 4, 3 along the cycle;
leaves element at position 5 in place.
So, when we consider p^k (take an identity permutation and apply the permutation p to it k times), we actually process three independent actions:
move elements at positions 1, 6, 2 along the cycle, k times;
move elements at positions 4, 3 along the cycle, k times;
leave element at position 5 in place, k times.
Now, take into account that, after d applications of a cycle of length d, it just returns all the respective elements to their initial places.
So, we can actually formulate p^k as:
move elements at positions 1, 6, 2 along the cycle, (k mod 3) times;
move elements at positions 4, 3 along the cycle, (k mod 2) times;
leave element at position 5 in place.
We can now prove (using Chinese Remainder Theorem, or just using general knowledge of group theory) that the permutations p, p^2, p^3, p^4, p^5, ... are all distinct up to p^m, where m is the least common multiple of all cycle lengths.
In our example with p = 6 1 4 3 5 2, we have p, p^2, p^3, p^4, p^5, and p^6 all distinct.
But p^6 is the identity permutation: moving six times along a cycle of length 2 or 3 results in the items at their initial places.
So p^7 is the same as p^1, p^8 is the same as p^2, and so on.
Our question however is harder: we want to know the number of distinct permutations not among p, p^2, p^3, p^4, p^5, ..., but among p, p^2, p^4, p^8, p^16, ...: p to the power of a power of two.
To do that, consider all cycle lengths c_1, c_2, ..., c_r in our permutation.
For each c_i, find the pre-period and period of 2^k mod c_i:
For example, c_1 = 3, and 2^k mod 3 look as 1, 2, 1, 2, 1, 2, ..., which is (1, 2) with pre-period 0 and period 2.
As another example, c_2 = 2, and 2^k mod 2 look as 1, 0, 0, 0, ..., which is 1, (0) with pre-period 1 and period 1.
In this problem, this part can be done naively, by just marking visited numbers mod c_i in some array.
By Chinese Remainder Theorem again, after all pre-periods are considered, the period of the whole system of cycles will be the least common multiple of all individual periods.
What remains is to consider pre-periods.
These can be processed with your naive solution anyway, as the length of each pre-period here is at most log_2 n.
The answer is the least common multiple of all individual periods, calculated as above, plus the length of the longest pre-period.
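Putting this together, here is a Python sketch of the whole approach (the helper names are my own; a C++ version would follow the same structure): decompose the permutation into cycles, find the pre-period and period of 2^k mod c for each cycle length c, and combine them as described above.

from math import gcd

def min_iterations(arr):
    n = len(arr)
    # 1. Find the lengths of all cycles of the permutation.
    seen = [False] * n
    cycle_lengths = set()
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = arr[j] - 1
                length += 1
            cycle_lengths.add(length)
    # 2. For each cycle length c, find pre-period and period of 2^k mod c.
    lcm_of_periods, longest_preperiod = 1, 0
    for c in cycle_lengths:
        first_seen = {}
        k, r = 0, 1 % c                  # r = 2^k mod c, starting from k = 0
        while r not in first_seen:
            first_seen[r] = k
            r = (r * 2) % c
            k += 1
        preperiod = first_seen[r]        # where the repeated value first appeared
        period = k - preperiod
        lcm_of_periods = lcm_of_periods * period // gcd(lcm_of_periods, period)
        longest_preperiod = max(longest_preperiod, preperiod)
    # 3. Answer = lcm of all periods + longest pre-period.
    return lcm_of_periods + longest_preperiod

print(min_iterations([5, 4, 2, 1, 3]))  # 4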

Related

How many numbers are less than or equal to x in an array?

Given an integer n and an array a, I need to find, for each i, 1 ≤ i ≤ n, how many elements to the left are less than or equal to a_i.
Example:
5
1 2 1 1 2
Output
0 1 1 2 4
I can do it in O(N^2) but I want to ask if there is any way to do it faster, since N is very large (N ≤ 10^6)?
You can use a segment tree; more precisely, a modified version called a range tree.
Range trees allow rectangle queries, so you can make the dimensions be index and value, and ask "What has value more than x, and index between 1 and n?"
Queries can be accomplished in O(log n) assuming certain common optimizations.
Either way O(N^2) is completely fine with N < 10^6.
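For reference, a common concrete alternative to a full range tree is a Fenwick (binary indexed) tree over the values, which answers the same prefix-count query in O(log n) per element; here is a small sketch (my own, with coordinate compression so arbitrary values can index the tree):

def counts_of_left_leq(a):
    # Coordinate-compress values to ranks 1..m so they can index the tree.
    rank = {v: i + 1 for i, v in enumerate(sorted(set(a)))}
    tree = [0] * (len(rank) + 1)         # Fenwick tree over value ranks

    def add(i):                           # add 1 at rank i
        while i < len(tree):
            tree[i] += 1
            i += i & -i

    def prefix(i):                        # how many inserted values have rank <= i
        total = 0
        while i > 0:
            total += tree[i]
            i -= i & -i
        return total

    result = []
    for x in a:
        result.append(prefix(rank[x]))    # elements to the left that are <= x
        add(rank[x])
    return result

print(counts_of_left_leq([1, 2, 1, 1, 2]))  # [0, 1, 1, 2, 4]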
I like to consider a bigger array to explain, so let's consider the following array:
2, 1, 3, 4, 7, 6, 5, 8, 9, 10, 12, 0, 11, 13, 8, 9, 12, 20, 30, 60
The naïve way is to compare an element with all elements to its left. The naïve approach has a complexity of O(n^2), which makes it impractical for a big array.
If you look at this problem closely you will find a pattern: rather than comparing an element with every element to its left, we can compare against just the first and last value of a range! Wait a minute, what is the range here?
The numbers can be viewed as ranges, and these ranges can be created by traversing the array from left to right. The ranges are as follows:
[2], [1, 3, 4, 7], [6], [5, 8, 9, 10, 12], [0, 11, 13], [8, 9, 12, 20, 30, 60]
Let's traverse the array from left to right and see how we can create these ranges and how they reduce the effort of finding all smaller-or-equal elements to the left of an element.
Index 0 has no element to its left to compare with, which is why we start from index 1; at this point we don't have any range yet. Now we compare the values at index 1 and index 0. The value 1 is not less than or equal to 2, and this comparison is very important: because of it we know the previous range must end here, since the numbers are no longer in ascending order. At this point we get the first range [2], which contains a single element, and the number of elements less than or equal to the left of the element at index 1 is zero.
Continuing left to right, at index 2 we compare with the previous element at index 1. Since 1 <= 3, a new range is not starting here; we are still in the range that started at index 1. To find how many elements are less than or equal, we first count how many elements of the current range lie to the left of index 2 (in this case just one), and then look at the known ranges; the only known range at this point is [2], which has one element less than 3. So the total number of less-than-or-equal elements to the left of index 2 is 1 + 1 = 2. The same procedure applies to the rest of the elements, so let me jump directly to index 6, which holds the number 5.
At index 6, we have already discovered three ranges [2], [1, 3, 4, 7], [6], but only the two ranges [2] and [1, 3, 4, 7] need to be considered. How I know in advance that range [6] is not useful, without comparing, is explained at the end of this explanation. To count the less-than-or-equal elements to the left: the first range [2] has a single element, which is less than 5; the second range starts with 1, which is less than 5, but its last element 7 is greater than 5, so we cannot take the whole range. Instead we find the upper bound of 5 in this range, which binary search can do because the range is sorted; that gives three elements 1, 3, 4 which are less than or equal to 5. The total from the two ranges is 4, and index 6 is the first element of the current range, so there is nothing to its left within it: total count = 1 + 3 + 0 = 4.
One last point: we store the ranges in a tree structure keyed by their first value, where the value of each node is an array of (first index, last index) pairs; I will use std::map here. This tree structure lets us find, in logarithmic time via an upper-bound lookup, all ranges whose first element is less than or equal to the current element. That is how, when processing index 6, I knew in advance that only two of the three known ranges were worth considering.
The complexity of the solution is:
O(n) to traverse the array from left to right, plus
O(n (m + log m)) for finding the upper bound in the std::map for each element and comparing the last values of m ranges, where m is the number of ranges known at that time, plus
O(log q) for finding the upper bound within a range when that range's last element is greater than the number, where q is the number of elements in that range (this may or may not be needed).
#include <iostream>
#include <map>
#include <vector>
#include <iterator>
#include <algorithm>

// Counts elements <= num among all already-closed ranges.
unsigned lessThanOrEqualCountFromRange(int num, const std::vector<int>& numList,
                                       const std::map<int,
                                           std::vector<std::pair<int, int>>>& rangeMap){
    using const_iter = std::map<int, std::vector<std::pair<int, int>>>::const_iterator;
    unsigned count = 0;
    const_iter upperBoundIt = rangeMap.upper_bound(num);
    // Only ranges whose first value is <= num can contribute.
    for(const_iter it = rangeMap.cbegin(); upperBoundIt != it; ++it){
        for(const std::pair<int, int>& range : it->second){
            if(numList[range.second] <= num){
                // The whole range is <= num.
                count += (range.second - range.first) + 1;
            }
            else{
                // The range is sorted, so binary-search for the upper bound.
                auto rangeIt = numList.cbegin() + range.first;
                count += std::upper_bound(rangeIt, numList.cbegin() + range.second,
                                          num) - rangeIt;
            }
        }
    }
    return count;
}

std::vector<unsigned> lessThanOrEqualCount(const std::vector<int>& numList){
    std::vector<unsigned> leftCountList;
    leftCountList.reserve(numList.size());
    leftCountList.push_back(0);
    std::map<int, std::vector<std::pair<int, int>>> rangeMap;
    std::vector<int>::const_iterator rangeFirstIt = numList.cbegin();
    for(std::vector<int>::const_iterator it = rangeFirstIt + 1, endIt = numList.cend();
        endIt != it;){
        std::vector<int>::const_iterator preIt = rangeFirstIt;
        // Stay inside the current ascending range.
        while(endIt != it && *preIt <= *it){
            leftCountList.push_back((it - rangeFirstIt) +
                                    lessThanOrEqualCountFromRange(*it, numList, rangeMap));
            ++preIt;
            ++it;
        }
        if(endIt != it){
            // The ascending run ended: store the closed range keyed by its first value.
            int rangeFirstIndex = rangeFirstIt - numList.cbegin();
            int rangeLastIndex = preIt - numList.cbegin();
            std::map<int, std::vector<std::pair<int, int>>>::iterator rangeEntryIt =
                rangeMap.find(*rangeFirstIt);
            if(rangeMap.end() != rangeEntryIt){
                rangeEntryIt->second.emplace_back(rangeFirstIndex, rangeLastIndex);
            }
            else{
                rangeMap.emplace(*rangeFirstIt, std::vector<std::pair<int, int>>{
                                     {rangeFirstIndex, rangeLastIndex}});
            }
            leftCountList.push_back(lessThanOrEqualCountFromRange(*it, numList, rangeMap));
            rangeFirstIt = it;
            ++it;
        }
    }
    return leftCountList;
}

int main(int, char *[]){
    std::vector<int> numList{2, 1, 3, 4, 7, 6, 5, 8, 9, 10, 12,
                             0, 11, 13, 8, 9, 12, 20, 30, 60};
    std::vector<unsigned> countList = lessThanOrEqualCount(numList);
    std::copy(countList.cbegin(), countList.cend(),
              std::ostream_iterator<unsigned>(std::cout, ", "));
    std::cout << '\n';
}
Output:
0, 0, 2, 3, 4, 4, 4, 7, 8, 9, 10, 0, 11, 13, 9, 11, 15, 17, 18, 19,
Yes, it can be done in a better time complexity than O(N^2), namely O(N log N), using a divide-and-conquer algorithm or a tree-based data structure.
I think O(N^2) should be the worst case; in this situation we have to traverse the array at least twice.
I have tried it in O(N^2):
import java.io.*;
import java.lang.*;

public class GFG {
    public static void main(String[] args) {
        int a[] = {1, 2, 1, 1, 2};
        int count = 0;
        int b[] = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            for (int c = 0; c < i; c++) {
                if (a[i] >= a[c]) {
                    count++;
                }
            }
            b[i] = count;
            count = 0;
        }
        for (int j = 0; j < b.length; j++)
            System.out.print(b[j] + " ");
    }
}

How can I write this algorithm that returns the count between x and y in a list?

I am given this algorithmic problem, and need to find a way to return the count of elements in a list S and in another list L that lie between some variable x and some variable y, inclusive, with a query that runs in O(1) time:
I've issued a challenge against Jack. He will submit a list of his favorite years (from 0 to 2020). If Jack really likes a year,
he may list it multiple times. Since Jack comes up with this list on the fly, it is in no
particular order. Specifically, the list is not sorted, nor do years that appear in the list
multiple times appear next to each other in the list.
I will also submit such a list of years.
I then will ask Jack to pick a random year between 0 and 2020. Suppose Jack picks the year x.
At the same time, I will also then pick a random year between 0 and 2020. Suppose I
pick the year y. Without loss of generality, suppose that x ≤ y.
Once x and y are picked, Jack and I get a very short amount of time (perhaps 5
seconds) to decide if we want to re-do the process of selecting x and y.
If no one asks for a re-do, then we count the number of entries in Jack's list that are
between x and y inclusively and the number of entries in my list that are between x and
y inclusively.
More technically, here is the situation. You are given lists S and L of m and n integers,
respectively, in the range [0, k], representing the collections of years selected by Jack and
I. You may preprocess S and L in O(m+n+k) time. You must then give an algorithm
that runs in O(1) time – so that I can decide if I need to ask for a re-do – that solves the
following problem:
Input: Two integers, x as a member of [0,k] and y as a member of [0,k]
Output: the number of entries in S in the range [x, y], and the number of entries in L in [x, y].
For example, suppose S = {3, 1, 9, 2, 2, 3, 4}. Given x = 2 and y = 3, the returned count
would be 4.
I would prefer pseudocode; it helps me understand the problem a bit more easily.
Implementing the approach of user3386109, taking care of the edge case x = 0.
user3386109 : Make a histogram, and then compute the accumulated sum for each entry in the histogram. Suppose S={3,1,9,2,2,3,4} and k is 9. The histogram is H={0,1,2,2,1,0,0,0,0,1}. After accumulating, H={0,1,3,5,6,6,6,6,6,7}. Given x=2 and y=3, the count is H[y] - H[x-1] = H[3] - H[1] = 5 - 1 = 4. Of course, x=0 is a corner case that has to be handled.
# INPUT
S = [3, 1, 9, 2, 2, 3, 4]
L = [2, 9, 4, 6, 8, 5, 3]
k = 9
x = 2
y = 3

# Histogram for S
S_hist = [0] * (k + 1)
for element in S:
    S_hist[element] = S_hist[element] + 1

# Storing prefix sums in S_hist
sum = S_hist[0]
for index in range(1, k + 1):
    sum = sum + S_hist[index]
    S_hist[index] = sum

# Similar approach for L
# Histogram for L
L_hist = [0] * (k + 1)
for element in L:
    L_hist[element] = L_hist[element] + 1

# Storing prefix sums in L_hist
sum = L_hist[0]
for index in range(1, k + 1):
    sum = sum + L_hist[index]
    L_hist[index] = sum

# Finding number of elements between x and y (inclusive) in S
print("number of elements between x and y (inclusive) in S:")
if x == 0:
    print(S_hist[y])
else:
    print(S_hist[y] - S_hist[x - 1])

# Finding number of elements between x and y (inclusive) in L
print("number of elements between x and y (inclusive) in L:")
if x == 0:
    print(L_hist[y])
else:
    print(L_hist[y] - L_hist[x - 1])

Mathematically rotate an array of ordered numbers

Suppose you have a set of numbers in a given domain, for example: [-4,4]
Also suppose that this set of numbers is in an array, and in numerical order, like so:
[-4, -3, -2, -1, 0, 1, 2, 3, 4]
Now suppose I would like to create a new zero-point for this set of numbers, like so: (I select -2 to be my new axis, and all elements are shifted accordingly)
Original: [-4, -3, -2, -1, 0, 1, 2, 3, 4]
Zeroed: [-2, -1, 0, 1, 2, 3, 4, -4, -3]
With the new zeroed array, lets say I have a function called:
"int getElementRelativeToZeroPosition(int zeroPos, int valueFromOriginalArray, int startDomain, int endDomain) {...}"
with example usage:
I am given 3 from the original array, and would like to see what it maps to in the zeroed array, with the zero at -2.
getElementRelativeToZeroPosition(-2, 3, -4, 4) = -4
Without having to create any arrays and move elements around for this mapping, how would I mathematically produce the desired result of the function above?
I would proceed this way:
Get index of original zero position
Get index of new zero position (ie. index of -2 in you example)
Get index of searched position (index of 3)
Compute move vector between new and original zero position
Apply move vector to searched position modulo the array size to perform the rotation
Provided your array is zero-based:
index(0) => 4
index(-2) => 2
index(3) => 7
array_size => 9
move_vector => index(0) - index(-2)
=> 4 - 2 => +2
new_pos(3) => (index(3) + move_vector) modulo array_size
=> (7 + 2) mod 9 => 0
value_at(0) => -4
That's it
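Here is a small Python sketch of these steps (the function name is taken from the question; the rest is my own):

def get_element_relative_to_zero_position(zero_pos, value, start, end):
    size = end - start + 1                  # number of values in [start, end]
    index_of_zero = 0 - start               # index of 0 in the original array
    index_of_new_zero = zero_pos - start    # index of the new zero point
    index_of_value = value - start          # index of the queried value
    move = index_of_zero - index_of_new_zero
    new_index = (index_of_value + move) % size
    return start + new_index                # value sitting at that index

print(get_element_relative_to_zero_position(-2, 3, -4, 4))  # -4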
Mathematically speaking, if you have an implicit set of integers given by an inclusive range [start, stop], choosing a new zero point really amounts to choosing a new index to start at. After you compute this index, you can compute the index of your query point (in the original domain), and the difference between them gives the offset:
For example:
Given: range [-4, 4], assume zero-indexed array (0,...,8) corresponding to values in the range
length(range) = 4 - (-4) + 1 = 9
Choose new 'zero point' of -2.
Index of -2 is -2 - (-4) = -2 + 4 = 2
Query for position of 3:
Index in original range: 3 - (-4) = 3 + 4 = 7
Find offset of 3 in zeroed array:
This is the difference between the indices in the original array
7 - 2 = 5, so the element 3 is five hops away from element -2. Equivalently, it's 5-len(range) = 5 - 9 = -4 hops away. You can take the min(abs(5), abs(-4)) to see which one you'd prefer to take.
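A small sketch of that offset computation (my own, under the same assumptions):

def offset_from_new_zero(new_zero, value, start, end):
    size = end - start + 1
    forward = ((value - start) - (new_zero - start)) % size   # hops going right
    backward = forward - size                                  # same position going left
    return forward if abs(forward) <= abs(backward) else backward

print(offset_from_new_zero(-2, 3, -4, 4))  # -4 (4 hops left is shorter than 5 right)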
You can use a doubly linked list, with a head node which points to the beginning:
struct nodeItem
{
    nodeItem* prev = nullptr;
    nodeItem* next = nullptr;
    int value = 0;
};

class Node
{
private:
    nodeItem* head;
public:
    void SetHeadToValue(int value);
    ...
};
The last node's next pointer should point to the first one, so you have a circular list.
To figure out if you are at the end of the list, you have to check whether the current item is equal to the head node.

Equal-depth binning - whether it is just grouping data into k groups

A small confusion about equal-depth (equal-frequency) binning.
Equal-depth binning says that it divides the range into N intervals, each containing approximately the same number of samples.
Let's take a small portion of the iris data:
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
If I need to bin my 1st column, what will the results be?
Is it just grouping the data, or does it include some calculation like equal-width binning?
What happens if the number of elements to be binned is odd? How will I bin them equally?
Like #Anony-Mousse mentions, it is not always possible to get exactly the same number of samples in each bin; approximately equal is what is desired.
I will walk you through the case when unique(N)/bins > 1, where N represents the values in an array to be binned. Assume
N = [1, 1, 1, 1, 1, 1,
2, 3, 4, 5,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
bins = 4
Here, length(N) = 20 and length(unique(N)) = 6, making unique(N)/bins = 1.5 > 1. This means every bin gets approximately 1.5 of the distinct values. So you put 1 in bin1, carrying over the 0.5 residue to the next bin, which makes that bin's share 1.5 + 0.5 = 2, so 2 and 3 will be in bin2. Extrapolating this logic, the final bins have the following split: [1], [2,3], [4], [5,6], where of course 1 repeats 6 times and 6 repeats 10 times.
I would not like ties to sit in separate bins; that is usually the point of having bins (grouping values that are close to one another).
For cases with unique(N)/bins < 1, the same logic can be applied. Hope this answers your question.
Sometimes you cannot make bins of exactly the same size.
For example, if your data is
1,1,1,2,99
and you want 4 bins, then the most intuitive result should be
[1,1,1], [2], [], [99]
Most tools will produce one of these answers:
[1,1,1], [], [2], [99]
[1,1], [1], [2], [99]
[1], [1], [1], [2,99]
None of them have exactly 1.25 elements in every bin. The two last solutions are closest, but also the least intuitive. That is why one only demands "approximately the same number". Sometimes, there is no good solution that exactly has this frequency.
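To make "approximately the same number" concrete, here is a small sketch (mine, not from the answers) that performs equal-frequency binning by sorting and splitting into nearly equal chunks; note how 5 values over 2 bins naturally gives sizes 3 and 2:

import numpy as np

values = np.sort([5.1, 4.9, 4.7, 4.6, 5.0])   # the first iris column from the question
bins = np.array_split(values, 2)              # 2 bins with approximately equal counts
print([b.tolist() for b in bins])             # [[4.6, 4.7, 4.9], [5.0, 5.1]]

Libraries offer the same behaviour ready-made, for example pandas.qcut.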

Ascending subsequences in permutation

Given a permutation of 1...n, for example 5 3 4 1 2,
how can I find all ascending subsequences of length 3 in linear time?
Is it possible to find ascending subsequences of some other length X?
I have no idea how to solve it in linear time.
Do you need the actual ascending sequences? Or just the number of ascending subsequences?
It isn't possible to generate them all in less than the time it takes to list them. Which, as has been pointed out, is O(N^X / (X-1)!). (There is a possibly unexpected factor of X because it takes time O(X) to list a data structure of size X.) The obvious recursive search for them scales not far from that.
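For illustration, here is a sketch (not part of the original answer) of that obvious recursive search in Python:

def ascending_subsequences(perm, X):
    result = []

    def extend(start, chosen):
        if len(chosen) == X:
            result.append(chosen[:])
            return
        for i in range(start, len(perm)):
            # extend the current subsequence only with larger values to the right
            if not chosen or perm[i] > chosen[-1]:
                chosen.append(perm[i])
                extend(i + 1, chosen)
                chosen.pop()

    extend(0, [])
    return result

print(ascending_subsequences([3, 5, 1, 2, 4, 6], 3))
# [[3, 5, 6], [3, 4, 6], [1, 2, 4], [1, 2, 6], [1, 4, 6], [2, 4, 6]]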
However counting them can be done in time O(X * N^2) if you use dynamic programming. Here is Python for that.
X = 3
perm = [3, 5, 1, 2, 4, 6]

counts = []
answer = 0
for i in range(len(perm)):
    # inner_counts[k] = number of ascending subsequences of length k+1 ending at position i
    inner_counts = [0 for k in range(X)]
    inner_counts[0] = 1
    for j in range(i):
        if perm[j] < perm[i]:
            for k in range(1, X):
                inner_counts[k] += counts[j][k-1]
    counts.append(inner_counts)
    answer += inner_counts[-1]
For your example 3 5 1 2 4 6 and X = 3 you will wind up with:
counts = [
[1, 0, 0],
[1, 1, 0],
[1, 0, 0],
[1, 1, 0],
[1, 3, 1],
[1, 5, 5]
]
answer = 6
(You only found 5 above, the missing one is 2 4 6.)
It isn't hard to extend this answer to create a data structure that makes it easy to list them directly, to find a random one, etc.
You can't find all ascending subsequences in linear time because there may be many more subsequences than that.
For instance, in a sorted sequence every subset is an increasing subsequence, so a sorted sequence of length N (1, 2, ..., N) has N choose k = n!/((n-k)! k!) increasing subsequences of length k.