Breaking a permutation of 1 to n into two sets [closed] - c++

problem link : https://codeforces.com/contest/1295/problem/E
This problem states that there is a permutation of the numbers 1 to n, i.e. every number occurs exactly once,
e.g. p = [1,2,3] or [2,1,3] for n = 3.
For every element of the permutation there is a cost ci for the ith element of p: whenever I move the ith element from one set to another I have to pay ci.
The question asks me to break the permutation into two sets at some position k such that 1 <= k < n, so neither set starts out empty. The condition is that every element of set1 = (1 to k) must be less than every element of set2 = (k+1 to n). To achieve this I have two operations: I can move an element from set1 to set2 or vice versa, and performing this operation on the ith element costs me ci. If either set1 or set2 ends up empty, the condition is also met.
e.g.
p = [3,1,2], c = [7,1,4].
set1 = [3,1] & set2 = [2].
So now we can send 2 from set2 to set1 with a minimum cost of 4.
For more examples please refer to the problem; the link is above.
My approach:
For any k we need to have the elements 1 to k in set1 and the rest in set2.
So let's start from i = 1 and keep moving until i = n-1, maintaining a prefix answer at every position.
To calculate the prefix array: at every position i, if p[i] > i then we need to add the cost, because we have to transfer the element to set2; if it is less than i then we need to subtract it, because we wanted it earlier and had already counted it in our total cost, but now that we have the value we no longer want it in our cost. Similarly, I also track for every position whether I already had the current value or not.
Here is my code. It passes many test cases but keeps failing on the 10th, and I can't understand the reason.
Help please.
#include <bits/stdc++.h>
#define ll unsigned long long
#define pii pair<int,int>
using namespace std;

int main(void)
{
    ll n, ans, prc3 = 0;
    cin >> n;
    vector<ll> p(n + 1), a(n + 1);
    for (ll i = 1; i <= n; i++) cin >> p[i];
    for (ll i = 1; i <= n; i++) cin >> a[p[i]], prc3 += a[p[i]]; // cost indexed by value
    ll prc1 = 0, prc2 = 0;
    ll ans1 = INT_MAX, ans2 = INT_MAX, ans3 = INT_MAX;
    vector<bool> vst(n + 1, false);
    for (ll i = 1; i < n; i++)
    {
        if (p[i] > i)          // element has to be transferred to set2
        {
            prc1 += a[p[i]];
        }
        else if (p[i] < i)     // element was already paid for earlier
        {
            prc1 -= a[p[i]];
        }
        if (vst[i])
        {
            prc1 -= a[i];
        }
        vst[p[i]] = true;
        if (!vst[i])
        {
            prc1 += a[i];
        }
        ans1 = min(ans1, prc1);
        prc2 += a[p[i]];       // cost of moving the whole prefix into set2
        ans2 = min(ans2, prc2);
        prc3 -= a[p[i]];       // cost of moving the whole suffix into set1
        ans3 = min(ans3, prc3);
    }
    ans = min(ans1, ans2);
    ans = min(ans, ans3);
    cout << ans << endl;
    return 0;
}

I'm not sure what your code is attempting to do exactly, but from struggling with this question myself and finally reading the editorial, it seems to me that achieving a solution within the constraints requires a tree data structure.
As I understand it, we keep a variable k that represents the highest number on the left side. We start k at 1 and iterate up to n. At the same time, we keep a tree where each key i points to the cost of creating the current k-partition starting at position i.
When we increment k, every partition starting position i greater than or equal to the position of k in the starting list no longer needs that element moved, so we subtract A[p[k]] from each of those entries in O(log n) time using the tree; and for every i less than the position of k, we add the cost of that move. We take the minimum cost on each iteration.
As far as I know, updating a whole segment in O(log n) time requires a tree data structure.
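Here is a rough sketch of that idea, under my own assumptions: a lazy segment tree supporting range add and a global minimum (the SegTree type and all names are mine, not from the editorial), where tree index j holds the cost of the cut after position j+1, and the loop over the threshold value k applies the two range updates described above:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Lazy segment tree: range add, global minimum only (no partial-range queries).
struct SegTree {
    int n; vector<ll> mn, lz;
    SegTree(int n) : n(n), mn(4 * n, 0), lz(4 * n, 0) {}
    void add(int l, int r, ll v) { if (l <= r) add(1, 0, n - 1, l, r, v); }
    ll minAll() const { return mn[1]; }
private:
    void add(int x, int lo, int hi, int l, int r, ll v) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) { mn[x] += v; lz[x] += v; return; }
        int mid = (lo + hi) / 2;
        add(2 * x, lo, mid, l, r, v);
        add(2 * x + 1, mid + 1, hi, l, r, v);
        mn[x] = min(mn[2 * x], mn[2 * x + 1]) + lz[x];
    }
};

int main() {
    int n; cin >> n;
    vector<int> p(n + 1), pos(n + 1);
    vector<ll> a(n + 1);
    for (int i = 1; i <= n; i++) { cin >> p[i]; pos[p[i]] = i; }
    for (int i = 1; i <= n; i++) cin >> a[i];
    // Tree index j (0-based) = cut after position j+1, i.e. positions
    // 1..j+1 go left. With k = 0 every value belongs on the right, so
    // the cut after position j+1 initially costs a[1] + ... + a[j+1].
    SegTree seg(n - 1);
    for (int j = 1; j <= n - 1; j++) seg.add(j - 1, n - 2, a[j]);
    ll ans = seg.minAll();
    for (int k = 1; k <= n; k++) {      // value k must now end up on the left
        int q = pos[k];
        seg.add(q - 1, n - 2, -a[q]);   // cuts that already have it on the left
        seg.add(0, q - 2, a[q]);        // cuts where it must be moved left
        ans = min(ans, seg.minAll());
    }
    cout << ans << endl;
    return 0;
}

On the example above (p = [3,1,2], c = [7,1,4]) this prints 4, but treat it as an untested illustration of the idea rather than a verified submission.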

Related

Time Limit Exceeded error when submitting c++ code on Leetcode

I am solving a LeetCode problem. The code works fine when I run it, but when I submitted it I got a Time Limit Exceeded error. I double-checked the code but didn't find any infinite loop. Can anyone please take a look for me?
Below is the leetcode problem description:
We have an array A of integers, and an array queries of queries.
For the i-th query val = queries[i][0], index = queries[i][1], we add val to A[index]. Then, the answer to the i-th query is the sum of the even values of A.
(Here, the given index = queries[i][1] is a 0-based index, and each query permanently modifies the array A.)
Return the answer to all queries. Your answer array should have answer[i] as the answer to the i-th query.
Example 1:
Input: A = [1,2,3,4], queries = [[1,0],[-3,1],[-4,0],[2,3]]
Output: [8,6,2,4]
Explanation:
At the beginning, the array is [1,2,3,4].
After adding 1 to A[0], the array is [2,2,3,4], and the sum of even values is 2 + 2 + 4 = 8.
After adding -3 to A[1], the array is [2,-1,3,4], and the sum of even values is 2 + 4 = 6.
After adding -4 to A[0], the array is [-2,-1,3,4], and the sum of even values is -2 + 4 = 2.
After adding 2 to A[3], the array is [-2,-1,3,6], and the sum of even values is -2 + 6 = 4.
class Solution {
public:
    vector<int> sumEvenAfterQueries(vector<int>& A, vector<vector<int>>& queries) {
        vector<int> B;
        for (int i = 0; i < queries.size(); i++)
        {
            int index = queries[i][1];
            A[index] = A[index] + queries[i][0];
            int sum = 0;
            for (int j = 0; j < A.size(); j++)
            {
                if (A[j] % 2 == 0)
                {
                    sum = sum + A[j];
                }
            }
            B.push_back(sum);
        }
        return B;
    }
};
You are likely exceeding the time limit because your algorithm is naive. By recomputing the sum for every query, your program has a time complexity of O(M * N), where M is the size of the array, and N is the number of queries.
It's almost a guarantee that the test set will be designed to fail (by exceeding time limit) on a naive implementation.
There is absolutely no need to recompute the sum every time. You only need to compute it once.
After that, every time you have a query, you just need to update the current sum using only what changed. Use your program's knowledge of the previous and new values (i.e. whether each one is part of the sum or not) when updating.
By doing this, your program's time complexity becomes O(M + N).
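For illustration, here is a minimal sketch of that incremental idea (one possible way to write it, not the only one):

class Solution {
public:
    vector<int> sumEvenAfterQueries(vector<int>& A, vector<vector<int>>& queries) {
        long long sum = 0; // even-sum computed once up front
        for (int x : A)
            if (x % 2 == 0) sum += x;
        vector<int> B;
        B.reserve(queries.size());
        for (const auto& q : queries) {
            int val = q[0], index = q[1];
            if (A[index] % 2 == 0) sum -= A[index]; // old value leaves the sum
            A[index] += val;
            if (A[index] % 2 == 0) sum += A[index]; // new value may join it
            B.push_back((int)sum);
        }
        return B;
    }
};

Each query now does O(1) work, so the whole run is O(M + N) as described above.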

Lower the time complexity of two for loops and optimize them into one for loop [closed]

I want to optimize this loop. Its time complexity is O(n²). I want something like O(n) or O(log n).
for (int i = 1; i <= n; i++) {
    for (int j = i+1; j <= n; j++) {
        if (a[i] != a[j] && a[a[i]] == a[a[j]]) {
            x = 1;
            break;
        }
    }
}
The a[i] satisfy 1 <= a[i] <= n.
This is what I would try:
Let us call B the image of a[], i.e. the set {a[i]}: B = {b[k]; k = 1..K, such that there exists an i with a[i] = b[k]}.
For each value b[k], k = 1..K, determine the set Ck = {i; a[i] = b[k]}.
Determining B and the Ck can be done in linear time.
Then let us examine the sets Ck one by one:
If Card(Ck) = 1: k++
If Card(Ck) > 1: if two elements of Ck are elements of B, then x = 1; else k++
I will use a table (std::vector<bool>) to memorize whether an element of 1..N belongs to B or not.
I hope I have not made a mistake. No time to write a program just now; I could do it later on, but I guess you will be able to do it easily (a sketch follows below).
Note: I discovered after sending this answer that @Mike Borkland proposed something similar already in a comment...
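For concreteness, here is a minimal sketch of the grouping idea above (my own code and naming, untested against the original constraints). It relies on the observation that a pair (i, j) with a[i] != a[j] and a[a[i]] == a[a[j]] exists exactly when two distinct elements of the image B map, through a[], to the same value:

#include <vector>
using std::vector;

// 1-indexed a[1..n], values in [1, n]; returns true if a qualifying pair exists.
bool existsPair(const vector<int>& a, int n) {
    vector<bool> inB(n + 1, false);      // inB[v]: v occurs somewhere as a[i]
    for (int i = 1; i <= n; i++) inB[a[i]] = true;
    vector<bool> seen(n + 1, false);     // seen[w]: some v in B has a[v] == w
    for (int v = 1; v <= n; v++) {
        if (!inB[v]) continue;           // only walk the elements of B
        if (seen[a[v]]) return true;     // two distinct v, v' in B collide
        seen[a[v]] = true;
    }
    return false;
}

Both passes are linear, so this is O(n) time and space.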
Since sometimes you need to see a solution to learn, I'm providing you with a small function that does the job you want. I hope it helps.
#include <vector>

#define MIN 1
#define MAX 100000 // 10^5

// Expects a 1-indexed array a[1..arr_size] (matching the question's a[])
// with values in [MIN, arr_size]. Returns 1 if a qualifying pair exists.
int seek(const int *arr, int arr_size)
{
    if (arr_size > MAX || arr_size < MIN)
        return 0;
    // The tables are sized arr_size + 1 because the values themselves
    // are used as indices.
    std::vector<unsigned char> seen(arr_size + 1, 0);
    std::vector<unsigned char> indices(arr_size + 1, 0);
    for (int i = 1; i <= arr_size; i++)
    {
        if (arr[i] < MIN || arr[i] > arr_size ||
            arr[arr[i]] < MIN || arr[arr[i]] > arr_size)
            continue; // out-of-range values: skip instead of indexing out of bounds
        if (!indices[arr[i]] && seen[arr[arr[i]]])
            return 1;
        seen[arr[arr[i]]] = 1;
        indices[arr[i]] = 1;
    }
    return 0;
}
Ok, how and why does this work? First, let's take a look at the problem that the original algorithm is trying to solve; they say a well-stated problem is half of the solution. The problem is to find whether, in a given integer array A of size n whose elements are bounded between one and n ([1, n]), there exist two positions x and y such that A[x] != A[y] and A[A[x]] = A[A[y]]. Furthermore, we are seeking an algorithm with good time complexity, so that for n = 10000 the implementation runs within one second.
To begin with, let's analyze the problem. In the worst case the array needs to be scanned completely at least once to decide whether such a pair of elements exists, so we can't do better than O(n). But how would you do that? One possible way is to scan the array and record whether each value has already been used as an index; this can be done in another array B (of size n). Likewise, record whether the number that A holds at the index given by the scanned element has appeared; this can be done in a third array C. If, while scanning, the current value has not been used as an index yet but the value it reaches has been seen before, then return yes. I have to say that this is a "classical trick" of using hash-table-like data structures.
The original tasks were: i) to reduce the time complexity from O(n^2), and ii) to make sure the implementation runs within a second for an array of size 10000. The proposed algorithm runs in O(n) time and space. I tested it with random arrays, and the implementation seems to do its job much faster than required.
Edit: My original answer wasn't very useful, thanks for pointing that out. After checking the comments, I figured the code could help a bit.
Edit 2: I also added the explanation on how it works so it might be useful. I hope it helps :)
I want to optimize this loop. Its time complexity is O(n²). I want something like O(n) or O(log n).
Well, the easiest thing is to sort the array first. That's O(n log n), and then a linear scan looking for two adjacent equal elements is O(n), so the dominant complexity is unchanged at O(n log n).
You know how to use std::sort, right? And you know the complexity is O(n log(n))?
And you can figure out how to call std::adjacent_find, and you can see that the complexity must be linear?
The best possible complexity is linear time, which only allows us a constant number of linear traversals of the array. That means that if we need a lookup to determine, for each element, whether we saw some value before, that lookup needs to be constant time.
Do you know any data structures with constant time insertion and lookups? If so, can you write a simple one-pass loop?
Hint: std::unordered_set is the general solution for constant-time membership tests, and Damien's suggestion of std::vector<bool> is potentially more efficient for your particular case.
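Along the lines of that hint, here is a one-pass sketch (my own naming; it uses std::unordered_map rather than a plain set, since for each value of a[a[i]] we also need to remember one originating a[i] to compare against):

#include <unordered_map>
#include <vector>

// 1-indexed a[1..n] with values in [1, n]; true if some i, j satisfy
// a[i] != a[j] && a[a[i]] == a[a[j]].
bool hasPairOnePass(const std::vector<int>& a, int n) {
    std::unordered_map<int, int> firstVal; // a[a[i]] -> first a[i] that produced it
    for (int i = 1; i <= n; i++) {
        int key = a[a[i]];
        auto it = firstVal.find(key);
        if (it == firstVal.end())
            firstVal.emplace(key, a[i]);   // first time we reach this value
        else if (it->second != a[i])
            return true;                   // different a[i], same a[a[i]]
    }
    return false;
}

Expected O(n) time with the usual hash-table caveats; Damien's std::vector<bool> tables avoid the hashing constant for this particular value range.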

Please tell me the efficient algorithm of Range Mex Query

I have a question about this problem.
Question
You are given a sequence a[0], a[1], ..., a[N-1] and a set of ranges (l[i], r[i]) (0 <= i <= Q - 1).
Calculate mex(a[l[i]], a[l[i] + 1],..., a[r[i] - 1]) for all (l[i], r[i]).
The function mex is minimum excluded value.
Wikipedia Page of mex function
You can assume that N <= 100000, Q <= 100000, and a[i] <= 100000.
An O(N * (r[i] - l[i]) * log(r[i] - l[i])) algorithm is obvious, but it is not efficient.
My Current Approach
#include <bits/stdc++.h>
using namespace std;

int N, Q, a[100009], l, r;

int main() {
    cin >> N >> Q;
    for (int i = 0; i < N; i++) cin >> a[i];
    for (int i = 0; i < Q; i++) {
        cin >> l >> r;
        set<int> s;
        for (int j = l; j < r; j++) s.insert(a[j]); // note: a[j], not a[i]
        int ret = 0;
        while (s.count(ret)) ret++;
        cout << ret << endl;
    }
    return 0;
}
Please tell me how to solve it.
EDIT: O(N^2) is too slow. Please tell me a faster algorithm.
Here's an O((Q + N) log N) solution:
Let's iterate over all positions in the array from left to right and store the last occurrences for each value in a segment tree (the segment tree should store the minimum in each node).
After adding the i-th number, we can answer all queries with the right border equal to i.
The answer is the smallest value x such that last[x] < l. We can find it by going down the segment tree starting from the root: if the minimum in the left child is smaller than l, we go there; otherwise, we go to the right child.
That's it.
Here is some pseudocode:
tree = new SegmentTree() // A minimum segment tree with -1 in each position
for i = 0 .. n - 1
    tree.put(a[i], i)
    for all queries with r = i
        ans for this query = tree.findFirstSmaller(l)
The findFirstSmaller function goes like this:
int findFirstSmaller(node, value)
    if node.isLeaf()
        return node.position()
    if node.leftChild.minimum < value
        return findFirstSmaller(node.leftChild, value)
    return findFirstSmaller(node.rightChild, value)
This solution is rather easy to code (all you need is a point update and the findFirstSmaller function shown above), and I'm sure it's fast enough for the given constraints.
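To make this concrete, here is a compilable sketch of the above (my own code and naming; it assumes offline processing with half-open ranges [l, r) as in the question, and it ignores values above N, which is safe because the mex of at most N elements never exceeds N):

#include <bits/stdc++.h>
using namespace std;

int sz;         // number of leaves (power of two)
vector<int> mn; // leaf v holds the last position where value v occurred (-1 if never)

void update(int value, int position) {
    int x = value + sz;
    mn[x] = position;
    for (x >>= 1; x >= 1; x >>= 1) mn[x] = min(mn[2 * x], mn[2 * x + 1]);
}

// Smallest value whose last occurrence is before l: descend, preferring left.
int findFirstSmaller(int l) {
    int x = 1;
    while (x < sz) x = (mn[2 * x] < l) ? 2 * x : 2 * x + 1;
    return x - sz;
}

int main() {
    int n, q;
    cin >> n >> q;
    vector<int> a(n);
    for (int& v : a) cin >> v;
    sz = 1;
    while (sz < n + 1) sz <<= 1;               // track values 0..n only
    mn.assign(2 * sz, -1);
    vector<vector<pair<int, int>>> byR(n + 1); // queries grouped by right border
    vector<int> ans(q);
    for (int i = 0; i < q; i++) {
        int l, r;
        cin >> l >> r;                         // half-open range [l, r)
        byR[r].push_back({l, i});
    }
    for (auto& [l, id] : byR[0]) ans[id] = 0;  // empty ranges, if any
    for (int i = 0; i < n; i++) {
        if (a[i] <= n) update(a[i], i);
        for (auto& [l, id] : byR[i + 1]) ans[id] = findFirstSmaller(l);
    }
    for (int i = 0; i < q; i++) cout << ans[i] << '\n';
    return 0;
}

The descent works because after processing position i = r - 1, last[x] < l holds exactly for the values x missing from [l, r), and the leftmost such leaf is the mex.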
Let's process both our queries and our elements in a left-to-right manner, something like
for (int i = 0; i < N; ++i) {
    // 1. Add a[i] to all internal data structures
    // 2. Calculate answers for all queries q such that r[q] == i
}
Here we have O(N) iterations of this loop, and we want to perform both the update of the data structure and the query for a suffix of the currently processed part in o(N) time.
Let's use an array contains[i][j] which holds 1 if the suffix starting at position i contains the number j, and 0 otherwise. Suppose also that we have calculated prefix sums for each contains[i] separately. In this case we could answer each particular suffix query in O(log N) time using binary search: we just find the first zero in the corresponding contains[l[i]] array, which is exactly the first position where the partial sum equals the index rather than index + 1. Unfortunately, such arrays would take O(N^2) space and need O(N^2) time per update.
So we have to optimize. Let's build a 2-dimensional range tree with "sum query" and "assignment" range operations. In such a tree we can query the sum over any sub-rectangle and assign the same value to all elements of any sub-rectangle in O(log^2 N) time, which allows us to do the update in O(log^2 N) time and the queries in O(log^3 N) time, giving the overall time complexity O(N log^2 N + Q log^3 N). The space complexity of O((N + Q) log^2 N) (and the same time for initialization of the arrays) is achieved using lazy initialization.
Update: let's revise how the query works in range trees with "sum". For a 1-dimensional tree (to keep this answer from getting too long), it's something like this:
class Tree
{
    int l, r;           // begin and end of the interval represented by this vertex
    int sum;            // already calculated sum
    int overriden;      // value of the override, or the special constant NO_OVERRIDE
    Tree *left, *right; // pointers to children
};

// returns the sum of the part of this subtree that lies between from and to
int Tree::get(int from, int to)
{
    if (from > r || to < l) // no intersection
    {
        return 0;
    }
    if (from <= l && r <= to) // whole subtree lies within the query interval
    {
        return sum;
    }
    if (overriden != NO_OVERRIDE) // should push the override down to children
    {
        left->overriden = right->overriden = overriden;
        left->sum = right->sum = (r - l + 1) / 2 * overriden; // each child covers half
        overriden = NO_OVERRIDE;
    }
    return left->get(from, to) + right->get(from, to); // split into 2 queries
}
Given that in our particular case all queries to the tree are prefix-sum queries, from is always equal to 0, so one of the calls to the children always returns a trivial answer (0 or an already computed sum). So, instead of doing O(log N) queries to the 2-dimensional tree in the binary search algorithm, we can implement an ad-hoc search procedure very similar to this get query. It should first get the value of the left child (which takes O(1) since it's already calculated), then check whether the node we're looking for is to the left (its sum is less than the number of leaves in the left subtree), and go to the left or to the right based on this information. This further optimizes the query to O(log^2 N) time (since it's a single tree operation now), giving a resulting complexity of O((N + Q) log^2 N) in both time and space.
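In the 1-dimensional case, the descent being described might look like this sketch (my code; pushOverride() is a hypothetical helper factored out of the push-down step in get above):

// Find the position of the first zero, i.e. where the prefix of ones breaks:
// a subtree consisting entirely of ones has sum == its leaf count.
int Tree::findFirstZero()
{
    if (l == r)                      // leaf: this is the first zero
        return l;
    pushOverride();                  // same push-down as in get()
    int leftLeaves = (r - l + 1) / 2;
    if (left->sum < leftLeaves)      // a zero hides in the left half
        return left->findFirstZero();
    return right->findFirstZero();   // left half is all ones, go right
}

One caveat (mine): this assumes the caller already knows some zero exists in the subtree, which holds here because the sought value is always among the tracked ones.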
I'm not sure whether this solution is fast enough for both Q and N up to 10^5, but it can probably be optimized further.

Given a sorted array and a parameter k, find the count of sum of two numbers greater than or equal to k in linear time

I am trying to find all pairs in an array with sum equal to k. My current solution takes O(n log n) time (code snippet below). Can anybody help me find a better solution, O(n) or maybe O(log n), if it exists?
int n, k, a, cnt = 0; // n: element count, k: target, a: input value, cnt: answer
map<int, int> mymap;
map<int, int>::iterator it;
cin >> n >> k;
for (int i = 0; i < n; i++) {
    cin >> a;
    if (mymap.find(a) != mymap.end())
        mymap[a]++;
    else
        mymap[a] = 1;
}
for (it = mymap.begin(); it != mymap.end(); it++) {
    int val = it->first;
    if (mymap.find(k - val) != mymap.end()) {
        cnt += min(it->second, mymap.find(k - val)->second);
        it->second = 0;
    }
}
cout << cnt;
Another approach, which takes O(log n) in the best case and O(n log n) in the worst case for positive numbers, can be done in this way:
Find the element in the array that is equal to k/2, or, if it doesn't exist, find the minimum element greater than k/2. All combinations of this element and all greater elements are of interest to us, because p + s >= k when p >= k/2 and s >= k/2. The array is sorted, so binary search with some modifications can be used. This step takes O(log n) time.
All elements less than k/2 combined with elements greater than or equal to their "mirror elements" (reflected around the median k/2) are also of interest to us, because p + s >= k when p = k/2 - t and s >= k/2 + t. Here we need to loop through the elements less than k/2 and find their mirror elements (binary search). The loop should be stopped once a mirror element is greater than the last array element.
For instance, we have the array {1,3,5,8,11} and k = 10, so on the first step we have k/2 = 5 and the pairs {5,8}, {5,11}, {8,11}. The count of these pairs is given by the formula l * (l - 1)/2 where l = the count of elements >= k/2. In our case l = 3, so count = 3*2/2 = 3.
On the second step, for the number 3 the mirror element is 7 (5-2=3 and 5+2=7), so the pairs {3, 8} and {3, 11} are of interest. For the number 1 the mirror is 9 (5-4=1 and 5+4=9), so {1, 11} is what we look for.
So, if k/2 is less than the first array element, this algorithm is O(log n).
For negative numbers the algorithm is a little more complex but can be solved with the same complexity.
There exists a rather simple O(n) approach using the so-called "two pointers" or "two iterators" technique. The key idea is to have two iterators (not necessarily C++ iterators; indices would do too) running over the same array, so that if the first iterator points to value x, then the second iterator points to the maximal element in the array that is less than k - x.
We will be increasing the first iterator, and while doing this we'll also adjust the second iterator to maintain this property. Note that as the first pointer increases, the corresponding position of the second pointer can only decrease, so on every iteration we can start from the position where we stopped at the previous iteration; we will never need to increase the second pointer. This is how we achieve O(n) time.
The code is like this (I did not test it, but the idea should be clear):
vector<int> a;     // the given array, sorted
long long ans = 0; // pair count
int r = a.size() - 1;
for (int l = 0; l < (int)a.size(); l++) {
    while ((r >= 0) && (a[r] >= k - a[l]))
        r--;
    // now r is the maximal position in a such that a[r] < k - a[l],
    // so all elements to the right of r form a needed pair with a[l]
    ans += a.size() - r - 1; // this is how many pairs we have starting at l
}
Another approach, which might be simpler to code but a bit slower at O(n log n), uses binary search. For each element a[l] of the array, you can find the maximal position r such that a[r] < k - a[l] using binary search (this is the same r as in the first algorithm).
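A sketch of that binary-search variant using std::lower_bound (my code; like the loop above, it counts ordered (l, m) combinations without deduplicating, so adjust the counting convention to taste):

#include <algorithm>
#include <vector>
using namespace std;

// a must be sorted; counts (l, m) with a[l] + a[m] >= k.
long long countAtLeastK(const vector<int>& a, long long k) {
    long long ans = 0;
    for (size_t l = 0; l < a.size(); l++) {
        // first position whose value is >= k - a[l]
        auto it = lower_bound(a.begin(), a.end(), k - a[l]);
        ans += a.end() - it;
    }
    return ans;
}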
@Drew Dormann - thanks for the remark.
Run through the array with two pointers, left and right.
Assuming left is the small side, start with left at location 0, and then right moves towards left until a[left] + a[right] >= k holds for the last time.
When this is achieved, total_count += (a.size - right + 1).
You then move left one step forward, and right needs to (maybe) move towards it. Repeat this until they meet.
When this is done, and let us say they met at location x, then total_count += choose(2, a.size - x).
Sort the array (n log n)
for (i = 1 to n)
    Start at the root
    if a[i] + curr_node >= k, go left and match = indexof(curr_node)
    else, go right
    If curr_node = leaf node, add all nodes after a[match] to the list of valid pairs with a[i]
Step 2 also takes O(n log n). The for loop runs n times; within the loop we perform a binary search for each node, i.e. log n steps. Hence the overall complexity of the algorithm is O(n log n).
This should do the work:
void count(int A[], int n) // n being the number of terms in the array
{
    int i, j, k, count = 0;
    cin >> k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (A[i] + A[j] >= k)
                count++;
    cout << "There are " << count << " such numbers";
}

Array balancing point

What is the best way to solve this?
A balancing point of an N-element array A is an index i such that all elements at lower indexes have values <= A[i] and all elements at higher indexes have values >= A[i].
For example, given:
A[0]=4 A[1]=2 A[2]=7 A[3]=11 A[4]=9
one of the correct solutions is 2: all elements before A[2] are less than A[2], and all elements after A[2] are greater than A[2].
One solution that came to my mind is an O(n²) solution. Is there any better solution?
Start by assuming A[0] is a pole. Then start walking the array, comparing each element A[i] in turn against A[0] and also tracking the current maximum.
As soon as you find an i such that A[i] < A[0], you know that A[0] can no longer be a pole, and by extension, neither can any of the elements up to and including A[i]. So now continue walking until you find the next value that's bigger than the current maximum. This then becomes the new proposed pole.
Thus, an O(n) solution!
In code:
int i_pole = 0;
int i_max = 0;
bool have_pole = true;
for (int i = 1; i < N; i++)
{
    if (A[i] < A[i_pole])
    {
        have_pole = false;
    }
    if (A[i] > A[i_max])
    {
        i_max = i;
        if (!have_pole)
        {
            i_pole = i;
        }
        have_pole = true;
    }
}
If you want to know where all the poles are, an O(n log n) solution would be to create a sorted copy of the array, and look to see where you get matching values.
EDIT: Sorry, but this doesn't actually work. One counterexample is [2, 5, 3, 1, 4].
Make two auxiliary arrays, each with as many elements as the input array, called MIN and MAX.
Each element M of MAX contains the maximum of all the elements in the input from 0..M. Each element M of MIN contains the minimum of all the elements in the input from M..N-1.
For each element M of the input array, compare its value to the corresponding values in MIN and MAX. If INPUT[M] == MIN[M] and INPUT[M] == MAX[M] then M is a balancing point.
Building MIN takes N steps, and so does MAX. Testing the array then takes N more steps. This solution has O(N) complexity and finds all balancing points. In the case of sorted input every element is a balancing point.
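A short sketch of that construction (my code, untested):

#include <algorithm>
#include <vector>
using std::vector;

// Returns every balancing point of `in`, per the MIN/MAX arrays above.
vector<int> balancingPoints(const vector<int>& in) {
    int n = in.size();
    vector<int> maxUpTo(n), minFrom(n), result;
    maxUpTo[0] = in[0];                       // MAX: running maximum over 0..M
    for (int i = 1; i < n; i++) maxUpTo[i] = std::max(maxUpTo[i - 1], in[i]);
    minFrom[n - 1] = in[n - 1];               // MIN: running minimum over M..N-1
    for (int i = n - 2; i >= 0; i--) minFrom[i] = std::min(minFrom[i + 1], in[i]);
    for (int i = 0; i < n; i++)               // the test from the answer
        if (in[i] == maxUpTo[i] && in[i] == minFrom[i]) result.push_back(i);
    return result;
}

On the example array {4, 2, 7, 11, 9} this returns {2}, matching the question.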
Create a doubly-linked list such that the i-th node contains A[i] and i. Traverse this list while the elements grow (tracking the maximum of these elements). If some A[bad] < maxSoFar, it can't be a balancing point. Remove it and go backward, removing elements until you find A[good] < A[bad] or reach the head of the list. Continue (starting with maxSoFar as the maximum) until you reach the end of the list. Every element in the resulting list is a balancing point, and every balancing point is in this list. Complexity is O(n), since the maximum number of steps is performed for a descending array: n steps forward and n removals.
Update
Oh my, I confused "any" with "every" in problem definition :).
You can combine bmcnett's and Oli's answers to find all the poles as quickly as possible.
std::vector<int> i_poles;
i_poles.push_back(0);
int i_max = 0;
for (int i = 1; i < N; i++)
{
    while (!i_poles.empty() && A[i] < A[i_poles.back()])
    {
        i_poles.pop_back();
    }
    if (A[i] >= A[i_max])
    {
        i_max = i; // keep the running maximum current
        i_poles.push_back(i);
    }
}
You could use an array preallocated to size N if you wanted to avoid reallocations.