This question already has answers here:
Number of all increasing subsequences in given sequence?
(7 answers)
Closed 8 years ago.
Given an array A of size N, I need to count triplets (i,j,k) such that:
Condition 1 : i < j < k
Condition 2 : A[i] > A[j] > A[k]
I know an O(N^3) solution. Can there be an O(N) or O(N log N) solution to this problem, since N can be up to 100000?
Example: Let N=4 and the array be [4,3,2,1]; then the answer is 4, as {4,3,2}, {4,3,1}, {4,2,1} and {3,2,1} are all the possible triplets.
How to find this count for given N and array A?
My Approach :
int n;
cin >> n;
vector<int> A(n);
for (int i = 0; i < n; i++) {
    cin >> A[i];
}
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        for (int k = j + 1; k < n; k++) {
            if (A[i] > A[j] && A[j] > A[k]) {
                count++;
            }
        }
    }
}
cout << count << "\n";
First, sort the array while maintaining the index of each element.
class Node {
    int index, val;
}
To compare two nodes, we first compare their values. If the values are equal, we compare their indices, considering a node greater if its index is smaller.
Now, processing the nodes in sorted order, we add each node's index into a Fenwick tree. Before adding index i, we query the tree for the count of indices that were added previously and are smaller than i. This is the number of indices j < i whose values are greater than the value at the current index.
Note that for elements with equal values, the sorting order above adds those with greater indices to the tree first, so they do not affect the count queried from the tree.
Apply a similar step in the reverse direction to obtain, for each index i, the number of elements with smaller value and index j > i.
For example:
If we have an array
{0(1) ,1(2) , 2(2) ,3(4) , 4(4) ,5(4) ,6(1)} //index(value)
After sort -> {5(4), 4(4), 3(4), 2(2), 1(2), 6(1), 0(1) }
Pseudo code
Node[] data;
sort(data);
Fenwick tree;
int[] less;
int[] more;
for (int i = 0; i < data.length; i++) {
    less[data[i].index] = tree.query(data[i].index);
    tree.add(data[i].index, 1);
}
tree.clear();
for (int i = data.length - 1; i >= 0; i--) {
    more[data[i].index] = tree.query(data.length) - tree.query(data[i].index);
    tree.add(data[i].index, 1);
}
int result = 0;
for (int i = 0; i < data.length; i++)
    result += more[i] * less[i];
Time complexity will be O(n log n).
Working Java code (FT is my Fenwick tree)
PrintWriter out;
Scanner in = new Scanner(System.in);
out = new PrintWriter(System.out);
int n = in.nextInt();
Node[] data = new Node[n];
for (int i = 0; i < n; i++) {
    data[i] = new Node(i + 1, in.nextInt());
}
FT tree = new FT(n + 2);
Arrays.sort(data, new Comparator<Node>() {
    @Override
    public int compare(Node o1, Node o2) {
        if (o1.val != o2.val) {
            return o2.val - o1.val;
        }
        return o2.index - o1.index;
    }
});
int[] less = new int[n];    // For each index: count of nodes with greater index and smaller value
int[] greater = new int[n]; // For each index: count of nodes with smaller index and greater value
for (int i = 0; i < n; i++) {
    greater[data[i].index - 1] = (int) tree.get(data[i].index);
    tree.update(data[i].index, 1);
}
tree = new FT(n + 2);
for (int i = n - 1; i >= 0; i--) {
    less[data[i].index - 1] = (int) (tree.get(n) - tree.get(data[i].index));
    tree.update(data[i].index, 1);
}
long total = 0;
for (int i = 0; i < n; i++) {
    total += (long) less[i] * greater[i]; // cast to long: the product can overflow int for large n
}
out.println(total);
out.close();
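For reference, the FT class isn't shown in the answer. A minimal Fenwick tree with the same update/get interface could look like the following; this is an assumed sketch in C++, not the author's actual class:

```cpp
#include <vector>

// Minimal Fenwick (binary indexed) tree: point update, prefix query.
// Positions are 1-based, matching the calls in the code above.
struct FT {
    std::vector<long long> t;
    FT(int n) : t(n + 1, 0) {}
    // add v at position i
    void update(int i, long long v) {
        for (; i < (int)t.size(); i += i & -i)
            t[i] += v;
    }
    // sum of positions 1..i
    long long get(int i) const {
        long long s = 0;
        for (; i > 0; i -= i & -i)
            s += t[i];
        return s;
    }
};
```

Both operations touch O(log n) tree nodes, which is what gives the overall O(n log n) bound.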
You can do this in O(n^2) pretty easily; you just need to keep track of how many smaller numbers come after each element:
vector<int> smallerNumbers(A.size());
for (int i = A.size() - 2; i >= 0; --i) {
    for (int j = i + 1; j < A.size(); ++j) {
        if (A[i] > A[j]) {
            smallerNumbers[i]++;
            count += smallerNumbers[j]; // note: count can exceed int range for large n; consider long long
        }
    }
}
For an O(nklogn) solution see my answer here: https://stackoverflow.com/a/28379003/2642059
Note that that answer is for an increasing sequence, while you're asking about a decreasing one.
To accomplish that you will need to reverse the ranking created by mapIndex. So simply reverse temp before creating mapIndex by swapping the partial_sort_copy line with this one:
partial_sort_copy(values.cbegin(), values.cend(), temp.rbegin(), temp.rend());
This question already has answers here:
How to remove all instances of a duplicate from a vector<int> [duplicate]
(6 answers)
Closed 2 years ago.
I am trying to get the sum of unique elements, but my output does not match the expected result.
//Prompted Input: [1,2,3,2]
//Expected output: 4
//Explanation: The unique elements are [1,3]
Below is my relevant code. One thing I tried was to set j to i for the nested loop; however, that changed nothing. Next, I took out the first if conditional and had the code do the sum after finding the unique numbers, but the output was 10. I'd be grateful if someone could point out where I'm messing up, because I know I'm close.
int sumOfUnique(vector<int>& nums) {
    int sum = 0;
    for (int i = 0; i < nums.size(); i++) {
        for (int j = 0; j < nums.size(); j++) {
            if (j == i) {
                sum += nums[i];
            }
            if (nums[i] == nums[j]) {
                break;
            }
        }
    }
    return sum;
}
You're close in that you have nested loops, but the content of the loops is not correct. The key is that you need to identify the unique elements; your current code doesn't do that.
Use the inner loop to identify whether an element is unique, and then, after the inner loop, add it to the sum if it is. Like this:
int sumOfUnique(vector<int>& nums) {
    int sum = 0;
    for (int i = 0; i < nums.size(); i++) {
        // count how many times nums[i] occurs
        int count = 0;
        for (int j = 0; j < nums.size(); j++)
            if (nums[i] == nums[j])
                ++count;
        if (count == 1)     // is nums[i] unique?
            sum += nums[i]; // add it to the sum if it is
    }
    return sum;
}
The trick is the extra variable count, used to work out whether a particular number is unique.
You can make this code clearer and more flexible by putting the uniqueness test into its own function. Like this:
bool isUnique(vector<int>& nums, int i) {
    // count how many times nums[i] occurs
    int count = 0;
    for (int j = 0; j < nums.size(); j++)
        if (nums[i] == nums[j])
            ++count;
    // return true if it occurs once only
    return count == 1;
}

int sumOfUnique(vector<int>& nums) {
    int sum = 0;
    for (int i = 0; i < nums.size(); i++) {
        if (isUnique(nums, i)) // is nums[i] unique?
            sum += nums[i];    // add it to the sum if it is
    }
    return sum;
}
It's good to split code into different functions, with each function solving one part of the puzzle. Now (for instance) you could replace isUnique with a different function and sum values in your vector based on some different criterion.
There are more efficient solutions than this using std::set, but I expect that the point of this exercise is to get you practising with loops and algorithms.
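As one concrete sketch of that hint, here is an O(n log n) version using std::multiset (an ordered container from the same family as std::set); the function name is just illustrative:

```cpp
#include <set>
#include <vector>

// Sum the elements that occur exactly once, in O(n log n).
// Illustrative sketch only, not part of the original answer.
int sumOfUniqueMultiset(const std::vector<int>& nums) {
    std::multiset<int> all(nums.begin(), nums.end());
    int sum = 0;
    // visit each distinct value once: upper_bound jumps past duplicates
    for (auto it = all.begin(); it != all.end(); it = all.upper_bound(*it))
        if (all.count(*it) == 1)
            sum += *it;
    return sum;
}
```

For the example input [1,2,3,2] this returns 4, matching the expected output.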
You can use std::map to create a frequency counter. After that, iterate through the map and check if a number only occurred once. If that's true, add that number to the result and afterward print out the final result.
int uniqueSum(vector<int> numbers)
{
    map<int, int> frequency;
    for (auto it = numbers.begin(); it != numbers.end(); it++)
    {
        int value = *it;
        // if the value already occurs in the map, add 1 to its counter;
        // otherwise set its counter to 1
        if (frequency.find(value) == frequency.end()) { frequency[value] = 1; }
        else { frequency[value]++; }
    }
    int sum = 0;
    for (auto it = frequency.begin(); it != frequency.end(); it++)
    {
        // if the element appears just once in the vector, add it to sum; else skip it
        if (it->second == 1) { sum += it->first; }
    }
    return sum;
}
You can read more about map here: https://www.cplusplus.com/reference/map/map/
And also the find() function: https://www.cplusplus.com/reference/map/map/find/
Use a map to store numbers you have seen, and if they repeat, mark as not-viable.
int sumOfUnique(std::vector<int>& nums)
{
    std::map<int, bool> seen;
    for (auto i : nums)
    {
        auto it = seen.find(i);
        if (it != seen.end()) // If already seen, set viability to false
        {
            it->second = false;
        }
        else { seen.insert({ i, true }); } // Not seen before, currently viable
    }
    int sum = 0;
    for (auto pair : seen)
    {
        if (pair.second) // If viable
        {
            sum += pair.first;
        }
    }
    return sum; // this return was missing in the original
}
I had a test, and there was a problem I still can't solve.
Given an array of numbers, where EACH ELEMENT is allowed at most K swaps, and only adjacent swaps, find the largest lexicographical order.
Ex:
Input
[7, 1, 2, 3, 4, 5, 6]
swapTime = 2
Output
[7, 3, 4, 1, 2, 6, 5]
At first I thought it was a modified bubble sort, but that was not correct. Any ideas?
Here's the pseudo code:
void findMaxNum(int num[], int swapTime) {
    int table[n];
    for (i = 0; i < n; ++i)
        table[i] = swapTime;
    for (i = 0; i < n-1; ++i)
        for (j = 0; j < n-i-1; ++j)
            if (table[j] != 0 && num[j] < num[j+1]) {
                swap(num[j], num[j+1]);
                swap(table[j], table[j+1]);
                table[j]--;
                table[j+1]--;
            }
}
You can do this with a max heap of size k+1, initially holding the first k+1 values, and a hash from each index to the leftmost legal element for that index (disregarding indices <= k).
Then we do the following for each index i in ascending order:
If hash[i] has a value, put it at i and remove it from the heap. If not, move the max elt from the heap to i and remove it from the hash. In either case, add the next elt from the array to the heap.
The hash guarantees that no element moves more than k to the right. The max heap selects the maximum legal element while guaranteeing that no element moves more than k to the left.
For lexicographical order you have to maximize the first position, then the 2nd, and so on. So you don't need to worry about spending swaps on a digit as long as it helps improve the current position (going left to right) and doesn't exceed k. Here is a solution (it also modifies the input, like your method):
public static void findMax(int[] num, int swapsPerElement) {
    int[] swaps = new int[num.length];
    for (int i = 0; i < num.length; i++) {
        if (swaps[i] == swapsPerElement)
            continue;
        int best = i;
        for (int j = i + 1; j < num.length && j - i <= swapsPerElement; j++) {
            if (swaps[j] == swapsPerElement)
                break; // cannot be swapped
            if (num[best] < num[j] && swapsPerElement - swaps[j] >= j - i)
                best = j;
        }
        for (int j = best; j > i; j--) { // swap num[j] and num[j-1], updating swap counts
            int t = swaps[j] + 1;
            swaps[j] = swaps[j - 1] + 1;
            swaps[j - 1] = t;
            t = num[j];
            num[j] = num[j - 1];
            num[j - 1] = t;
        }
    }
}
Given n, the number of array elements, and arr[n], the array of numbers, it is required to find the maximum number of sub-arrays the array can be divided into such that GCD(a,b)=1 for every a and b that belong to different sub-arrays.
Eg:
5
2 3 4 5 6
Ans: 2 ----> {(2,3,4,6),(5)}
Every other attempt to divide it further will not satisfy the conditions.
My Approach:
1. Sort the array.
2. Keep calculating the lcm of the elements.
3. Increase the counter every time the gcd of the element and lcm of elements before is 1.
int main()
{
    int n;
    cin >> n;
    long long int arr[n];
    for (int i = 0; i < n; ++i)
        cin >> arr[i];
    sort(arr, arr + n);
    long long int ans = 1, l = arr[n-1];
    for (int i = n-2; i >= 0; i--)
    {
        if (gcd(l, arr[i]) == 1)
            ans++;
        l = lcm(l, arr[i]);
    }
    cout << ans << endl;
    return 0;
}
After my submission was judged wrong multiple times, I am unsure whether my solution is correct. Since the limit for n was 10^6 and array elements were up to 10^7, another reason the solution could fail is that the LCM can exceed the long long limit. Is there any other solution possible? Or is there a mistake in the present approach?
I think this is the problem you are referring to: https://www.codechef.com/problems/CHEFGRUP
My approach is as follows (I got Time Limit Exceeded):
Step - 1: Calculate all the primes in the range [1, 10^7].
This can be done using the Sieve of Eratosthenes, and the complexity will be O(n log log n), where n can be up to 10^7.
Step - 2: Use the vector of primes calculated above to find prime factorization of all the numbers in the array.
This can be implemented very efficiently once we have all the required primes.
The point to note in this step: suppose two numbers have a common prime in their factorizations; then these two elements cannot be in different sub-arrays, because the GCD of that pair would not be 1 (as the question requires). Hence, all such pairs have to be in the same sub-array. How to achieve this?
Step - 3: Use Disjoint Set Data Structure.
We can create a disjoint set over all the prime numbers, so the number of sets in the beginning is the number of primes. Then, during each factorization, we join all the primes dividing the number into the same group as the original number. This is repeated for all the numbers.
Also, we have to check whether some primes were even needed in the first place, because before this step we just assumed there are as many sets as primes in the range, but some may be unused. This can be checked by traversing the array once and counting the number of distinct representatives. That count is our answer.
My code:
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

int prime[(int)1e7+10] = {0};

struct union_find {
    std::vector<int> parent, rank;

    // Constructor to initialise 'parent' and 'rank' vectors.
    union_find(int n) {
        parent = std::vector<int>(n);
        rank = std::vector<int>(n, 0); // initialise rank vector with 0.
        for (int i = 0; i < n; i++)
            parent[i] = i;
    }

    // Find with path-compression heuristic.
    int find_(int a) {
        if (a == parent[a])
            return a;
        return parent[a] = find_(parent[a]);
    }

    // Union by rank to keep the depth of the tree as shallow as possible.
    void union_(int a, int b) {
        int aa = find_(a), bb = find_(b);
        if (rank[aa] < rank[bb])
            parent[aa] = bb;
        else
            parent[bb] = aa;
        if (rank[aa] == rank[bb])
            ++rank[aa];
    }
};

union_find ds(1e7+10);

int main() {
    int n;
    int sq = sqrt(1e7+10);
    for (int i = 4; i < 1e7+10; i += 2)
        prime[i] = 1;
    for (int i = 3; i <= sq; i += 2) {
        if (!prime[i]) {
            for (int j = i*i; j < 1e7+10; j += i)
                prime[j] = 1;
        }
    }
    vector<int> primes;
    primes.push_back(2);
    for (int i = 3; i < 1e7+10; i += 2) {
        if (!prime[i])
            primes.push_back(i);
    }
    scanf("%d", &n);
    int a[n];
    for (int i = 0; i < n; i++) {
        scanf("%d", &a[i]);
    }
    for (int i = 0; i < n; i++) {
        int temp = a[i];
        vector<int> divisors;
        for (int j = 0; j < primes.size(); j++) {
            if (primes[j] > temp)
                break;
            if (temp % primes[j] == 0) {
                divisors.push_back(primes[j]);
                while (temp % primes[j] == 0) {
                    temp /= primes[j];
                }
            }
        }
        if (temp > 1) // any leftover factor is itself prime (was 'temp > 2' in the original)
            divisors.push_back(temp);
        for (int k = 1; k < divisors.size(); k++)
            ds.union_(divisors[k], divisors[k-1]);
        if (divisors.size() > 0)
            ds.union_(divisors[0], a[i]);
    }
    set<int> unique;
    for (int i = 0; i < n; i++) {
        int x = ds.find_(a[i]);
        unique.insert(x);
    }
    printf("%zu\n", unique.size()); // size() is size_t, so %zu rather than %d
    return 0;
}
This question already has answers here:
Find a pair of elements from an array whose sum equals a given number
(33 answers)
Closed 8 years ago.
So I'm trying to solve the problem of finding two numbers in an array such that they add up to a specific target number.
The simplest way to solve it (this gives a Time Limit error, because it takes O(n^2) time):
vector<int> res, temp = numbers;
sort(temp.begin(), temp.end());
for (int i = 0; i < numbers.size(); i++)
{
    for (int j = i + 1; j < numbers.size(); j++)
    {
        if (numbers[i] + numbers[j] == target)
        {
            res.push_back(i + 1);
            res.push_back(j + 1);
            return res;
        }
    }
}
I've also tried sorting the array before searching and then using two pointers (now it takes O(n^2 log n) time, but it still gives me a Time Limit error):
vector<int> twoSum(vector<int> &numbers, int target) {
    vector<int> res, temp = numbers;
    sort(temp.begin(), temp.end());
    int i = 0, j = numbers.size() - 1;
    while (i < j)
    {
        if (temp[i] + temp[j] == target)
        {
            res.push_back(i);
            res.push_back(j);
            break;
        }
        if (temp[i] + temp[j] < target)
            i++;
        if (temp[i] + temp[j] > target)
            j--;
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        if (numbers[i] == temp[res[0]])
        {
            res[0] = i + 1;
            break;
        }
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        if (numbers[i] == temp[res[1]])
        {
            res[1] = i + 1;
            break;
        }
    }
    return res;
}
So I would like to know how it is possible to solve this problem using only O(n) time?
I've heard something about hashes and maps but don't know what they are or how to use them.
The hash table approach is as follows: (using unordered_set in C++11)
Given a target sum S...
For each element x:
Check if S - x exists in the hash table - if so, we have our 2 numbers x and S - x.
Insert x into the hash table.
This runs in expected O(n) time.
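A sketch of that approach in C++, using std::unordered_map rather than unordered_set so the 1-based indices the question wants can be recovered (the function name is illustrative):

```cpp
#include <unordered_map>
#include <vector>

// Expected O(n): one pass, one hash lookup per element.
// Returns 1-based indices of a pair summing to target, or {} if none exists.
std::vector<int> twoSumHash(const std::vector<int>& numbers, int target) {
    std::unordered_map<int, int> seen; // value -> index where it was seen
    for (int i = 0; i < (int)numbers.size(); ++i) {
        auto it = seen.find(target - numbers[i]); // is S - x already stored?
        if (it != seen.end())
            return { it->second + 1, i + 1 };
        seen[numbers[i]] = i; // store x for later lookups
    }
    return {};
}
```

For example, with numbers = {2, 7, 11, 15} and target = 9, this returns {1, 2}.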
Also, your approach is only O(n log n): that's O(n log n) for the sort and O(n) for each of the while loop and the two for loops, giving O(n log n + n) = O(n log n) in total. (This assumes .size() is O(1), which it is for std::vector.)
Although I'm not too sure what the last two for loops are doing there; when you break out of the while loop, you already have your two numbers.
I'm trying to implement radix sort with base 256 using lists. The sort works fine, but it takes too long to sort big arrays; in addition, the complexity should be linear, O(n), but I'm not seeing that when I time the sort. Here is my code:
Insert Function:
// insert element pointed to by x at the back of list ls
void insert(Item * ls, Item * x)
{
    x->prev = ls->prev;
    ls->prev->next = x;
    x->next = ls;
    ls->prev = x;
}
Delete Function:
// delete link in list whose address is x
void delete_x(Item * x)
{
    x->prev->next = x->next;
    x->next->prev = x->prev;
    delete x; // allocated with 'new Item', so plain delete, not delete[]
}
Radix_Sort Function:
void radix_sort_256(unsigned int *arr, unsigned int length)
// Radix sort implementation with base 256
{
    int num_of_digits = 0, count = 0, radix_num = 0;
    unsigned int largest = 0;
    Item List[256]; // 256 bucket sentinels (base 256)
    for (int j = 0; j < 256; j++) // sentinel init for each bucket
    {
        List[j].key = 0;
        List[j].next = &List[j];
        List[j].prev = &List[j];
    }
    for (unsigned int i = 0; i < length; i++) // find the largest number in the array
    {
        if (arr[i] > largest)
            largest = arr[i];
    }
    while (largest != 0) // count the base-256 digits of the largest number
    {
        num_of_digits++;
        largest = largest >> 8;
    }
    for (int i = 0; i < num_of_digits; i++)
    {
        Item *node;
        for (unsigned int j = 0; j < length; j++)
        {
            // create a node for each array element and insert it
            // into the bucket selected by the current digit
            node = new Item;
            node->next = NULL;
            node->prev = NULL;
            node->key = arr[j];
            radix_num = (arr[j] >> (8*i)) & 0xFF;
            insert(&List[radix_num], node);
        }
        for (int m = 0; m < 256; m++) // copy keys back to the array in bucket order
        {
            while (List[m].next != &List[m])
            {
                arr[count] = List[m].next->key;
                delete_x(List[m].next); // delete the Item after the insertion
                count++;
            }
        }
        count = 0;
    }
}
Main:
int main()
{
    Random r;
    int start, end;
    srand((unsigned)time(NULL));
    // Set up dynamic arrays of growing sizes, fill them with random
    // numbers from [0, 2147483646], call the radix sort and time it.
    for (unsigned int i = 10000; i <= 1280000; i *= 2)
    {
        unsigned int *arr = new unsigned int[i];
        for (unsigned int j = 0; j < i; j++)
        {
            arr[j] = r.Next() - 1;
        }
        start = clock();
        radix_sort_256(arr, i);
        end = clock();
        cout << i;
        cout << " " << end - start;
        if (Sort_check(arr, i))
            cout << "\t\tArray is sorted" << endl;
        else
            cout << "\t\tArray not sorted" << endl;
        delete [] arr;
    }
    return 0;
}
Can anyone see whether I'm doing unnecessary work that takes a great deal of time to execute?
Complexity is a difficult beast to master, because it is polymorphic.
When we speak about the complexity of an algorithm, we generally simplify it and express it in terms of what we think is the bottleneck operation.
For example, when evaluating sorting algorithms, the complexity is expressed as the number of comparisons; however, should your memory be a tape [1] instead of RAM, the true bottleneck is memory access, and therefore a quicksort, O(N log N), can end up slower than a bubble sort, O(N ** 2).
Here, your algorithm may be optimal, but its implementation seems lacking: there is a lot of memory allocation/deallocation going on, for example. Therefore, it may well be that you did not identify the bottleneck operation correctly, and all talk of linear complexity is moot since you are not measuring the right thing.
[1] because tapes take time to move from one cell to another proportional to the distance between those cells, so a quicksort algorithm that keeps jumping around memory ends up doing a lot of back and forth, whilst a bubble sort algorithm just runs the length of the tape N times (max).
Radix sort with base 256 could easily look something like this.
void sort(int *a, int n)
{
    int i, *b, exp = 1, max = 0;
    // find the maximum to know how many digit passes are needed
    for (i = 0; i < n; i++) {
        if (a[i] > max)
            max = a[i];
    }
    b = (int*)malloc(n * sizeof(int));
    while (max / exp > 0) {
        int box[256] = {0};
        // counting sort on the current base-256 digit
        for (i = 0; i < n; i++)
            box[a[i] / exp % 256]++;
        for (i = 1; i < 256; i++)
            box[i] += box[i - 1];
        for (i = n - 1; i >= 0; i--) // iterate backwards to keep the sort stable
            b[--box[a[i] / exp % 256]] = a[i];
        for (i = 0; i < n; i++)
            a[i] = b[i];
        exp *= 256;
    }
    free(b);
}