I have a few vectors.
I want to find all permutations of each vector.
It works reasonably well when the values are unique, but if there are repeated values it doesn't behave as I expect.
I have the following vectors
vector<string> present = {"Schaukelpferd","Schaukelpferd","Puppe","Puppe"};
vector<string> children = {"Jan","Tim","Alex","Daniel"};
vector<int> houses = {4,5,5,5};
I am sorting them before using next_permutation():
sort(present.begin(),present.end());
sort(children.begin(),children.end());
sort(houses.begin(),houses.end());
do {
    present_perm.push_back(present);
} while (next_permutation(present.begin(), present.end()));

do {
    children_perm.push_back(children);
} while (next_permutation(children.begin(), children.end()));

do {
    houses_perm.push_back(houses);
} while (next_permutation(houses.begin(), houses.end()));
children works fine, but present and houses don't work as expected.
children returns 24 permutations, as expected; present returns only 6 and houses returns only 4. I would expect all of them to return 24, because all vectors have 4 elements (4! = 24).
Consider the four integer values 4, 5, 5, 5. The only distinct permutations are 4, 5, 5, 5 and 5, 4, 5, 5 and 5, 5, 4, 5 and 5, 5, 5, 4, that is 4!/3! = 4. That's it. The three 5s have the same value, so they cannot be distinguished from each other; the algorithm doesn't keep track of which of them originally came before the others. The same thing applies to present: it contains only two distinct values, each appearing twice, which gives 4!/(2!·2!) = 6 distinct permutations. std::next_permutation enumerates distinct arrangements of the values, not arrangements of the original positions.
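A minimal sketch (mine, not from the question) that makes the counts visible: std::next_permutation over a sorted multiset enumerates each distinct arrangement exactly once.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Two values, each appearing twice -> 4!/(2!*2!) = 6 distinct permutations.
    std::vector<std::string> present = {"Schaukelpferd", "Schaukelpferd", "Puppe", "Puppe"};
    // One value appearing three times -> 4!/3! = 4 distinct permutations.
    std::vector<int> houses = {4, 5, 5, 5};

    std::sort(present.begin(), present.end());
    std::sort(houses.begin(), houses.end());

    int presentCount = 0, housesCount = 0;
    do { ++presentCount; } while (std::next_permutation(present.begin(), present.end()));
    do { ++housesCount; } while (std::next_permutation(houses.begin(), houses.end()));

    std::cout << presentCount << ' ' << housesCount << '\n';   // prints: 6 4
}

If you really do need all 4! = 24 assignments of presents to children (an assumption about the goal, not something stated in the question), one option is to permute an index vector {0, 1, 2, 3} instead and use it to index into present.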
Given an integer n and an array a, I need to find, for each i, 1 ≤ i ≤ n, how many elements on the left are less than or equal to a_i.
Example:
5
1 2 1 1 2
Output
0 1 1 2 4
I can do it in O(N^2), but I want to ask if there is any way to do it faster, since N is very large (N ≤ 10^6)?
You can use a segment tree, you just need to use a modified version called a range tree.
Range trees allow rectangle queries, so you can make the dimensions be index and value, and ask "What has value more than x, and index between 1 and n?"
Queries can be accomplished in O(log n) assuming certain common optimizations.
Plain O(N^2), on the other hand, is not really an option here: with N close to 10^6 that is on the order of 10^12 operations.
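To make the one-dimensional counting idea concrete, here is a sketch using a Fenwick (binary indexed) tree over the values. This is a common simplification I am adding for illustration, not the range-tree structure described above, and it assumes the values are small positive integers (coordinate-compress them otherwise).

#include <iostream>
#include <vector>

// Fenwick (binary indexed) tree over values 1..size: point update, prefix-sum query.
struct Fenwick {
    std::vector<int> tree;
    explicit Fenwick(int size) : tree(size + 1, 0) {}
    void add(int i) { for (; i < (int)tree.size(); i += i & -i) ++tree[i]; }
    int prefix(int i) const { int s = 0; for (; i > 0; i -= i & -i) s += tree[i]; return s; }
};

int main() {
    std::vector<int> a = {1, 2, 1, 1, 2};   // example from the question
    int maxValue = 2;                        // assumed small here; compress/shift otherwise
    Fenwick fw(maxValue);
    for (int x : a) {
        std::cout << fw.prefix(x) << ' ';    // elements already inserted that are <= x
        fw.add(x);
    }
    std::cout << '\n';                       // prints: 0 1 1 2 4
}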
I like to use a bigger array to explain, so let's consider the following array:
2, 1, 3, 4, 7, 6, 5, 8, 9, 10, 12, 0, 11, 13, 8, 9, 12, 20, 30, 60
The naïve way is to compare an element with all elements to its left. The naïve approach has a complexity of O(n^2), which makes it impractical for big arrays.
If you look at this problem closely you will find a pattern: rather than comparing an element with every element to its left, we can compare it against the first and last value of a range! Wait a minute, what is a range here?
The numbers can be viewed as ranges, and these ranges are created while traversing the array from left to right. The ranges are as follows:
[2], [1, 3, 4, 7], [6], [5, 8, 9, 10, 12], [0, 11, 13], [8, 9, 12, 20, 30, 60]
Let's traverse the array from left to right and see how these ranges are created, and how they reduce the effort of finding all smaller-or-equal elements to the left of an element.
Index 0 has no element to its left, which is why we start from index 1; at this point we don't have any range yet. Now we compare the values at index 0 and index 1. The previous value 2 is not less than or equal to the current value 1, so the numbers are no longer in ascending order. This comparison is very important: it tells us the previous range ends here, and at this point we get the first range [2], which contains a single element. The number of elements less than or equal to the left of the element at index 1 is zero.
Continuing the left-to-right traversal, at index 2 we compare with the previous element at index 1. Now 1 <= 3, which means a new range does not start here; we are still in the same range that started at index 1. To find how many elements are less than or equal, we first count how many elements are already in the current range before index 2 (here just one, the element at index 1). We also know one completed range, [2], at this point, and it contains one element that is less than or equal to 3. So the total number of less-than-or-equal elements to the left of the element at index 2 is 1 + 1 = 2. The same is done for the rest of the elements, and I would like to jump directly to index 6, which holds the number 5.
At index 6, we have already discovered three ranges, [2], [1, 3, 4, 7] and [6], but only two of them, [2] and [1, 3, 4, 7], need to be considered. How I know in advance, without comparing, that the range [6] is not useful will be explained at the end. To find the number of less-than-or-equal elements to the left: the first range [2] has only one element, and it is less than 5. The second range has first element 1, which is less than 5, but its last element is 7, which is greater than 5, so we cannot take the whole range; instead we find the upper bound of 5 in this range, which can be done by binary search because the range is sorted. That gives three elements, 1, 3, 4, which are less than or equal to 5. So the total from the two completed ranges is 4, and index 6 is the first element of the current range, so there is nothing to its left within the current range: total count = 1 + 3 + 0 = 4.
The last point of this explanation: we store the ranges in a tree structure keyed by their first value, where the value of each node is an array of (first index, last index) pairs. I will use std::map here. This tree structure is needed so that we can find all ranges whose first element is less than or equal to the current element in logarithmic time, by taking the upper bound. That is how I knew in advance, while processing index 6, that only two of the three known ranges needed to be considered.
The complexity of the solution is:
O(n) to traverse the array from left to right, plus
O(n (m + log m)) to find the upper bound in the std::map for each element and to compare against the last value of up to m ranges, where m is the number of ranges known at that point, plus
O(log q) to find the upper bound inside a range whose last element is greater than the current number, where q is the number of elements in that range (this may or may not be needed).
#include <iostream>
#include <map>
#include <vector>
#include <iterator>
#include <algorithm>

// Count the elements <= num among the completed ranges stored in rangeMap.
unsigned lessThanOrEqualCountFromRange(int num, const std::vector<int>& numList,
                                       const std::map<int, std::vector<std::pair<int, int>>>& rangeMap) {
    using const_iter = std::map<int, std::vector<std::pair<int, int>>>::const_iterator;
    unsigned count = 0;
    // Only ranges whose first value is <= num can contribute anything.
    const_iter upperBoundIt = rangeMap.upper_bound(num);
    for (const_iter it = rangeMap.cbegin(); upperBoundIt != it; ++it) {
        for (const std::pair<int, int>& range : it->second) {
            if (numList[range.second] <= num) {
                // The whole range is <= num.
                count += (range.second - range.first) + 1;
            } else {
                // The range is sorted, so binary search for the upper bound of num.
                auto rangeIt = numList.cbegin() + range.first;
                count += std::upper_bound(rangeIt, numList.cbegin() + range.second, num) - rangeIt;
            }
        }
    }
    return count;
}

std::vector<unsigned> lessThanOrEqualCount(const std::vector<int>& numList) {
    std::vector<unsigned> leftCountList;
    leftCountList.reserve(numList.size());
    leftCountList.push_back(0);  // nothing to the left of index 0

    std::map<int, std::vector<std::pair<int, int>>> rangeMap;
    std::vector<int>::const_iterator rangeFirstIt = numList.cbegin();
    for (std::vector<int>::const_iterator it = rangeFirstIt + 1, endIt = numList.cend(); endIt != it;) {
        std::vector<int>::const_iterator preIt = rangeFirstIt;
        // Stay in the current (ascending) range while the previous value is <= the current one.
        while (endIt != it && *preIt <= *it) {
            leftCountList.push_back((it - rangeFirstIt) +
                                    lessThanOrEqualCountFromRange(*it, numList, rangeMap));
            ++preIt;
            ++it;
        }
        if (endIt != it) {
            // The current range ends here; record it in the map, keyed by its first value.
            int rangeFirstIndex = rangeFirstIt - numList.cbegin();
            int rangeLastIndex = preIt - numList.cbegin();
            std::map<int, std::vector<std::pair<int, int>>>::iterator rangeEntryIt =
                rangeMap.find(*rangeFirstIt);
            if (rangeMap.end() != rangeEntryIt) {
                rangeEntryIt->second.emplace_back(rangeFirstIndex, rangeLastIndex);
            } else {
                rangeMap.emplace(*rangeFirstIt, std::vector<std::pair<int, int>>{
                                                    {rangeFirstIndex, rangeLastIndex}});
            }
            // The current element starts a new range, so only completed ranges contribute.
            leftCountList.push_back(lessThanOrEqualCountFromRange(*it, numList, rangeMap));
            rangeFirstIt = it;
            ++it;
        }
    }
    return leftCountList;
}

int main(int, char*[]) {
    std::vector<int> numList{2, 1, 3, 4, 7, 6, 5, 8, 9, 10, 12,
                             0, 11, 13, 8, 9, 12, 20, 30, 60};
    std::vector<unsigned> countList = lessThanOrEqualCount(numList);
    std::copy(countList.cbegin(), countList.cend(),
              std::ostream_iterator<unsigned>(std::cout, ", "));
    std::cout << '\n';
}
Output:
0, 0, 2, 3, 4, 4, 4, 7, 8, 9, 10, 0, 11, 13, 9, 11, 15, 17, 18, 19,
Yes, it can be done with a better time complexity than O(N^2), namely O(N log N). We can use a divide-and-conquer algorithm or a tree-based approach.
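As a rough illustration of the divide-and-conquer idea (a merge-sort-style count sketched by me, not code from the original answer): keep the indices of each half sorted by value and, while merging, count for each right-half element how many left-half elements are less than or equal to it. Every (earlier index, later index) pair is counted exactly once, at the level where the two indices fall into different halves, giving O(N log N) overall.

#include <algorithm>
#include <iostream>
#include <vector>

// For each position j in the right half, add to answer[j] the number of
// positions i in the left half with a[i] <= a[j], then merge the halves.
// idx holds original indices; within [lo, hi) it ends up sorted by value.
void solve(const std::vector<int>& a, std::vector<int>& idx,
           std::vector<long long>& answer, int lo, int hi) {
    if (hi - lo <= 1) return;
    int mid = (lo + hi) / 2;
    solve(a, idx, answer, lo, mid);
    solve(a, idx, answer, mid, hi);

    std::vector<int> merged;
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid || j < hi) {
        // Take from the left half while its value is <= the right half's value,
        // so ties are attributed to the (earlier) left-half element.
        if (j >= hi || (i < mid && a[idx[i]] <= a[idx[j]])) {
            merged.push_back(idx[i++]);
        } else {
            answer[idx[j]] += i - lo;   // all left-half elements taken so far are <= a[idx[j]]
            merged.push_back(idx[j++]);
        }
    }
    std::copy(merged.begin(), merged.end(), idx.begin() + lo);
}

int main() {
    std::vector<int> a = {1, 2, 1, 1, 2};          // example from the question
    std::vector<int> idx(a.size());
    for (int k = 0; k < (int)a.size(); ++k) idx[k] = k;
    std::vector<long long> answer(a.size(), 0);
    solve(a, idx, answer, 0, (int)a.size());
    for (long long v : answer) std::cout << v << ' ';   // prints: 0 1 1 2 4
    std::cout << '\n';
}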
I think O(N^2) should be the worst case for the direct approach, since for each element we traverse all the elements before it.
I have tried in O(N^2):
import java.io.*;
import java.lang.*;

public class GFG {
    public static void main(String[] args) {
        int a[] = {1, 2, 1, 1, 2};
        int count = 0;
        int b[] = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            // Count the elements to the left of a[i] that are <= a[i].
            for (int c = 0; c < i; c++) {
                if (a[i] >= a[c]) {
                    count++;
                }
            }
            b[i] = count;
            count = 0;
        }
        for (int j = 0; j < b.length; j++)
            System.out.print(b[j] + " ");
    }
}
I'm trying to remove every nth number from a list in a for loop, but something's gone wrong
There's a variable that determines what numbers to remove
If I had a list of 1 to 10, and I tried removing every second number, and then third
I should get 1, 3, 5, 7, 9 after removing every second number,
and 1, 3, 7, 9 after removing every third (only one number)
for i in range(repeatAmount):
    multiple = int(input())
    del numberVar[1::multiple]
    print(numberVar)
This code returns [1, 3, 5, 7, 9] after removing every second number, which is correct
But then returns [1, 5, 7] after removing every third number
I have no idea what's going wrong
Change this line
del numberVar[1::multiple]
to this line:
del numberVar[multiple-1::multiple]
Output: [1, 3, 7, 9]
In your loop, you used index 1 as the starting index both times, so in the second pass you deleted the elements at indices 1 and 4 of [1, 3, 5, 7, 9], i.e. the values 3 and 9.
I have two collections of elements. How can I pick out the duplicated values, put them into groups, and do it with the least number of comparisons? Preferably in C++.
For example, given
Array 1 = {1, 1, 2, 2, 3, 4, 5, 5, 1, 1, 2, 2, 4, 5, 8, …}
Array 2 = {2, 1, 1, 2, 2, 4, 7, 7, 8, 8, 2, 2, 4, 4, 8, …}.
First, I want to cluster the data.
Array 1 = { Group 1 = {1, 1, 1, 1, …}, Group 2 = {2, 2, 2, 2, …}, Group 3 = {3, …}, Group 4 = {4, 4, …}, Group 5 = {5, 5, 5, …}, Group 6 = {8, …} }.
Array 2 = { Group 1 = {1, 1, …}, Group 2 = {2, 2, 2, 2, 2 …}, Group 3 = {4, 4 ,4, …}, Group 4 = {7, 7, …}, Group 5 = {8, 8, 8 …} }.
Second, I want to match the data:
Group 1 of Array 1 == Group 1 of Array 2
Group 2 of Array 1 == Group 2 of Array 2
Group 4 of Array 1 == Group 3 of Array 2
Group 6 of Array 1 == Group 5 of Array 2
How can I solve this problem in C++? Please give me your brilliant tips.
Additionally, let me explain my problem in detail. I have two data sets computed from a stereo image pair: Array 1 is the data of the left camera and Array 2 is the data of the right camera. My final goal is to match the groups which have the same values, such as group 6 of Array 1 and group 5 of Array 2. The ordering of the data is not my concern; I just want to find the same values between the groups of the two arrays. (Would you recommend sorting the data first to reduce the number of comparisons?)
To solve this problem, should I use std::map for the data clustering and then compare the groups N! times (N: number of groups in Array 1 or 2)? Is this the best I can do?
I'd like to get your advice. Thank you.
My conclusion
My approach is to use the map container from the C++ STL.
Make 2 map containers (Array1_map, Array2_map).
Insert the value of each array element into the map as a key, and its index as the mapped value. (The data of both arrays is then stored in sorted order, without duplicate keys.)
Use the find() member function of the map container for the data matching.
After the matching, I was able to get the indexes in each array that have the matched (corresponding) keys.
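A rough sketch of that approach (my reading of the steps above, using std::map<int, std::vector<int>> so that duplicate values keep all of their indices, which is a small deviation from the single-index map described):

#include <iostream>
#include <map>
#include <vector>

int main() {
    std::vector<int> array1 = {1, 1, 2, 2, 3, 4, 5, 5, 1, 1, 2, 2, 4, 5, 8};
    std::vector<int> array2 = {2, 1, 1, 2, 2, 4, 7, 7, 8, 8, 2, 2, 4, 4, 8};

    // Cluster: value -> indices where it occurs (keeps duplicates).
    std::map<int, std::vector<int>> map1, map2;
    for (int i = 0; i < (int)array1.size(); ++i) map1[array1[i]].push_back(i);
    for (int i = 0; i < (int)array2.size(); ++i) map2[array2[i]].push_back(i);

    // Match: for each group (key) in array 1, look up the same key in array 2.
    for (const auto& group : map1) {
        auto it = map2.find(group.first);
        if (it == map2.end()) continue;   // value occurs only in array 1
        std::cout << "value " << group.first << ": "
                  << group.second.size() << " occurrence(s) in array 1, "
                  << it->second.size() << " in array 2\n";
    }
}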
Thank you for all your helpful answers!
The easiest way I can see to do this is to construct a histogram of each array and then compare the histograms. That is O(N log N) to convert each array to a histogram, where N is the array size, and O(U) to compare the histograms, where U is the number of unique elements in the array (the size of the map). That would look like:
int arr1[] = {...};
int arr2[] = {...};

std::map<int, int> arr1_histogram, arr2_histogram;

for (auto e : arr1)
    arr1_histogram[e]++;
for (auto e : arr2)
    arr2_histogram[e]++;

if (arr1_histogram == arr2_histogram)
    // true case
else
    // false case
vector<int> data = {3, 1, 5, 3, 3, 8, 7, 3, 2};
size_t median = data.size() / 2;   // added for completeness: median is meant to be the middle index
std::nth_element(data.begin(), data.begin() + median, data.end());
Will this always result in:
data = {less, less, 3, 3, 3, 3, larger, larger, larger} ?
Or would another possible outcome be:
data = {3, less, less, 3, 3, 3, larger, larger, larger} ?
I've tried it multiple times on my machine, which resulted in the equal values always being contiguous. But that's not proof ;).
What it's for:
I want to build a unique k-d tree, but I have duplicates in my vector. Currently I'm using nth_element to find the median value. The issue is selecting a unique/reconstructible median without having to traverse the vector again. If the median values were contiguous, I could choose a unique median without much traversing.
No. The documentation does not specify such behavior, and with a few minutes of experimentation, it was pretty easy to find a test case where the dupes weren't contiguous on ideone:
#include <iostream>
#include <algorithm>

int main() {
    int a[] = {2, 1, 2, 3, 4};
    std::nth_element(a, a + 2, a + 5);
    std::cout << a[1];
    return 0;
}
Output:
1
If the dupes were contiguous, that output would have been 2.
I have just tried several not-so-simple examples, and on the third got non-contiguous output.
Program
#include <vector>
#include <iostream>
#include <algorithm>

int main() {
    std::vector<int> a = {1, 3, 3, 2, 1, 3, 5, 5, 5, 5};
    std::nth_element(a.begin(), a.begin() + 5, a.end());
    for (auto v : a) std::cout << v << " ";
    std::cout << std::endl;
}
with gcc 4.8.1 under Linux, with std=c++11, gives me output
3 1 1 2 3 3 5 5 5 5
while the n-th element is 3.
So no, the elements are not always contiguous.
I also think an even simpler way, which needs no thinking about a good test case, is to generate long random arrays with many duplicate elements and check whether the property holds. I think it will break on the first or second attempt.
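Along the lines of that suggestion, here is a quick randomized check (my own sketch, not a proof): generate arrays with many duplicates, run nth_element, and report the first case where the elements equal to the nth value are not contiguous.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> value(0, 4);   // small value range -> many duplicates

    for (int attempt = 0; attempt < 1000; ++attempt) {
        std::vector<int> a(20);
        for (int& x : a) x = value(rng);

        const auto n = a.size() / 2;
        std::nth_element(a.begin(), a.begin() + n, a.end());
        const int median = a[n];

        // Span from the first to one past the last occurrence of the median.
        auto first = std::find(a.begin(), a.end(), median);
        auto last  = a.begin() + (a.rend() - std::find(a.rbegin(), a.rend(), median));
        // If the span is longer than the number of occurrences, some other value
        // sits between two occurrences, i.e. the duplicates are not contiguous.
        if (last - first != std::count(a.begin(), a.end(), median)) {
            std::cout << "non-contiguous duplicates after " << attempt + 1 << " attempt(s):\n";
            for (int x : a) std::cout << x << ' ';
            std::cout << '\n';
            return 0;
        }
    }
    std::cout << "no counterexample found in 1000 attempts\n";
}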
My question is whether or not there is a single "correct" heap. I have an assignment asking me to do a heap sort, but first build a heap using an existing array. If I look through the grader code, it shows that there is one exact expected answer. The way I implemented the heap build I get a slightly different answer, but as far as I know it is by definition a heap and therefore correct.
The "correct" array order is
{15, 12, 6, 11, 10, 2, 3, 1, 8}
but I get
{15, 12, 10, 11, 2, 6, 3, 1, 8}
The original vector is
{2, 8, 6, 1, 10, 15, 3, 12, 11}
void HeapSort::buildHeap(std::vector<CountedInteger>& vector)
{
    std::vector<CountedInteger> temp;
    for (int i = 0; i < vector.size(); i++)
    {
        temp.push_back(vector[i]);
        fixDown(temp, i);
    }
    vector.swap(temp);
    for (int i = 0; i < vector.size(); i++)
    {
        std::cout << vector[i] << std::endl;
    }
}

void HeapSort::sortHeap(std::vector<CountedInteger>& vector)
{
}

inline unsigned int HeapSort::p(int i)
{
    return ((i - 1) / 2);
}

// Sifts the newly appended element up towards the root until the parent is larger.
void HeapSort::fixDown(std::vector<CountedInteger>& vector, int node)
{
    if (p(node) == node) return;
    if (vector[node] > vector[p(node)])
    {
        CountedInteger temp = vector[node];
        vector[node] = vector[p(node)];
        vector[p(node)] = temp;
        fixDown(vector, p(node));
    }
}
There are many possible ways to create a max-heap from an input. You give the example:
15, 12, 10, 11, 2, 6, 3, 1, 8
15
12 10
11 2 6 3
1 8
It fulfills the heap criterion, so it is a correct max-heap. The other example is:
15, 12, 6, 11, 10, 2, 3, 1, 8
15
12 6
11 10 2 3
1 8
This also fulfills the heap criterion, so it is also a correct max-heap.
Max-heap criterion: Each node is greater than any of its child nodes.
A simpler example is 1, 2, 3, for which there are two heaps,
  3        3
 / \      / \
1   2    2   1
Creating a heap out of an array is definitely an operation that can result in multiple different but valid heaps.
If you look at a trivial example, it is obvious that at least some subtrees of a node could switch positions. In the given example, 2 and 3 (the children of 6) could switch positions, and so could 1 and 8 (the children of 11). If the heap has minimum and maximum depth equal, then the subtrees of any node can switch positions.
If your grader is automatic, it should be implemented in a way to check the heap property and not the exact array. If your grader is a teacher, you should formally prove the correctness of your heap in front of them, which is trivial.
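For the automatic-grader case, this is roughly what "check the heap property, not the exact array" could look like; std::is_heap already does it, and the hand-written loop below just spells out the criterion (a real grader would additionally check that the result is a permutation of the input).

#include <algorithm>
#include <iostream>
#include <vector>

// Max-heap criterion: every node is >= each of its children.
bool isMaxHeap(const std::vector<int>& a) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::size_t left = 2 * i + 1, right = 2 * i + 2;
        if (left  < a.size() && a[i] < a[left])  return false;
        if (right < a.size() && a[i] < a[right]) return false;
    }
    return true;
}

int main() {
    std::vector<int> expected = {15, 12, 6, 11, 10, 2, 3, 1, 8};    // the grader's heap
    std::vector<int> produced = {15, 12, 10, 11, 2, 6, 3, 1, 8};    // the question's heap

    std::cout << isMaxHeap(expected) << ' '
              << isMaxHeap(produced) << '\n';                        // prints: 1 1
    // std::is_heap gives the same verdict for the produced array:
    std::cout << std::is_heap(produced.begin(), produced.end()) << '\n';   // prints: 1
}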