I think the set_intersection STL function described here: http://www.cplusplus.com/reference/algorithm/set_intersection/
is not really a set intersection in the mathematical sense. Suppose that in the example given there I change the lines:
int first[] = {5,10,15,20,20,25};
int second[] = {50,40,30,20,10,20};
I would like to get 10 20 20 as a result. But I only get unique answers.
Is there a true set intersection in STL?
I know it's possible with a combination of merges and set_differences, btw. Just checking if I'm missing something obvious.
I would like to get 10 20 20 as a result. But I only get unique answers. Is there a true set intersection in STL?
std::set_intersection works how you want.
You probably get the wrong answer because you didn't update the code properly. If you change the sets to have 6 elements you need to update the lines that sort them:
std::sort (first,first+5); // should be first+6
std::sort (second,second+5); // should be second+6
And also change the call to set_intersection to use first+6 and second+6. Otherwise you only sort the first 5 elements of each set, and only get the intersection of the first 5 elements.
Obviously if you don't include the repeated value in the input, it won't be in the output. If you change the code correctly to include all the input values it will work as you want (live example).
cplusplus.com is not a good reference; if you look at http://en.cppreference.com/w/cpp/algorithm/set_intersection you will see that it clearly states the behaviour for repeated elements:
If some element is found m times in [first1, last1) and n times in [first2, last2), the first std::min(m, n) elements will be copied from the first range to the destination range.
Even the example at cplusplus.com is bad; it would be simpler, and harder to introduce your bug, if it were written in idiomatic modern C++:
#include <iostream>     // std::cout
#include <algorithm>    // std::set_intersection, std::sort
#include <iterator>     // std::begin, std::end, std::back_inserter
#include <vector>       // std::vector

int main () {
    int first[] = {5,10,15,20,20,25};
    int second[] = {50,40,30,20,10,20};

    std::sort(std::begin(first), std::end(first));
    std::sort(std::begin(second), std::end(second));

    std::vector<int> v;
    std::set_intersection(std::begin(first), std::end(first),
                          std::begin(second), std::end(second),
                          std::back_inserter(v));

    std::cout << "The intersection has " << v.size() << " elements:\n";
    for (auto i : v)
        std::cout << ' ' << i;
    std::cout << '\n';
}
This automatically handles the right number of elements, without ever having to explicitly say 5 or 6 or any other magic number, and without having to create initial elements in the output vector and then resize it to remove them again.
set_intersection requires both ranges to be sorted. In the data you've given, second is not sorted.
If you sort it first, you should get your expected answer.
Related
I have been working on a chess engine for some time now. To improve the engine, I wrote some code which loads chess positions from memory into some tuner code. I have around 1.85B FENs on my machine, which adds up to 40 GB (24 bytes per position).
After loading, I end up with a vector of positions:
#include <bitset>
#include <cstddef>
#include <vector>

struct Position {
    std::bitset<8*24> bits{};
};

int main() {
    std::vector<Position> positions{};
    // mimic some data loading
    for (std::size_t i = 0; i < 1850000000; i++) {
        positions.push_back(Position{});
    }
    // ...
}
The data is organised in the following way:
The positions are taken from games, where consecutive positions are separated by just a few moves. Usually about 40-50 consecutive positions come from the same game/line and are therefore somewhat similar.
Eventually I will read 16384 positions within a single batch, and ideally none of those positions should come from the same game. Therefore I do some initial shuffling before using the data.
My current shuffling method is this:
auto rng = std::default_random_engine {};
std::shuffle(std::begin(positions), std::end(positions), rng);
Unfortunately this takes quite some time (about 1-2 minutes). Since I don't require perfect shuffles, I assume simpler shuffles exist.
My second approach was:
// std::size_t avoids overflowing an int in i * 16384
for (std::size_t i = 0; i < positions.size(); i++) {
    std::swap(positions[i], positions[(i * 16384) % positions.size()]);
}
which ensures that no positions from the same game end up within a single batch, since the picked entries are evenly spaced 16384 apart.
I was wondering if there is an even simpler, faster solution, especially considering that the modulo operator requires quite some clock cycles.
I am happy for any "trivial" solution.
Greetings
Finn
There is a tradeoff to be made: shuffling a std::vector<size_t> of indices can be expected to be cheaper than shuffling a std::vector<Position>, at the cost of an indirection when accessing the Positions via shuffled indices. Actually, the example on cppreference for std::iota does something along that line (it uses iterators):
#include <algorithm>
#include <iostream>
#include <list>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    std::list<int> l(10);
    std::iota(l.begin(), l.end(), -4);

    std::vector<std::list<int>::iterator> v(l.size());
    std::iota(v.begin(), v.end(), l.begin());

    std::shuffle(v.begin(), v.end(), std::mt19937{std::random_device{}()});

    std::cout << "Contents of the list: ";
    for (auto n : l) std::cout << n << ' ';
    std::cout << '\n';

    std::cout << "Contents of the list, shuffled: ";
    for (auto i : v) std::cout << *i << ' ';
    std::cout << '\n';
}
Instead of shuffling the list directly, a vector of iterators (a std::vector of indices would work as well) is shuffled, and std::shuffle only needs to swap iterators (/indices) rather than the more costly actual elements (in the example the "costly to swap" elements are just ints).
For a std::list I don't expect a big difference between iterating in order or iterating via shuffled iterators. On the other hand, for a std::vector I do expect a significant impact. Hence, I would shuffle indices, then rearrange the vector once, and profile to see which performs better.
PS: As noted in comments, std::shuffle is already the optimal algorithm to shuffle a range of elements. However, note that it swaps each element twice on average (possible implementation from cppreference):
for (diff_t i = n-1; i > 0; --i) {
    using std::swap;
    swap(first[i], first[D(g, param_t(0, i))]);
}
On the other hand, shuffling the indices and then rearranging the vector only requires copying/moving each element once (when additional memory is available).
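To make that concrete, here is a minimal sketch of the second option, assuming the Position struct from the question (the function name shuffle_via_indices is mine): shuffle a cheap vector of indices, then gather into a fresh vector so each Position is moved exactly once, at the cost of a second large allocation.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Sketch: shuffle indices (cheap), then move each Position exactly once.
// Requires enough memory for a second vector of the same size.
std::vector<Position> shuffle_via_indices(std::vector<Position>& positions)
{
    std::vector<std::size_t> idx(positions.size());
    std::iota(idx.begin(), idx.end(), std::size_t{0});   // 0, 1, 2, ...
    std::shuffle(idx.begin(), idx.end(),
                 std::mt19937_64{std::random_device{}()});

    std::vector<Position> out;
    out.reserve(positions.size());
    for (std::size_t i : idx)
        out.push_back(std::move(positions[i]));          // one move per element
    return out;
}

Whether this beats an in-place std::shuffle depends on element size and memory bandwidth, hence the advice to profile.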
Randomness won't guarantee that batches don't contain positions from the same game, which is what you wanted to avoid. I propose the following pseudo-shuffle, which does prevent samples in a batch from coming from the same game (given a sufficiently large population):
let N be the length of the longest game + 1
let E be an iterator to the end
let i be a random index
while E != begin
    if i >= E - begin
        i %= E - begin
        --N
    swap the elements at i and std::prev(E)
    decrement E
    i += N
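A direct C++ transcription of this pseudocode might look like the following sketch; pseudo_shuffle is a hypothetical name, T stands in for the question's Position, and as stated above it assumes the population is large relative to the longest game:

#include <iterator>
#include <random>
#include <utility>
#include <vector>

// Sketch of the pseudo-shuffle: stride through the vector with step N,
// swapping each visited element to the shrinking back of the range.
template <typename T>
void pseudo_shuffle(std::vector<T>& v, std::ptrdiff_t longest_game)
{
    if (v.empty()) return;
    std::ptrdiff_t n = longest_game + 1;                  // N
    auto e = v.end();                                     // E
    std::mt19937_64 rng{std::random_device{}()};
    std::ptrdiff_t i = std::uniform_int_distribution<std::ptrdiff_t>(
        0, static_cast<std::ptrdiff_t>(v.size()) - 1)(rng);

    while (e != v.begin()) {
        if (i >= e - v.begin()) {                         // wrapped around: shrink N
            i %= e - v.begin();
            --n;
        }
        std::swap(v[i], *std::prev(e));                   // place element i at the back
        --e;
        i += n;
    }
}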
Can someone explain the difference between range-v3's view adaptors drop and drop_exactly?
One difference I've observed is that if the number of elements in the range that is piped to these views is less than the argument to the view adaptors, drop seems to do the right thing, while drop_exactly seems to invoke UB.
When the argument is less than the number of elements in the range that is piped to these views, they both seem to work the same:
#include <iostream>
#include <vector>
#include <range/v3/all.hpp>

namespace rv = ranges::views;

int main()
{
    std::vector<int> v { 1, 2, 3, 4, 5};

    for (int i : v | rv::drop(3))
        std::cout << i;                  // prints 45

    for (int i : v | rv::drop(7))
        std::cout << i;                  // prints nothing

    for (int i : v | rv::drop_exactly(3))
        std::cout << i;                  // prints 45

    for (int i : v | rv::drop_exactly(7))
        std::cout << i;                  // prints garbage and crashes
}
Here's the code.
From the documentation for drop_exactly:
Given a source range and an integral count, return a range consisting
of all but the first count elements from the source range. The
source range must have at least that many elements.
While the documentation for drop states:
Given a source range and an integral count, return a range consisting
of all but the first count elements from the source range, or an
empty range if it has fewer elements.
(emphasis added)
I'm guessing that drop_exactly avoids bounds checks and therefore has the potential to be slightly more performant at the cost of maybe running past the end of the piped-in container, while drop apparently performs bounds checks to make sure you don't.
This is consistent with what you see. If you print everything from begin()+7 up to begin()+5 (aka end()) of a std::vector, and the loop's termination condition is implemented with != instead of <, then you keep printing the junk data that sits in the space allocated by the vector until, at some point, you run past the allocated chunk and the operating system steps in and segfaults your binary.
So, if you know the container has at least as many entries as you wish to drop, use the faster drop_exactly; otherwise use drop.
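As a minimal sketch (reusing v and the rv alias from the question's code), that decision could look like:

if (v.size() >= 7) {
    for (int i : v | rv::drop_exactly(7))   // precondition satisfied, no bounds checks needed
        std::cout << i;
} else {
    for (int i : v | rv::drop(7))           // checked adaptor: yields an empty range here
        std::cout << i;
}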
I think this is a fairly common question but I didn't find any answer for this using hashing in C++.
I have two arrays, both of the same lengths, which contain some elements, for example:
A={5,3,5,4,2}
B={3,4,1,2,1}
Here, the uncommon elements are: {5,5,1,1}
I have tried this approach: iterating over both arrays with a while loop after sorting them:
// assumes a and b are sorted, and that i, j, k all start at 0
while (i < n && j < n) {
    if (a[i] < b[j])
        uncommon[k++] = a[i++];
    else if (a[i] > b[j])
        uncommon[k++] = b[j++];
    else {
        i++;
        j++;
    }
}
while (i < n && a[i] != b[j-1])
    uncommon[k++] = a[i++];
while (j < n && b[j] != a[i-1])
    uncommon[k++] = b[j++];
and I am getting the correct answer with this. However, I want a better approach in terms of time complexity since sorting both arrays every time might be computationally expensive.
I tried to do hashing but couldn't figure it out entirely.
To insert elements from arr1[]:
set<int> uncommon;
for (int i = 0; i < n1; i++)
    uncommon.insert(arr1[i]);
To compare arr2[] elements:
for (int i = 0; i < n2; i++)
    if (uncommon.find(arr2[i]) != uncommon.end())
Now, what I am unable to do is to put into the uncommon array only those elements that are uncommon between the two arrays.
Thank you!
First of all, std::set does not have anything to do with hashing. Sets and maps are ordered containers; implementations may differ, but most likely it is a binary search tree. Whatever you do, you won't get faster than O(n log n) with them, which is the same complexity as sorting.
If you're fine with O(n log n) and sorting, I'd strongly advise just using the std::set_symmetric_difference algorithm (https://en.cppreference.com/w/cpp/algorithm/set_symmetric_difference); it requires two sorted ranges.
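For the arrays from the question, that suggestion might look like this minimal sketch (sorting the inputs first, since the algorithm requires sorted ranges; duplicates are handled with multiset semantics):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> a{5,3,5,4,2};
    std::vector<int> b{3,4,1,2,1};
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());

    std::vector<int> uncommon;
    std::set_symmetric_difference(a.begin(), a.end(),
                                  b.begin(), b.end(),
                                  std::back_inserter(uncommon));

    for (int x : uncommon)
        std::cout << x << ' ';   // prints 1 1 5 5
}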
But if you insist on an implementation relying on hashing, you should use std::unordered_set or std::unordered_map. This way you can be faster than O(n log n): you can get your answer in O(n + m) time, where n = a.size() and m = b.size(). You should create two unordered_sets, hashed_a and hashed_b, and in two loops check which elements from hashed_a are not in hashed_b, and which elements from hashed_b are not in hashed_a. Here is some pseudocode:
create hashed_a and hashed_b
create set_result                // for the result
for (a_v : hashed_a)
    if (a_v not in hashed_b)
        set_result.insert(a_v)
for (b_v : hashed_b)
    if (b_v not in hashed_a)
        set_result.insert(b_v)
return set_result                // it holds the symmetric difference, which you need
UPDATE: as noted in the comments, my answer doesn't account for duplicates. The easiest way to modify it to handle duplicates would be to use an unordered_map<int, int> with the keys being the elements and the values being the number of occurrences.
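A minimal sketch of that counting variant (my own illustration of the idea): one map of value -> (occurrences in a) - (occurrences in b); any nonzero difference contributes that many copies to the result.

#include <cstdlib>
#include <iostream>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<int> a{5,3,5,4,2};
    std::vector<int> b{3,4,1,2,1};

    std::unordered_map<int, int> diff;
    for (int x : a) ++diff[x];   // +1 per occurrence in a
    for (int x : b) --diff[x];   // -1 per occurrence in b

    std::vector<int> uncommon;
    for (auto& [value, d] : diff)             // nonzero difference => uncommon
        for (int k = 0; k < std::abs(d); ++k)
            uncommon.push_back(value);

    for (int x : uncommon)
        std::cout << x << ' ';   // e.g. 5 5 1 1 (order unspecified)
}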
First, you need a way to distinguish between equal values contained in the same array (for example, the two 5s in the first array and the two 1s in the second array). This is the key to reducing the overall complexity; otherwise you can't do better than O(n log n). One possible approach is to create a wrapper object to hold each actual value and put pointers to those wrapper objects in your arrays, so the pointer addresses serve as unique identifiers for the objects. This wrapping costs just O(n1+n2) operations, but also an additional O(n1+n2) space.
Now both arrays contain only elements unique to each array, and you want to find the uncommon elements; that is, the (union of both arrays' elements) minus the (intersection of both arrays' elements). Therefore, all you need to do is push all the elements of the first array into a hash map (complexity O(n1)), and then push all the elements of the second array into the same hash map (complexity O(n2)), detecting collisions (equality of an element from the first array with an element from the second array). This comparison step requires O(n2) comparisons in the worst case. For maximum performance you could check the sizes of the arrays before pushing elements into the hash map, and swap the arrays so that the first pass uses the longer array. The overall complexity is O(n1+n2) pushes (hashings) and O(n2) comparisons.
The implementation is the boring part, so I leave it to you ;)
A solution without sorting (and without hashing, though you seem to care more about the complexity than about the hashing itself) is to notice the following: an uncommon element e is an element that is in exactly one multiset.
This means that the multiset of all uncommon elements is the union of 2 multisets:
S1 = the elements of A that are not in B
S2 = the elements of B that are not in A
Using std::set_difference, you get:
#include <set>
#include <vector>
#include <iostream>
#include <iterator>
#include <algorithm>

int main() {
    std::multiset<int> ms1{5,3,5,4,2};
    std::multiset<int> ms2{3,4,1,2,1};
    std::vector<int> v;

    std::set_difference(ms1.begin(), ms1.end(), ms2.begin(), ms2.end(), std::back_inserter(v));
    std::set_difference(ms2.begin(), ms2.end(), ms1.begin(), ms1.end(), std::back_inserter(v));

    for (int e : v)
        std::cout << e << ' ';
    return 0;
}
Output:
5 5 1 1
The complexity of this code is at most 4·(N1+N2)−2 comparisons (each std::set_difference call performs at most 2·(N1+N2)−1), where N1 and N2 are the sizes of the multisets; building the multisets in the first place is O(N log N).
Links:
set_difference: https://en.cppreference.com/w/cpp/algorithm/set_difference
compiler explorer: https://godbolt.org/z/o3KGbf
The question can be solved in O(n log n) time complexity.
ALGORITHM
Sort both arrays with merge sort in O(n log n) complexity. You can also use the standard sort function, for example sort(array1.begin(), array1.end()).
Now use the two-pointer method to skip all common elements of both arrays.
Program for the above method:
int i = 0, j = 0;
while (i < array1.size() && j < array2.size()) {
    // If not common, print smaller
    if (array1[i] < array2[j]) {
        cout << array1[i] << " ";
        i++;
    }
    else if (array2[j] < array1[i]) {
        cout << array2[j] << " ";
        j++;
    }
    // Skip common element
    else {
        i++;
        j++;
    }
}
// Print whatever remains in either array; those elements are uncommon too
while (i < array1.size())
    cout << array1[i++] << " ";
while (j < array2.size())
    cout << array2[j++] << " ";
The complexity of the above program is O(array1.size() + array2.size()); in the worst case, say, O(2n).
The above program prints the uncommon elements. If you want to store them, just create a vector and push them into it.
Original Problem LINK
As I understand it, the range-v3 library's view operations (which require C++17 currently, but are to become an official part of the standard library in C++20) provide chainable STL-like algorithms that are lazily evaluated. As an experiment, I created the following code to evaluate the first 3 perfect numbers:
#include <iostream>
#include <range/v3/all.hpp>

using namespace std;

int main(int argc, char *argv[]) {
    auto perfects = ranges::view::ints(1)
        | ranges::view::filter([] (int x) {
              int psum = 0;
              for (int y = 1; y < x; ++y) {
                  if (x % y == 0) psum += y;
              }
              return x == psum; })
        | ranges::view::take(3);

    std::cout << "PERFECT NUMBERS:" << std::endl;
    for (int z : perfects) {
        std::cout << z << std::endl;
    }
    std::cout << "DONE." << std::endl;
}
The code starts with a possibly infinite range of numbers (ranges::view::ints(1)), but because the view pipeline ends with ranges::view::take(3), it should halt after finding the first three numbers passing the filter algorithm (a brute-force test for perfect numbers, intentionally not that efficient). Since the first three perfect numbers --- 6, 28, and 496 --- are fairly small, I expect this code to quickly find these, print "DONE.", and terminate. And that's exactly what happens:
coliru -- taking 3 perfect numbers works just fine
However, let's say I want to print the first 4 perfect numbers, which are still all fairly small --- 6, 28, 496, and 8128. After printing 8128 the program does not halt and eventually has to be terminated; presumably it is vainly trying to compute the fifth perfect number, 33550336, which is beyond the ability of this brute-force algorithm to efficiently find.
coliru -- taking 4 perfect numbers tries to take 5+
This seems inconsistent to me. I would have understood if both tests had failed (concluding that I had misunderstood the lazy evaluation of range-v3's view algorithms), but the fact that take(3) succeeds and halts while take(4) does not seems like a bug to me, unless I'm misunderstanding things.
I've tried this with several compilers on wandbox and it seems to be persistent (tried clang 6.0.1 and 7.0.0, g++ 8.1.0 and 8.2.0). At least on my local computer, where I found the issue originally, version 0.3.6 of range-v3 is being used, but I'm not sure about coliru and wandbox.
wandbox link
A take view that contains n elements has n + 1 valid iterator values: n that correspond to elements in the range, and the n + 1st past-the-end iterator. It is intended that iterating over the take view necessarily forms each of those n + 1 iterators - indeed, it's useful to extract the underlying iterator value adapted by the take view's end iterator to perform additional computations.
take_view doesn't know that the range it's adapting is a filter, or that your filter predicate is inordinately expensive - it simply assumes that your predicate is O(1) as is necessary for it to provide O(1) iterator operations. (Although we did forget to make that complexity requirement explicit in C++20.) This case is a very good example of why we have complexity requirements: if the iterators of the range being adapted don't meet the Standard's O(1) complexity requirements, the view can't meet its complexity guarantees and reasoning about performance becomes impossible.
Apology:
I'm (partly) answering my own question because I think I've learned what is going on here, mechanically, and because the extra detail won't fit into a comment. I'm not sure of the etiquette, so if this would be better as an edit to the question --- there's still the open question of why the library is designed this way --- please suggest that in the comments and I'll happily move it there.
Filtering until finding an end iterator
I don't understand the internals of range-v3 in great detail, so I might not have the terminology exactly right. In short, there is no inconsistent behavior here. When a call to ranges::view::take follows a call to ranges::view::filter (or ranges::view::remove_if), the resulting view object must produce an end iterator at some point during the iteration to break out of the for-loop. If I'd thought about it, I would have imagined that the range-based for loop still expands to something like
for (auto it = std::begin(perfects); it != std::end(perfects); ++it) {
    ...
}
(which, btw, behaves identically in my examples) and that after it has found the required number of elements, at the beginning of the subsequent operator++ call on it, there would be special logic to make the result equal to std::end(perfects), so that the loop exits without doing any additional work. But instead, and this makes some sense from an implementation standpoint, the end iterator actually corresponds to the next element returned by the filter/remove_if view. The filter predicate keeps looping over ranges::view::ints(1) until it finds an element for which the predicate returns true; presumably this becomes the end iterator, since it is not printed in the ranged for loop.
An easy demonstration of this is provided by the following code. Here, there are two configurable integers n and m, and the predicate function in filter returns true for x <= n, false for n < x <= n+m, and true for x > n+m:
#include <iostream>
#include <range/v3/all.hpp>

using namespace std;

int main(int, char**) {
    int n = 5;
    int m = 3;

    auto perfects = ranges::view::ints(1)
        | ranges::view::filter([&n,&m] (int x) {
              std::cout << "Checking " << x << "... ";
              if (x <= n) {
                  return true;
              } else if (x <= n + m) {
                  std::cout << std::endl;
                  return false;
              }
              return true; })
        | ranges::view::take(n);

    std::cout << "First " << n << " numbers:" << std::endl;
    for (int z : perfects) {
        std::cout << " take it!" << std::endl;
    }
    std::cout << "DONE." << std::endl;
}
You can run this code for different values of n and m here: wandbox. The default output is as follows:
First 5 numbers:
Checking 1... take it!
Checking 2... take it!
Checking 3... take it!
Checking 4... take it!
Checking 5... take it!
Checking 6...
Checking 7...
Checking 8...
Checking 9... DONE.
(I didn't rename the variable perfects; clearly it is not a set of perfect numbers anymore). Even after taking the first n successes, the lambda predicate is called until it returns true. Since the integer for which it returns true, 9, is not printed, it must correspond to the std::end(perfects) that breaks the ranged for-loop.
The remaining mystery to me is why it does this. It's not what I would have expected; it could lead to unexpected behavior (e.g. if the lambda's body isn't pure and alters captured objects), and it could have big performance implications, as demonstrated by the original example, which would have to perform roughly 10^15 modulo operations before reaching the integer 33550336.
I would like to apply a function to some elements of a std::vector. I use std::includes to check whether a "smaller" vector exists within a "bigger" one, and if it does, I would like to apply a function to those elements of the "bigger" vector that are equal to the elements of the "smaller" one. Any suggestions?
Edit:
The following was incorrectly posted as an answer by the OP
There is a problem with std::search! It finds only the first occurrence of a sequence contained in a vector, while in my vector these elements occur in several positions. Also, I have a vector of objects!
Not sure what part you're having trouble with, but here's a simple example showing the range of elements contained in the larger vector that are identical to the contents of the smaller one being multiplied by 2. I used std::search instead of std::includes to determine whether the larger vector contains the range of elements in the smaller one because, unlike includes, which returns a boolean result, search returns an iterator to the beginning of the contained range in the larger vector.
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

void times_two(int& t)
{
    t *= 2;
}

int main()
{
    std::vector<int> v1{1,2,3,4,5,6,7,8,9};
    std::vector<int> v2{4,5,6};

    // find if the larger vector contains the smaller one
    auto first = std::search(v1.begin(), v1.end(), v2.begin(), v2.end());
    if (first != v1.end()) {
        // iterator one past the last element of the sub-range
        auto last = std::next(first, v2.size());
        // apply the function to each sub-range element
        std::for_each(first, last, times_two);
    }

    for (auto const& v : v1) {
        std::cout << v << ' ';
    }
    std::cout << '\n';
}
Output:
1 2 3 8 10 12 7 8 9
Edit:
Here's an example that uses boost::find_nth to perform the search.
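The example itself is missing from this post; here is a minimal sketch of what it might look like, assuming boost::algorithm::find_nth from <boost/algorithm/string/find.hpp> (despite the header name it works on arbitrary ranges, and its occurrence index is zero-based):

#include <boost/algorithm/string/find.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

void times_two(int& t) { t *= 2; }

int main()
{
    // the sub-range {4,5,6} now occurs twice in the larger vector
    std::vector<int> v1{1,2,3,4,5,6,7,4,5,6};
    std::vector<int> v2{4,5,6};

    // visit every occurrence of v2 in v1, not just the first
    for (int n = 0; ; ++n) {
        auto match = boost::algorithm::find_nth(v1, v2, n);
        if (match.empty())
            break;
        std::for_each(match.begin(), match.end(), times_two);
    }

    for (auto const& v : v1)
        std::cout << v << ' ';   // 1 2 3 8 10 12 7 8 10 12
    std::cout << '\n';
}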