I wish to know if there is any STL function in C++ to get the dimensions of a vector.
For example,
vec = [[1, 2, 3],
[4, 5, 6]]
The dimensions are (2, 3)
I am aware of the size() function, but it does not return dimensions.
In the above example, vec.size() would have returned 2.
To get the second dimension, I would have to use vec[0].size(), which would be 3.
In C++, an std::vector is, by definition, a one-dimensional sequence of size() elements, where size() can change at runtime.
You can define a vector of vectors (e.g., std::vector<std::vector<int>>), but that doesn't have a constraint that the 'inner' dimensions are the same. E.g., {{1, 2, 3}, {1, 2}} is valid.
Therefore, the inner dimensions are ambiguous. What you can do, if you keep them all the same and are sure the outer vector is non-empty, is to query v[0].size() as well, and so on.
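For illustration, a minimal sketch of that check (the helper name dimensions is a placeholder, not from the original post); it treats an empty outer vector as (0, 0) and assumes all inner vectors have the same length:
#include <cstddef>
#include <utility>
#include <vector>

// Returns (rows, cols); assumes every inner vector has the same size.
std::pair<std::size_t, std::size_t> dimensions(const std::vector<std::vector<int>>& v)
{
    return { v.size(), v.empty() ? 0 : v[0].size() };
}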
As said in lorro's answer, you likely want to find the dimensions of a std::vector<std::vector<int>>.
Finding the outer dimension is easy, since all you have to do is vec.size(). But the inner vectors can be of any length, and don't have to be the same length. Assuming you want the minimum, this is doable with STL functions.
We can use std::transform to fill a vector with the dimensions of the inner vectors, and then use std::min_element to find the smallest of those.
#include <vector>
#include <algorithm>
#include <iostream>
int main() {
    std::vector<std::vector<int>> vec = {{1, 2}, {3, 4, 5}};
    std::vector<std::size_t> dim(vec.size());
    std::transform(vec.cbegin(), vec.cend(), dim.begin(),
                   [](const auto &v) { return v.size(); });
    std::size_t min = *std::min_element(dim.cbegin(), dim.cend());
    std::cout << "(" << vec.size() << ", " << min << ")\n";
    return 0;
}
Output:
(2, 2)
Let's say I have a vector<vector<int>>. I want to use ranges::transform in such a way that I get:
vector<vector<int>> original_vectors;
using T = decltype(ranges::views::transform(original_vectors[0], [&](int x){
    return x;
}));
vector<int> transformation_coeff;
vector<T> transformed_vectors;
for (int i = 0; i < n; i++) {
    transformed_vectors.push_back(ranges::views::transform(original_vectors[i], [&](int x){
        return x * transformation_coeff[i];
    }));
}
Is such a transformation, or something similar, currently possible in C++?
I know it's possible to simply store the transformation_coeff, but it's inconvenient to apply it at every step. (This will be repeated multiple times, so it needs to be done in O(log n); therefore I can't explicitly apply the transformation.)
Yes, you can have a vector of ranges. The problem in your code is that you are using a temporary lambda in your using statement. Because of that, the type of the item you are pushing into the vector later is different from T. You can solve it by assigning the lambda to a variable first:
vector<vector<int>> original_vectors;
auto lambda = [&](int x){return x;};
using T = decltype(ranges::views::transform(original_vectors[0], lambda));
vector<T> transformed_vectors;
transformed_vectors.push_back(ranges::views::transform(original_vectors[0], lambda));
It is not possible in general to store different ranges in a homogeneous collection like std::vector, because different ranges usually have different types, especially if transforms using lambdas are involved. No two lambdas have the same type and the type of the lambda will be part of the range type. If the signatures of the functions you want to pass to the transform are the same, you could wrap the lambdas in std::function as suggested by #IlCapitano (https://godbolt.org/z/zGETzG4xW). Note that this comes at the cost of the additional overhead std::function entails.
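For illustration, here is a minimal sketch of that std::function wrapping (C++20; the variable names and values are illustrative, not taken from the linked snippet):
#include <cstddef>
#include <functional>
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<std::vector<int>> original_vectors = {{1, 2, 3}, {4, 5, 6}};
    std::vector<int> transformation_coeff = {2, 10};

    // All views share one type because the callable is type-erased in std::function.
    using View = decltype(std::views::transform(original_vectors[0],
                                                std::function<int(int)>{}));

    std::vector<View> transformed_vectors;
    for (std::size_t i = 0; i < original_vectors.size(); ++i) {
        transformed_vectors.push_back(std::views::transform(
            original_vectors[i],
            std::function<int(int)>{[c = transformation_coeff[i]](int x) { return x * c; }}));
    }

    for (auto& view : transformed_vectors) {
        for (int v : view)
            std::cout << v << ' ';
        std::cout << '\n';
    }
}
Each view refers to the corresponding inner vector, so original_vectors must outlive transformed_vectors.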
A better option might be to create a range of ranges.
If I understand you correctly, you have a vector of n vectors, e.g.
std::vector<std::vector<int>> original_vector = {
    {1, 5, 10},
    {2, 4, 8},
    {5, 10, 15}
};
and a vector of n coefficients, e.g.
std::vector<int> transformation_coeff = {2, 1, 3};
and you want a range of ranges representing the transformed vectors, where the ith range represents the ith vector's elements which have been multiplied by the ith coefficient:
{
{ 2, 10, 20}, // {1, 5, 10} * 2
{ 2, 4, 8}, // {2, 4, 8} * 1
{15, 30, 45} // {5, 10, 15} * 3
}
Did I understand you correctly? If yes, I don't understand what you mean by your complexity requirement of O(log n). What does n refer to in this scenario? How would this calculation be possible in fewer than n steps? Here is a solution that gives you the range of ranges you want. Evaluating this range requires O(n*m) multiplications, where m is an upper bound for the number of elements in each inner vector. I don't think it can be done in fewer steps, because you have to multiply each element in original_vector once. Of course, you can always evaluate only part of the range, because the evaluation is lazy.
C++20
The strategy is to first create a range for the transformed i-th vector given the index i. Then you can create a range of ints using std::views::iota and transform it to the inner ranges:
auto transformed_ranges = std::views::iota(0, static_cast<int>(original_vector.size())) // bounded, so the outer range is finite
    | std::views::transform(
        [=](int i){
            // get a range containing only the ith inner range
            auto ith = original_vector | std::views::drop(i) | std::views::take(1) | std::views::join;
            // transform the ith inner range
            return ith | std::views::transform(
                [=](auto const& x){
                    return x * transformation_coeff[i];
                }
            );
        }
    );
You can now do
for (auto const& transformed_range : transformed_ranges){
    for (auto const& val : transformed_range){
        std::cout << val << " ";
    }
    std::cout << "\n";
}
Output:
2 10 20
2 4 8
15 30 45
Full Code on Godbolt Compiler Explorer
C++23
This is the perfect job for C++23's std::views::zip_transform:
auto transformed_ranges = std::views::zip_transform(
    [=](auto const& ith, auto const& coeff){
        return ith | std::views::transform(
            [=](auto const& x){
                return x * coeff;
            }
        );
    },
    original_vector,
    transformation_coeff
);
It's a bit shorter and has the added benefit that transformation_coeff is treated as a range as well:
It is more general, because we are not restricted to std::vectors
In the C++20 solution you get undefined behaviour without additional size checking if transformation_coeff.size() < original_vector.size() because we are indexing into the vector, while the C++23 solution would just return a range with fewer elements.
Full Code on Godbolt Compiler Explorer
Suppose that we have a very long array of, say, int, to keep the problem simple.
What is the fastest way (or just a fast way, if it's not the fastest) in C++ to check whether the array contains any duplicate elements?
To clarify, this function should return this:
[2, 5, 4, 3] => false
[2, 8, 2, 5, 7, 3, 4] => true
[8, 8, 5] => true
[1, 2, 3, 4, 1, 7, 1, 1, 7, 1, 2, 2, 3, 4] => true
[9, 1, 12] => false
One strategy is to loop through the array and, for each element, loop through the array again to check. However, this is O(n^2) and can be very costly. Is there any better way?
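For reference, a minimal sketch of that quadratic strategy (the function name is a placeholder):
#include <cstddef>
#include <vector>

// Compare every pair of elements: O(n^2), which is what I want to avoid.
bool has_duplicates_naive(const std::vector<int>& vec)
{
    for (std::size_t i = 0; i < vec.size(); ++i)
        for (std::size_t j = i + 1; j < vec.size(); ++j)
            if (vec[i] == vec[j])
                return true;
    return false;
}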
(✠Update Below) Insert the array elements into a std::unordered_set; if an insertion fails, you have a duplicate.
Something like as follows:
#include <iostream>
#include <vector>
#include <unordered_set>
bool has_duplicates(const std::vector<int>& vec)
{
    std::unordered_set<int> set;
    for (int ele : vec)
        if (const auto [iter, inserted] = set.emplace(ele); !inserted)
            return true; // has duplicates!
    return false;
}

int main()
{
    std::vector<int> vec1{ 1, 2, 3 };
    std::cout << std::boolalpha << has_duplicates(vec1) << '\n'; // false

    std::vector<int> vec2{ 12, 3, 2, 3 };
    std::cout << std::boolalpha << has_duplicates(vec2) << '\n'; // true
}
✠Update: As discussed in the comments, this may or may not be the fastest solution. In the OP's case, as explained in Marcus Müller's answer, an O(N·log(N)) method would be better, which we can achieve by sorting the array and then checking for adjacent duplicates.
Here is a quick benchmark that I made for the two cases, "UnorderedSetInsertion" and "ArraySort", with GCC 10.3, C++20, -O3.
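For reference, a minimal sketch of the "ArraySort" variant mentioned above (the function name is a placeholder): sort a copy of the data so any duplicates end up adjacent, then look for an adjacent pair.
#include <algorithm>
#include <vector>

bool has_duplicates_sorted(std::vector<int> vec) // taken by value so the caller's data stays untouched
{
    std::sort(vec.begin(), vec.end());
    return std::adjacent_find(vec.begin(), vec.end()) != vec.end();
}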
This is nearly just a sorting problem, just that you can abort the sorting once you've hit a single equality and return true.
So, if you're memory-limited (that's often the case, rather than being time-limited), an in-place sorting algorithm that aborts when it encounters two identical elements will do; so, std::sort with a comparator function that throws an exception when it encounters equality. Complexity would be O(N·log(N)), but let's be honest here: the fact that this is probably less indirect in memory addressing than the creation of a tree-like bucket structure might help. In that sense, I can only recommend you actually compare this to JeJo's solution – that looks pretty reasonable, too!
The thing here is that there's very likely not a one-size-fits-all solution: what is fastest will depend on the amount of integers we're talking about. Even quadratic complexity might be better than any of our "clever" answers if that keeps memory access nice and linear – I'm almost certain your speed here is not bounded by your CPU, but by the amount of data you need to shuffle to and from RAM.
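A minimal sketch of that abort-on-equality idea (the exception type and function name are placeholders, and it relies on std::sort comparing equal elements at some point, which typical implementations do):
#include <algorithm>
#include <vector>

struct found_duplicate {}; // used purely as a signal to abort the sort

bool has_duplicates_abort_sort(std::vector<int> vec) // sorts a copy in place
{
    try {
        std::sort(vec.begin(), vec.end(), [](int a, int b) {
            if (a == b)
                throw found_duplicate{}; // bail out on the first equality
            return a < b;
        });
    } catch (const found_duplicate&) {
        return true;
    }
    return false;
}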
How about binning the data (i.e., creating a histogram) and checking the mode of the resulting counts? A mode greater than 1 indicates a repeated value.
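A minimal sketch of that histogram idea (the bin container and function name are assumptions):
#include <unordered_map>
#include <vector>

// Count occurrences per value; any bin reaching 2 means a repeat.
bool has_duplicates_histogram(const std::vector<int>& vec)
{
    std::unordered_map<int, int> histogram;
    for (int x : vec)
        if (++histogram[x] > 1)
            return true;
    return false;
}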
How can I group a series of integer numbers, e.g., [4, 2, 3, 3, 2, 4, 1, 2, 4], to become [4, 4, 4, 2, 2, 2, 3, 3, 1] without using any sorting algorithm?
Note that I don't need the result to be in any sorted order, but I do need the suggested algorithm to group a million numbers faster than qsort.
This should work if you don't care too much about using extra space. It first stores the number of occurrences of each number in an unordered_map and then creates a vector that contains each value in the map, repeated the number of times it was seen in the original vector. See the documentation for insert for how this works. The [] operator for an unordered_map works in O(1) on average. So creating the unordered_map takes O(N) time. Iterating through the map and populating the return vector again takes O(N) time, so this whole thing should run in O(N). Note that this creates two extra copies of the data.
In the worst case, the [] operator takes O(N) time, so the only way to really know if this is faster than qsort would be to measure it.
#include <vector>
#include <unordered_map>
#include <iostream>
std::vector<int> groupNumbers(const std::vector<int> &input)
{
    std::vector<int> grouped;
    std::unordered_map<int, int> counts;
    for (auto &x: input)
    {
        ++counts[x];
    }
    for (auto &x: counts)
    {
        grouped.insert(grouped.end(), x.second, x.first);
    }
    return grouped;
}

// example
int main()
{
    std::vector<int> test{1,2,3,4,3,2,3,2,3,4,1,2,3,2,3,4,3,2};
    std::vector<int> result(groupNumbers(test));
    for (auto &x: result)
    {
        std::cout << x << std::endl;
    }
    return 0;
}
A question that might appear trivial, but I am wondering if there's a way of obtaining the count of integers made unique after I transform an array containing repeated integers into an unordered_set. To be clear, I start with some array, turn into an unordered set, and suddenly, the unordered_set only contains unique integers, and I am simply after the repeat number of the integers in the unordered_set.
Is this possible at all? (something like unordered_set.count(index) ?)
A question that might appear trivial, but I am wondering if there's a way of obtaining the count of integers made unique after I transform an array containing repeated integers into an unordered_set.
If the container is contiguous, like an array, then I believe you can use ptrdiff_t to count them after doing some iterator math. I'm not sure about non-contiguous containers, though.
Since you start with an array:
Call std::unique on the array (after sorting it, so duplicates are adjacent)
std::unique returns an iterator to the new logical end of the range
Calculate the ptrdiff_t count as the difference between the array's begin() and that returned iterator
Note that iterator (or pointer) subtraction already yields an element count, so the calculation in step 3 needs no sizeof adjustment.
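A minimal sketch of those steps (the array contents are illustrative only):
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>

int main()
{
    int arr[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5};
    std::sort(std::begin(arr), std::end(arr));                 // make duplicates adjacent
    int* new_end = std::unique(std::begin(arr), std::end(arr));
    std::ptrdiff_t unique_count = new_end - std::begin(arr);   // plain iterator math
    std::cout << unique_count << '\n';                          // prints 7
}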
But to paraphrase Beta, some containers lend themselves to this, and others do not. If you have an unordered set (or a map or a tree), then the information will not be readily available.
Based on your answer to user2357112's question, I will write a solution.
So, let's assume that instead of an unordered_set we use a vector, and our vector has values like this:
{1, 1, 1, 3, 4, 1, 1, 4, 4, 5, 5};
So, we want to get the number of times each particular value appears in the vector (stored in a different vector, I think), right? In this specific case the result would be: 1 appears 5 times, 3 appears 1 time, 4 appears 3 times, and 5 appears 2 times.
To get this done, one possible solution can be like this:
Get the unique entries from the source vector and store them in a different vector, so this vector will contain: 1, 3, 4, 5
Iterate through the whole unique vector and count each of its elements in the source vector.
Print the result.
The code from point 1, can be like this:
template <typename Type>
vector<Type> unique_entries (vector<Type> vec) {
    for (auto iter = vec.begin (); iter != vec.end (); ++iter) {
        auto f = find_if (iter+1, vec.end (), [&] (const Type& val) {
            return *iter == val;
        });
        if (f != vec.end ()) {
            vec.erase (remove (iter+1, vec.end (), *iter), vec.end ());
        }
    }
    return vec;
}
The code from point 2, can be like this:
template <typename Type>
struct Properties {
    Type key;
    long int count;
};

template <typename Type>
vector<Properties<Type>> get_properties (const vector<Type>& vec) {
    vector<Properties<Type>> ret {};
    auto unique_vec = unique_entries (vec);
    for (const auto& uv : unique_vec) {
        auto c = count (vec.begin (), vec.end (), uv); // (X)
        ret.push_back ({uv, c});
    }
    return ret;
}
Of course we do not need the Properties class to store the key and count; you could return just a vector of int (with the counts), but as I said, this is one possible solution. So, using unique_entries we get a vector with unique entries ( :) ), then we iterate through the whole vector vec (in get_properties, using std::count, marked (X)) and push_back a Properties object into the vector ret.
The code from point 3, can be like this:
template <typename Type>
void show (const vector<Properties<Type>>& vec) {
    for (const auto& v : vec) {
        cout << v.key << " " << v.count << endl;
    }
}
// usage below
vector<int> vec {1, 1, 1, 3, 4, 1, 1, 4, 4, 5, 5};
auto properties = get_properties (vec);
show (properties);
And the result looks like this:
1 5
3 1
4 3
5 2
It is worth noting that this example uses templates to provide flexibility in choosing the element type of the vector. If you want to store values of long, long long, short, etc., instead of int, all you have to do is change the definition of the source vector, for example:
vector<unsigned long long> vec2 {1, 3, 2, 3, 4, 4, 4, 4, 3, 3, 2, 3, 1, 7, 2, 2, 2, 1, 6, 5};
show (get_properties (vec2));
will produce:
1 3
3 5
2 5
4 4
7 1
6 1
5 1
which is desired result.
One more note: you can do this with a vector of std::string as well.
vector<string> vec_str {"Thomas", "Rick", "Martin", "Martin", "Carol", "Thomas", "Martin", "Josh", "Jacob", "Jacob", "Rick"};
show (get_properties (vec_str));
And result is:
Thomas 2
Rick 2
Martin 3
Carol 1
Josh 1
Jacob 2
I assume you're trying to get a list of unique values AND the number of their occurrences. If that's the case, then std::map provides the cleanest and simplest solution:
//Always prefer std::vector (or at least std::array) over raw arrays if you can
std::vector<int> myInts {2,2,7,8,3,7,2,3,46,7,2,1};
std::map<int, unsigned> uniqueValues;
//Get unique values and their count
for (int val : myInts)
    ++uniqueValues[val];

//Output:
for (const auto & val : uniqueValues)
    std::cout << val.first << " occurs " << val.second << " times." << std::endl;
In this case it doesn't have to be std::unordered_set.
What is the difference between std::set_union and std::merge? The question is clear; my google- and cplusplus.com/reference-fu is failing me.
std::set_union will contain those elements that are present in both sets only once. std::merge will contain them twice.
For example, with A = {1, 2, 5}; B = {2, 3, 4}:
union will give C = {1, 2, 3, 4, 5}
merge will give D = {1, 2, 2, 3, 4, 5}
Both work on sorted ranges, and return a sorted result.
Short example:
#include <algorithm>
#include <iostream>
#include <set>
#include <vector>
int main()
{
    std::set<int> A = {1, 2, 5};
    std::set<int> B = {2, 3, 4};

    std::vector<int> out;
    std::set_union(std::begin(A), std::end(A), std::begin(B), std::end(B),
                   std::back_inserter(out));
    for (auto i : out)
    {
        std::cout << i << " ";
    }
    std::cout << '\n';

    out.clear();
    std::merge(std::begin(A), std::end(A), std::begin(B), std::end(B),
               std::back_inserter(out));
    for (auto i : out)
    {
        std::cout << i << " ";
    }
    std::cout << '\n';
}
Output:
1 2 3 4 5
1 2 2 3 4 5
std::merge keeps all elements from both ranges, with equivalent elements from the first range preceding equivalent elements from the second range in the output. Where equivalent elements appear in both ranges, std::set_union takes only the element from the first range; otherwise each element is merged in order, as with std::merge.
References: ISO/IEC 14882:2003 25.3.4 [lib.alg.merge] and 25.3.5.2 [lib.set.union].
This is the verification I suggested in the comment I posted on the accepted answer: if an element is present in one of the input sets N times, it will appear N times in the output of set_union, so set_union does not remove duplicate equivalent items in the way we would 'naturally' or 'mathematically' expect. If, however, both input ranges contain a common item only once, then set_union does remove the duplicate.
#include <vector>
#include <algorithm>
#include <iostream>
#include <cassert>
using namespace std;
void printer(int i) { cout << i << ", "; }
int main() {
    int mynumbers1[] = { 0, 1, 2, 3, 3, 4 }; // this is sorted, 3 is dupe
    int mynumbers2[] = { 5 }; // this is sorted
    vector<int> union_result(10);
    set_union(mynumbers1, mynumbers1 + sizeof(mynumbers1)/sizeof(int),
              mynumbers2, mynumbers2 + sizeof(mynumbers2)/sizeof(int),
              union_result.begin());
    for_each(union_result.begin(), union_result.end(), printer);
    return 0;
}
This will print: 0, 1, 2, 3, 3, 4, 5, 0, 0, 0,
std::merge keeps every element from both ranges without eliminating duplicates, while std::set_union eliminates the duplicates that arise across the two ranges; that is, the latter applies the union operation of set theory.
To add to the previous answers: beware that the comparison complexity of std::set_union is twice that of std::merge. In practice, this means the comparator in std::set_union may be applied to an element after it has been moved from (when the inputs are move iterators), while with std::merge this is never the case.
Why may this be important? Consider something like:
std::vector<Foo> lhs, rhs;
And you want to produce a union of lhs and rhs (union is a keyword in C++, so the destination vector is named result here):
std::vector<Foo> result;
std::set_union(std::cbegin(lhs), std::cend(lhs),
               std::cbegin(rhs), std::cend(rhs),
               std::back_inserter(result));
But now suppose Foo is not copyable, or is very expensive to copy and you don't need the originals. You may think to use:
std::set_union(std::make_move_iterator(std::begin(lhs)),
               std::make_move_iterator(std::end(lhs)),
               std::make_move_iterator(std::begin(rhs)),
               std::make_move_iterator(std::end(rhs)),
               std::back_inserter(result));
But this is undefined behaviour as there is a possibility of a moved Foo being compared! The correct solution is therefore:
std::merge(std::make_move_iterator(std::begin(lhs)),
           std::make_move_iterator(std::end(lhs)),
           std::make_move_iterator(std::begin(rhs)),
           std::make_move_iterator(std::end(rhs)),
           std::back_inserter(result));
result.erase(std::unique(std::begin(result), std::end(result)), std::end(result));
Which has the same complexity as std::set_union.