are they adding copy_if to c++0x? - c++

It's very annoying that copy_if is not in C++. Does anyone know if it will be in C++0x?

Since C++0x is not yet finalized, you can only take a look at the most recent draft.

In the meantime, it's not very hard to make your own copy_if() using remove_copy_if():
#include <functional>
struct my_predicate : std::unary_function<my_arg_type, bool> {
    bool operator()(my_arg_type const& x) const { ... }
};
// To perform "copy_if(x, y, z, my_predicate())", write:
remove_copy_if(x, y, z, std::not1(my_predicate()));
Using not1() requires your predicate class to supply a nested type, argument_type, identifying the type of the argument -- as shown above, one convenient way to do this is to derive from unary_function<T, U>, where T is the argument type.
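For a self-contained illustration of this workaround, here is a sketch; the element type and the "keep the non-negative values" predicate are invented for this example, and unary_function/not1 are appropriate for the C++0x era even though they were deprecated later:
#include <algorithm>
#include <functional>
#include <iterator>
#include <vector>

// predicate selecting the elements we want to keep
struct is_nonnegative : std::unary_function<int, bool> {
    bool operator()(int x) const { return x >= 0; }
};

int main() {
    std::vector<int> src;
    src.push_back(25); src.push_back(-5); src.push_back(15);

    std::vector<int> dst;
    // "copy_if(src.begin(), src.end(), back_inserter(dst), is_nonnegative())"
    // expressed via remove_copy_if and the negated predicate:
    std::remove_copy_if(src.begin(), src.end(), std::back_inserter(dst),
                        std::not1(is_nonnegative()));
    // dst now holds 25 and 15
}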

Just for completeness, in case someone googles his/her way to this question, it should be mentioned that now (in C++11 and later) there is a copy_if algorithm. It behaves as expected: it copies the elements in a range for which some predicate returns true to another range.
A typical use case would be
std::vector<int> foo{ 25, 15, 5, -5, -15 };
std::vector<int> bar;

// copy only positive numbers:
auto it = std::copy_if(foo.begin(), foo.end(), std::back_inserter(bar),
                       [](int i){ return !(i < 0); });

how to sum up a vector of vector int in C++ without loops

I am trying to sum up all elements of a vector<vector<int>> without writing loops.
I have checked some relevant questions before, such as How to sum up elements of a C++ vector?.
So I tried to use std::accumulate, but I found it hard to write the binary operator that std::accumulate needs for the inner vectors.
So I am confused about how to implement this with std::accumulate, or is there a better way?
If you don't mind, could anyone help me?
Thanks in advance.
You need to use std::accumulate twice, once for the outer vector with a binary operator that knows how to sum the inner vector using an additional call to std::accumulate:
int sum = std::accumulate(
    vec.begin(), vec.end(),                       // iterators for the outer vector
    0,                                            // initial value for summation - 0
    [](int init, const std::vector<int>& intvec){ // binaryOp that sums a single vector<int>
        return std::accumulate(
            intvec.begin(), intvec.end(),         // iterators for the inner vector
            init);                                // current sum
                                                  // use the default binaryOp here
    }
);
In this case, I do not suggest using std::accumulate as it would greatly impair readability. Moreover, this function uses loops internally, so you would not save anything. Just compare the following loop-based solution with the other answers that use std::accumulate:
int result = 0;
for (auto const & subvector : your_vector)
    for (int element : subvector)
        result += element;
Does using a combination of iterators, STL functions, and lambda functions make your code easier to understand and faster? For me, the answer is clear. Loops are not evil, especially for such a simple application.
According to https://en.cppreference.com/w/cpp/algorithm/accumulate, it looks like BinaryOp takes the current sum on the left-hand side and the next range element on the right. So you should run std::accumulate on the right-hand argument, then add the result to the left-hand argument and return it. If you use C++14 or later:
auto binary_op = [&](auto cur_sum, const auto& el){
    auto rhs_sum = std::accumulate(el.begin(), el.end(), 0);
    return cur_sum + rhs_sum;
};
I didn't try to compile the code though :). If I messed up the order of arguments, just swap them.
Edit: wrong terminology - you don't overload BinaryOp, you just pass it.
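For completeness, here is a sketch of how that lambda would plug into the outer std::accumulate call (not from the original answer; C++14 is assumed for the generic lambda):
#include <numeric>
#include <vector>

int main() {
    std::vector<std::vector<int>> vec{ {1, 2}, {3, 4, 5} };
    auto binary_op = [](auto cur_sum, const auto& el){
        // sum the inner vector, then add it to the running total
        return cur_sum + std::accumulate(el.begin(), el.end(), 0);
    };
    int sum = std::accumulate(vec.begin(), vec.end(), 0, binary_op); // sum == 15
    (void)sum;
}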
Signature of std::accumulate is:
T accumulate( InputIt first, InputIt last, T init,
BinaryOperation op );
Note that the return value is deduced from the init parameter (it is not necessarily the value_type of InputIt).
The binary operation is:
Ret binary_op(const Type1 &a, const Type2 &b);
where... (from cppreference)...
The type Type1 must be such that an object of type T can be implicitly converted to Type1. The type Type2 must be such that an object of type InputIt can be dereferenced and then implicitly converted to Type2. The type Ret must be such that an object of type T can be assigned a value of type Ret.
However, when T is the value_type of InputIt, the above is simpler and you have:
using value_type = std::iterator_traits<InputIt>::value_type;
T binary_op(T,value_type&).
Your final result is supposed to be an int, hence T is int. You need two calls to std::accumulate, one for the outer vector (where value_type == std::vector<int>) and one for the inner vectors (where value_type == int):
#include <iostream>
#include <numeric>
#include <iterator>
#include <vector>

template <typename IT, typename T>
T accumulate2d(IT outer_begin, IT outer_end, const T& init){
    using value_type = typename std::iterator_traits<IT>::value_type;
    return std::accumulate( outer_begin, outer_end, init,
        [](T accu, const value_type& inner){
            return std::accumulate( inner.begin(), inner.end(), accu);
        });
}

int main() {
    std::vector<std::vector<int>> x{ {1,2}, {1,2,3} };
    std::cout << accumulate2d(x.begin(), x.end(), 0);
}
Solutions based on nesting std::accumulate may be difficult to understand.
By using a 1D array of intermediate sums, the solution can be more straightforward (but possibly less efficient).
#include <algorithm>
#include <iostream>
#include <iterator>
#include <numeric>
#include <vector>
using namespace std;

int main()
{
    // create a unary operator for 'std::transform'
    auto accumulate = []( vector<int> const & v ) -> int
    {
        return std::accumulate(v.begin(), v.end(), int{});
    };

    vector<vector<int>> data = {{1,2,3},{4,5},{6,7,8,9}}; // 2D array
    vector<int> temp;                                     // 1D array of intermediate sums
    transform( data.begin(), data.end(), back_inserter(temp), accumulate );
    int result = accumulate(temp);
    cerr << "result=" << result << "\n";
}
The call to transform accumulates each of the inner arrays to initialize the 1D temp array.
To avoid loops, you'll have to specifically add each element:
std::vector<int> database = {1, 2, 3, 4};
int sum = 0;
int index = 0;
// Start the accumulation, one element at a time, with no loop:
sum += database[index++];
sum += database[index++];
sum += database[index++];
sum += database[index++];
There is no guarantee that std::accumulate is implemented without loops. If you need to avoid loops, then don't use it.
IMHO, there is nothing wrong with using loops: for, while or do-while. Processors that have specialized instructions for summing arrays still use loops. Loops are a convenient method for conserving code space. However, there are times when a loop should be unrolled (for performance reasons), and you can have a loop with expanded or unrolled content in it.
With range-v3 (and soon with C++20), you might do
const std::vector<std::vector<int>> v{{1, 2}, {3, 4, 5, 6}};
auto flat = v | ranges::view::join;
std::cout << std::accumulate(begin(flat), end(flat), 0);
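With a C++20 standard library the same idea should work without range-v3; a sketch, assuming <ranges> support (std::views::join over a vector of vectors yields a common range, so std::accumulate can consume it):
#include <iostream>
#include <numeric>
#include <ranges>
#include <vector>

int main() {
    const std::vector<std::vector<int>> v{{1, 2}, {3, 4, 5, 6}};
    auto flat = v | std::views::join; // lazily flattens the nested vectors
    std::cout << std::accumulate(flat.begin(), flat.end(), 0); // prints 21
}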

Fast way to do lexicographical comparing 2 numbers

I'm trying to sort a vector of unsigned int in lexicographical order.
The std::lexicographical_compare function only supports iterators so I'm not sure how to compare two numbers.
This is the code I'm trying to use:
std::sort(myVector->begin(), myVector->end(), [](const unsigned int& x, const unsigned int& y){
    std::vector<unsigned int> tmp1(x);
    std::vector<unsigned int> tmp2(y);
    return lexicographical_compare(tmp1.begin(), tmp1.end(), tmp2.begin(), tmp2.end());
});
C++11 introduces std::to_string.
You can use to_string as below:
std::sort(myVector->begin(), myVector->end(), [](const unsigned int& x, const unsigned int& y){
    std::string tmp1 = std::to_string(x);
    std::string tmp2 = std::to_string(y);
    return lexicographical_compare(tmp1.begin(), tmp1.end(), tmp2.begin(), tmp2.end());
});
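To illustrate the resulting order, here is a small self-contained sketch (the values are picked arbitrarily for the demonstration):
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<unsigned int> v{ 100, 21, 3, 12 };
    std::sort(v.begin(), v.end(), [](unsigned int x, unsigned int y){
        std::string tmp1 = std::to_string(x);
        std::string tmp2 = std::to_string(y);
        return std::lexicographical_compare(tmp1.begin(), tmp1.end(), tmp2.begin(), tmp2.end());
    });
    for (unsigned int n : v) std::cout << n << ' '; // prints: 100 12 21 3
}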
I assume you have some good reasons, but allow me to ask: why are you sorting ints using lexicographical order? In which scenario is 0 not less than 1, for example?
For comparing scalars I suggest you use std::less, the same as the standard library itself does.
The lambda in your code (from the question) could simply use std::less, and that would work perfectly. But let us go one step further and deliver some reusable code ready for pasting into your project. Here is one example:
/// sort a range in place
template< typename T>
inline void dbj_sort( T & range_ )
{
    // the type of elements the range contains
    using ET = typename T::value_type;
    // use of the std::less type
    using LT = std::less<ET>;
    // make its instance whose 'operator ()' we will use
    LT less{};
    std::sort(
        range_.begin(),
        range_.end(),
        [&]( const ET & a, const ET & b) {
            return less(a, b);
        });
}
The above uses std::less<> internally. It will sort anything that has begin() and end() and a public value_type for its elements; in other words, anything that implements the range concept.
Example usage:
std::vector<int> iv_ = { 13, 42, 2 };
dbj_sort(iv_);
std::array<int,3> ia_ = { 13, 42, 2 };
dbj_sort(ia_);
std:: generics in action ...
Why does std::less work here? Among other obvious things, because it compares two scalars. std::lexicographical_compare compares two sequences; it might be used to compare two vectors, but not two elements from one vector of scalars.
HTH

Force gsl::as_span to return a gsl::span<const T>?

Given the following function, which takes a read-only float span (of either dynamic or any static size):
template <long N> void foobar(gsl::span<const float, N> x);
Let's say I have a vector<float>. Passing that as an argument doesn't work, but neither does using gsl::as_span:
std::vector<float> v = {1, 2, 3};
foobar(gsl::as_span(v));
The above does not compile. Apparently gsl::as_span() returns a gsl::span<float>. Besides not understanding why an implicit conversion to gsl::span<const float> isn't possible, is there a way to force gsl::as_span() to return a read-only span?
Poking around GSL/span.h on the github page you linked to, I found the following overload of as_span that I believe is the one being called here:
template <typename Cont>
constexpr auto as_span(Cont& arr) -> std::enable_if_t<
    !details::is_span<std::decay_t<Cont>>::value,
    span<std::remove_reference_t<decltype(arr.size(), *arr.data())>, dynamic_range>>
{
    Expects(arr.size() < PTRDIFF_MAX);
    return {arr.data(), narrow_cast<std::ptrdiff_t>(arr.size())};
}
There's a lot to digest here, but in particular the return type of this function boils down to span<std::remove_reference_t<decltype(*arr.data())>, ...>. For your vector<float> this gives span<float, ...>, because decltype(*arr.data()) is float&. I believe the following should work:
const auto & cv = v;
foobar(as_span(cv));
but can't test it myself unfortunately. Let me know if this works.
as_span is not part of MS/GSL any more, probably because gsl::span was recently aligned with std::span, which you could now use directly with C++20.
You can use std::as_const to get a const container and create a gsl::span from that (or in your case to use gsl::as_span on it).
foobar(gsl::span<const float>(std::as_const(v)));
Please note that, depending on the implementation of foobar, it may not be necessary to template it. You could also just write
void foobar(gsl::span<const float> x);
By default the extent of the span is dynamic_extent, so spans of any length are accepted. Of course, you would then not have the length available at compile time.
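If C++20 is available, a minimal sketch with std::span might look like this (the function name and body are just for illustration); note that a std::vector<float> converts to std::span<const float> implicitly:
#include <numeric>
#include <span>
#include <vector>

// read-only view; any length is accepted because the extent is dynamic
float sum(std::span<const float> x) {
    return std::accumulate(x.begin(), x.end(), 0.0f);
}

int main() {
    std::vector<float> v = {1, 2, 3};
    float s = sum(v); // vector<float> -> span<const float>
    (void)s;
}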

how to stop automatic conversion from int to float and vice-versa in std::map

I wrote a small program using std::map, as follows.
int main()
{
    map<int,float> m1;
    m1.insert(pair<int,float>(10,15.0)); // step-1
    m1.insert(pair<float,int>(12.0,13)); // step-2
    cout<<"map size="<<m1.size()<<endl;  // step-3
}
I created a map m1 with int as the key type and float as the value type (key-value pairs).
Created a normal int-float pair and inserted it into the map.
Created a swapped float-int pair and inserted it into the map. I know that it is implicit conversion that lets this pair get inserted into the map.
Here I just don't want the implicit conversion to take place; a compiler error should be given instead.
What changes do I have to make to this program/map so that the compiler flags an error when we try a step-2 style operation?
Here's a suggestion:
template <typename K, typename V, typename W>
void map_insert(map<K,V>& m, K k, W w) {
    V v = w;
    m.insert(pair<K,V>(k,v));
}

int main() {
    map<int,float> m1;
    map_insert(m1, 10, 15.0);
    map_insert(m1, 12.0, 13); // compiler complains here
    cout<<"map size="<<m1.size()<<endl;
}
The third template parameter is a bit awkward, but it is necessary to allow conversion from double to float. The call at step 2 then fails because K is deduced as int from the map argument but as double from the key 12.0, and the conflicting deductions make the compiler reject the call.
This is not possible (and even if it were possible, it would be a major hack that you shouldn't use).
insert takes a value_type as argument, which is a pair<int const, float>.
So, when you try to insert a pair<float, int>, the compiler looks for a conversion, that is, a constructor of pair<int const, float> that takes a pair<float, int> as its argument, and such a constructor simply exists. In fact, I tried to come up with a partial specialization of that template member (the one that allows the conversion) that could then be made to fail on the remaining template parameter, but I failed to do so; it does not seem possible. Anyway, it would be a very dirty hack that you shouldn't be writing just to avoid a typo: elsewhere you might need this conversion, and it's a no-no to define anything in namespace std anyway.
So what is the solution to "How can I avoid this kind of typo?"
Here is what I usually do:
1) All my maps have a typedef for their type.
2) I then use ::value_type (and ::iterator etc) on that type exclusively.
This is not only more robust, it is also more flexible: you can change the container type later on and the code is likely to still work.
So, your code would become:
int main()
{
    typedef std::map<int,float> m_type;
    m_type m1;
    m1.insert(m_type::value_type(10,15.0)); // allowed
    m1.insert(m_type::value_type(12.0,13)); // no risk for a typo
}
An alternative solution would be to wrap your float in a custom class. This isn't a bad thing to do anyway, for (again) reasons of flexibility. It is rarely pleasant to have written code using a std::map<int, builtin-type> only to realize later that you need to store more data, and believe me, that happens a lot. You might as well start with a class from the beginning.
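A minimal sketch of that wrapper idea (the name Price is invented here, not part of the original answer); the point is that more members can be added later without changing the map's type everywhere:
#include <map>

// hypothetical wrapper instead of a bare float
struct Price {
    float value;
    explicit Price(float v) : value(v) {}
    // room for more data members later
};

int main() {
    std::map<int, Price> m1;
    m1.insert(std::map<int, Price>::value_type(10, Price(15.0f)));
}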
There may well be a simpler way but this is what occurred to me:
#include <iostream>
#include <map>

template<typename Key, typename Value>
struct typesafe_pair
{
    const Key& key;
    const Value& value;

    explicit typesafe_pair(const Key& key, const Value& value)
        : key(key), value(value) {}

    operator typename std::map<Key, Value>::value_type()
    { return typename std::map<Key, Value>::value_type(key, value); }
};

int main()
{
    std::map<int,float> m1;
    m1.insert(std::pair<int,float>(10,15.0));       // allowed
    m1.insert(std::pair<float,int>(12.0,13));       // allowed!!
    m1.insert(typesafe_pair<int,float>(10, 15.0));  // allowed
    m1.insert(typesafe_pair<float, int>(12.0, 13)); // compiler error
    std::cout << "map size=" << m1.size() << std::endl; // step-3
}
EDIT 1: Someone may be able to provide a better (more efficient) solution involving rvalue references and perfect forwarding magic that I don't quite grasp yet.
EDIT 2: I think Carlo Wood has the best solution IMHO.

Mapping combination of 4 integers to a single value

I have 4 separate integers that need to be mapped to an arbitrary, constant value.
For example, 4,2,1,1 will map to the number 42,
and 4,2,1,2 will map to the number 86.
Is there any way I can achieve this by using #defines or some sort of std::map? The concept seems very simple, but for some reason I can't think of a good, efficient method of doing it. The methods I have tried are not working, so I'm looking for some guidance on how to implement this.
Will a simple function suffice?
int get_magic_number( int a, int b, int c, int d)
{
    if( (a==4)&&(b==2)&&(c==1)&&(d==1) ) return 42;
    if( (a==4)&&(b==2)&&(c==1)&&(d==2) ) return 86;
    ...
    throw SomeKindOfError();
}
Now that may look ugly, but you can easily create a macro to pretty it up. (Or a helper class or whatever; I'll just show the macro as I think it's easiest.)
int get_magic_number( int a, int b, int c, int d)
{
#define MAGIC(A,B,C,D,X) if((a==(A))&&(b==(B))&&(c==(C))&&(d==(D))) return (X);
    MAGIC(4,2,1,1, 42);
    MAGIC(4,2,1,2, 86);
    ...
#undef MAGIC
    throw SomeKindOfError();
}
If you really care you can probably craft a constexpr version of this too, which you'll never be able to do with std::map based solutions.
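For what it's worth, a sketch of that constexpr idea (assuming C++14 relaxed constexpr rules; the error value is a placeholder):
constexpr int get_magic_number(int a, int b, int c, int d)
{
    if (a == 4 && b == 2 && c == 1 && d == 1) return 42;
    if (a == 4 && b == 2 && c == 1 && d == 2) return 86;
    return -1; // or throw, which simply makes the call non-constant
}

static_assert(get_magic_number(4, 2, 1, 1) == 42, "looked up at compile time");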
Utilize a std::map<std::vector<int>, int>, so that the vector containing {4,2,1,1} will have the value 42, and so on.
Edit: I agree std::tuple would be a better way to go if you have a compiler with C++11 support. I used a std::vector because it is arguably more portable at this stage. You could also use a std::array<int, 4>.
If you do not have access to boost::tuple, std::tuple or std::array, you can implement a type holding 4 integers with a suitable less-than comparison satisfying strict weak ordering:
struct FourInts {
    int a,b,c,d;
    FourInts() : a(), b(), c(), d() {}
    bool operator<(const FourInts& rhs) const {
        // implement less-than comparison here
    }
};
then use an std::map:
std::map<FourInts, int> m;
If you organise your ints in an array or standard library container, you can use std::lexicographical_compare for the less-than comparison.
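For reference, here is a sketch of the body of that operator<, to be placed inside FourInts above, using no C++11 facilities; it compares the members in order, which gives the same result as a 4-element std::lexicographical_compare:
bool operator<(const FourInts& rhs) const {
    if (a != rhs.a) return a < rhs.a;
    if (b != rhs.b) return b < rhs.b;
    if (c != rhs.c) return c < rhs.c;
    return d < rhs.d;
}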
If you know there's always 4 integers mapped to 1 integer I suggest you go with:
std::map< boost::tuple<int, int, int, int>, int >
Comparison (lexicographical) is already defined for tuples.
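With C++11, std::tuple works the same way; a small sketch using the values from the question:
#include <iostream>
#include <map>
#include <tuple>

int main() {
    std::map<std::tuple<int, int, int, int>, int> m;
    m[std::make_tuple(4, 2, 1, 1)] = 42;
    m[std::make_tuple(4, 2, 1, 2)] = 86;
    std::cout << m[std::make_tuple(4, 2, 1, 2)] << '\n'; // prints 86
}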