If all conditions are met - C++

Is there any way to check if all conditions are met?
e.g:
if(num[i] > (*all*)alt[c])
{
}
Instead of doing it this way
if(num[i] > alt[1] && num[i] > alt[2] && num[i] > alt[3])
{
}
Like is there a shorter way?

You could use a suitable auxiliary function which effectively just calls one of the algorithms, e.g.:
if (is_bigger(num[i], alts)) {
// ...
}
where the function is_bigger() just uses std::all_of() with a suitable condition:
template <typename T, typename Sequence>
bool is_bigger(T const& test, Sequence const& alts) {
return std::all_of(alts.begin(), alts.end(),
[&](T const& other){ return test > other; });
}
std::all_of() is simply an algorithm that checks whether a predicate holds for every element in the sequence delimited by the begin and end iterators (alts.begin() and alts.end()). The other piece in this function is simply a lambda expression creating the corresponding predicate.
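For illustration, a complete, minimal program using this approach (the container names and values are made up):
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

template <typename T, typename Sequence>
bool is_bigger(T const& test, Sequence const& alts) {
    return std::all_of(alts.begin(), alts.end(),
                       [&](T const& other) { return test > other; });
}

int main() {
    std::vector<int> num{10, 3, 7};   // hypothetical sample values
    std::vector<int> alts{1, 2, 5};
    std::size_t i = 0;
    if (is_bigger(num[i], alts)) {    // true: 10 is greater than 1, 2 and 5
        std::cout << "num[" << i << "] is greater than all alternatives\n";
    }
}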

Well, you can take the maximum of all alts, and then compare num[i] to it.
Get the maximum element with:
auto max = std::max_element(alt.begin(), alt.end());
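Note that std::max_element returns an iterator, not a value, so you dereference it for the comparison, and the range must not be empty. A minimal sketch, assuming num and alt are std::vector<int> (values made up):
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> num{9, 4};       // hypothetical sample values
    std::vector<int> alt{1, 2, 3};
    std::size_t i = 0;
    // max_element returns an iterator, so dereference it; the range must not be empty
    if (!alt.empty() && num[i] > *std::max_element(alt.begin(), alt.end())) {
        std::cout << "num[" << i << "] is greater than every element of alt\n";
    }
}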

For now, no, at least not without generating additional code (instead of evaluating everything at compile time). But if you could use 'inline' variables (currently only an early draft idea of mine, not an actual language feature), you could write:
auto FuncCheckingMulCond(auto &inline varSrc, auto &inline varTarget, inline size_t nTimes)
{
for(inline size_t i(0); i < nTimes; ++i)
if(!(varSrc > varTarget[i]))
return false;
return true;
}
And so your example will look like this:
if(FuncCheckingMulCond(num[i], alt, 5))
{
}
An inline variable would be one whose value is known at compile time, and since 'FuncCheckingMulCond' contains 'inline' parameters, it would be evaluated at compile time too. '&inline' would be an inline reference, meaning that instead of storing a pointer, every use of it is replaced with the variable it was bound to. So in theory the above code should do exactly what you wanted, but unfortunately none of this is part of ISO C++ yet.

Related

C++ function that takes an array, a predicate and an operator as arguments and applies the operator to the array's elements where the predicate is satisfied

I'm trying to write a function as described in title.
I made a custom dynamic array template called queue, and I need to create a function that checks, for every element of this queue, whether a predicate passed as an argument is satisfied; if it is, I need to apply an operator, also passed as an argument, to that element.
First of all, I probably need an explanation: the predicate should be unary, so it could be something like isdigit or isalpha, right? And the operator could be something like a simple post-increment ++.
The main problem is that I really don't have any idea how I can pass operators and predicates as function parameters.
If I understood the requirement correctly, here is what I tried to do, but of course it is not working... can anyone help out?
template<typename L, typename OPERATOR>
void change_if(queue<L> &que, bool (*f)(L), OPERATOR op) {
for(int i=0; i<que._size-1; i++) {
if ((*f)(que._queue[i]))
que._queue[i] = op(que._queue[i]);
}
}
int main(){
//as constructor, 3 is referred to size of the queue and 99 to the values.
queue<int> q(3, 99);
change_if(q, isdigit, ++);
}
First, I would suggest passing the predicate with an inferred type as well, so that you can also pass lambdas, i.e.
template<typename L, typename P, typename OP>
void change_if(queue<L> &que, P pred, OP op) {
for(...)
if(pred(que._queue[i]))
que._queue[i] = op(que._queue[i]);
}
Your idea is a bit inconsistent, as you do the assignment (que._queue[i] = op(que._queue[i]);) and then come up with the idea that op should be something like the post-increment operator.
Both together would boil down to a = a++ which has no effect.
You could go one of two ways:
think of "op" as a function, getting an element and returning an altered one, i.e. L op(const L&l) { return l+1; }. Then use the code above and call
change_if(q, [](auto && l) { return std::isdigit(l); }, [](auto && l) { return l + 1; });
think of "op" as a function, changing the element it got, i.e. void op(L&l) { l++;}. Then use if(pred(...)) op(que._queue[i]) and call
change_if(q, [](auto && l) { return std::isdigit(l); }, [](auto && l) { l++; });
As a side note: you cannot simply take the address of std::isdigit as it is overloaded so I wrapped the call in a lambda here.
Another side note: I don't know your queue implementation but < size-1 in the for loop's condition looks a bit like you're missing the last element.
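Putting it together, here is a minimal sketch using std::vector as a stand-in for the custom queue class (the names and values are illustrative, not the original class):
#include <cctype>
#include <cstddef>
#include <iostream>
#include <vector>

template <typename L, typename P, typename OP>
void change_if(std::vector<L>& que, P pred, OP op) {
    for (std::size_t i = 0; i < que.size(); ++i)
        if (pred(que[i]))
            que[i] = op(que[i]);
}

int main() {
    std::vector<char> q{'a', '1', 'b', '2'};
    // increment only the digit characters: '1' -> '2', '2' -> '3'
    change_if(q,
              [](char c) { return std::isdigit(static_cast<unsigned char>(c)) != 0; },
              [](char c) { return static_cast<char>(c + 1); });
    for (char c : q) std::cout << c;   // prints a2b3
    std::cout << '\n';
}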

Counting elements greater than a number in vector

I want to count the number of elements greater than a number in a C++ vector. The threshold value is to be taken as input from the user.
The code for counting elements greater than a number is given as:
ctr=count_if(v.begin(),v.end(), greater1);
The corresponding function:
bool greater1(int value)
{
return value >= 8;
}
The problem is that I only know the threshold value (here 8) right before the count_if call, so I need to pass the threshold t as a parameter. How can I achieve that?
N.B. C++11 only.
The easiest way to do this is to use a lambda expression. With a lambda you build a functor (called a closure object) right at the call site of count_if, and inside the body of the lambda you can use whatever you know at that point. That would leave you with something like
auto minimum_value = /* something that gets the minimum value you want to use for the comparison */
auto count = std::count_if(v.begin(), v.end(), [&](int const& val){ return val >= minimum_value; });
// the [&] capture takes minimum_value by reference
(The lambda parameter is spelled int rather than auto, because generic lambdas require C++14 and the question asks for C++11.)
Make a function that gives you the threshold function!
auto above(int threshold) {
// This captures a copy of threshold
return [=](int value) {
return value >= threshold;
};
};
You can then get the count using above, just by passing the threshold as an argument:
auto count = count_if(v.begin(), v.end(), above(8));
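One caveat: deducing the return type of above with plain auto needs C++14. Since the question asks for C++11 only, a C++11-friendly sketch could wrap the lambda in std::function:
#include <functional>

std::function<bool(int)> above(int threshold) {
    // captures a copy of threshold, as before
    return [=](int value) { return value >= threshold; };
}

// usage stays the same:
// auto count = std::count_if(v.begin(), v.end(), above(8));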
Like NathanOliver said, we need to "capture" the threshold value to be used internally. A lambda accomplishes that, but how?
When you write a lambda like
int threshold = 8;
std::count_if(/*...*/, [threshold](int next_val){return next_val >= threshold;});
the compiler (in C++11 and beyond) uses this lambda syntax to generate a lightweight class that exposes the function call operator, roughly like so:
struct my_greater_equal
{
explicit my_greater_equal(int _threshold) : threshold(_threshold){}
bool operator()(int next_val) const
{
return next_val >= threshold;
}
int threshold;
};
(This is only mostly like what a lambda looks like)
Then an instance is created and used in count_if as-if:
std::count_if(my_collection.cbegin(), my_collection.cend(), my_greater_equal{8});
Internally, std::count_if calls my_greater_equal::operator() for each element in your collection.
Pre-C++11, we had to manually create these lightweight function objects (sometimes called functors, even if that's not technically correct).
Things are much easier now :-)
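For reference, count_if itself boils down to a loop like the following (a rough sketch of its behaviour, not the actual library source):
#include <iterator>

template <typename InputIt, typename Predicate>
typename std::iterator_traits<InputIt>::difference_type
my_count_if(InputIt first, InputIt last, Predicate pred) {
    typename std::iterator_traits<InputIt>::difference_type n = 0;
    for (; first != last; ++first)
        if (pred(*first))   // calls the function object's operator() for each element
            ++n;
    return n;
}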

For loop index type deduction best practice

Let's say, I have a container c of a type that provides a size() method and I want to loop over this container while keeping track of each item's index:
for (/*TODO*/ i = 0; i < c.size(); i++) {...}
In a post-C++11 world, where automatic type deduction solves so many problems nicely, what should we use in place of the TODO above? The only thing that seems correct to me, no matter what the type of size() is, is the following:
for (decltype(c.size()) i = 0; i < c.size(); i++) {...}
But this seems overly verbose and, in my opinion, doesn't help readability.
Another solution might be this:
for (auto end = c.size(), i = 0; i < end; i++) {...}
But this doesn't help readability either and, of course, doesn't have the same semantics as the original snippet.
So, my question is: what is the best way to deduce the type of a loop index variable, given only the type of the index's limit?
Short answer to the first question in your text: You should replace the /*TODO*/ by unsigned, std::size_t or something similar, meaning: don't bother deducing the type, just pick a type suitable for any reasonable container size.
This would be an unsigned, reasonably large type, so the compiler is not tempted to yell at you because of possible precision losses. In the comments above you write that size_t is not guaranteed to be a good replacement for decltype(c.size()), but while it is not impossible to implement a container whose index type is incompatible with size_t, such indices would almost surely not be numbers (and thus incompatible with i = 0), and the container would not have a size() method either. A size() method implies a nonnegative integral value, and since size_t is designed for exactly those numbers, it is close to impossible to have a container whose size cannot be represented by it.
Your second question aims at how to deduce the type, and you already have provided the easiest, yet imperfect answers. If you want a solution that is not as verbose as decltype and not as surprising to read as auto end, you could define a template alias and a generator function for the starting index in some utility header:
template <class T>
using index_t = decltype(std::declval<T>().size());
template <class T, class U>
constexpr index_t<T> index(T&&, U u) { return u; }
//and then in the actual location of the loop:
for (auto i = index(c,0); i < c.size(); ++i) {...}
//which is the same as
for (auto i = index_t<std::vector<int>>(0); i < c.size(); ++i) {...}
If you want to have a more general index-type, e.g. for arrays and classes that don't have a size method, it gets a bit more complicated, because template aliases may not be specialized:
template <class T>
struct index_type {
using type = decltype(std::declval<T>().size());
};
template <class T>
using index_t = typename index_type<T>::type;
template <class T, class U>
constexpr index_t<T> index(T&&, U u) { return u; }
//index_type specializations
template <class U, std::size_t N>
struct index_type<U[N]> {
using type = decltype(N);
};
template <>
struct index_type<System::AnsiString::AnsiString> { //YUCK! VCL!
using type = int;
};
However, this is a lot of stuff just for the few cases where you actually need an index and a simple foreach loop is not sufficient.
If c is a container you can use container::size_type.
Here is the precedence that I follow
1) range-for
2) iterator/begin()/end() with type deduced with auto.
For cases where indexing is required, which is the subject here, I prefer to use
for( auto i = 0u; i < c.size(); ++i) {...}
Even if I forget to add the u suffix to the 0, the compiler will warn me anyway.
I would have loved decltype if it weren't so verbose:
for (decltype(c.size()) i = 0; i < c.size(); i++) {...}
Hmm... this needs C++14 or a compiler that supports auto in the lambda parameters. If you're using this pattern a lot, then a helping function might be useful:
template< typename Container, typename Callable >
void for_each_index( Container& container, Callable callable )
{
for (decltype(container.size()) i = 0; i < container.size(); i++)
{
callable(i);
}
}
Use as:
for_each_index(c, [] (auto index) {
// ...
});
As a matter of fact, I have seen this plenty of times (cough LLVM, Clang), where they use
for (/* type */ iter = begin(), End = end(); iter != End; ++iter) { ... }
The advantage of having End evaluated at the beginning is that the compiler can be sure it doesn't need to call it every time. For collections where calculating the end is trivial and the compiler is already able to deduce that it doesn't need to call end() multiple times, this won't help, but in other cases it will.
Or you could always use a helper:
Implementing enumerate_foreach based on Boost foreach

Returning container from function: optimizing speed and modern style

Not entirely a question; rather something I have been pondering: how to write such code more elegantly, style-wise, while at the same time fully making use of the new C++ standard, etc. Here is the example.
Returning the Fibonacci sequence into a container, up to N values (for those not mathematically inclined: each value is the sum of the previous two, with the first two values equal to 1, i.e. 1, 1, 2, 3, 5, 8, 13, ...).
example run from main:
std::vector<double> vec;
running_fibonacci_seq(vec,30000000);
1)
template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T& coll, const INT_TYPE& N)
{
coll.resize(N);
coll[0] = 1;
if (N>1) {
coll[1] = 1;
for (auto pos = coll.begin()+2;
pos != coll.end();
++pos)
{
*pos = *(pos-1) + *(pos-2);
}
}
}
2) The same but using an rvalue reference && instead of &, i.e.
void running_fibonacci_seq(T&& coll, const INT_TYPE& N)
EDIT: as noticed by the users who commented below, rvalue vs. lvalue plays no role in the timing; the speeds were actually the same, for reasons discussed in the comments.
results for N = 30,000,000
Time taken for &:919.053ms
Time taken for &&: 800.046ms
Firstly, I know this really isn't a question as such, but: which of these is the best modern C++ code? With the rvalue reference (&&) it appears that move semantics are in place and no unnecessary copies are being made, which gives a small improvement in time (important for me due to future real-time application development). Some specific ''questions'' are:
a) Passing a container (a vector in my example) to a function as a parameter is NOT an elegant example of how rvalue references should really be used. Is that true? If so, how would rvalue references really shine in the above example?
b) Regarding the coll.resize(N); call and the N=1 case: is there a way to avoid these so the user gets a simple interface and can just call the function without sizing the vector dynamically? Can template metaprogramming be of use here, so the vector is allocated with a particular size at compile time (i.e. running_fibonacci_seq<30000000>)? Since the numbers can be large, is there any need for template metaprogramming, and if so, can we use this (link) also?
c) Is there an even more elegant method? I have a feeling the std::transform function could be used with a lambda, e.g.
void running_fibonacci_seq(T&& coll, const INT_TYPE& N)
{
coll.resize(N);
coll[0] = 1;
coll[1] = 1;
std::transform (coll.begin()+2,
coll.end(), // source
coll.begin(), // destination
[????](????) { // lambda as function object
return ????????;
});
}
[1] http://cpptruths.blogspot.co.uk/2011/07/want-speed-use-constexpr-meta.html
Due to "reference collapsing" this code does NOT use an rvalue reference, or move anything:
template <typename T, typename INT_TYPE>
void running_fibonacci_seq(T&& coll, const INT_TYPE& N);
running_fibonacci_seq(vec,30000000);
All of your questions (and the existing comments) become quite meaningless when you recognize this.
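A short illustration of the collapsing (the demo function name is made up): when the argument is an lvalue such as vec, T is deduced as an lvalue reference, and T&& collapses back to an lvalue reference, so nothing can be moved.
#include <type_traits>
#include <vector>

template <typename T>
void running_fibonacci_seq_demo(T&& coll) {
    // With an lvalue argument, T is deduced as std::vector<double>&, and
    // T&& "collapses" to std::vector<double>&, an ordinary lvalue reference.
    static_assert(std::is_lvalue_reference<T&&>::value,
                  "called with an lvalue: nothing is moved");
}

int main() {
    std::vector<double> vec;
    running_fibonacci_seq_demo(vec);   // T = std::vector<double>&, no move happens
}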
Obvious answer:
std::vector<double> running_fibonacci_seq(uint32_t N);
Why?
Because of const-ness:
std::vector<double> const result = running_fibonacci_seq(....);
Because of easier invariants:
void running_fibonacci_seq(std::vector<double>& t, uint32_t N) {
// Oh, forgot to clear "t"!
t.push_back(1);
...
}
But what of speed?
There is an optimization called Return Value Optimization that allows the compiler to omit the copy (and build the result directly in the caller's variable) in a number of cases. It is specifically allowed by the C++ Standard even when the copy/move constructors have side effects.
So, why pass "out" parameters at all?
you can only have one return value (sigh)
you may wish to reuse the allocated resources (here, the memory buffer of t)
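For completeness, a sketch of the return-by-value version (same algorithm as in the question); with RVO or a cheap vector move, no deep copy occurs:
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<double> running_fibonacci_seq(std::uint32_t N) {
    std::vector<double> coll(N);
    if (N > 0) coll[0] = 1;
    if (N > 1) {
        coll[1] = 1;
        for (std::size_t i = 2; i < N; ++i)
            coll[i] = coll[i - 1] + coll[i - 2];
    }
    return coll;   // RVO or a cheap vector move; no deep copy
}

int main() {
    std::vector<double> const result = running_fibonacci_seq(30000000);
    (void)result;  // keeps the example self-contained
}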
Profile this:
#include <vector>
#include <cstddef>
#include <type_traits>
template <typename Container>
Container generate_fibbonacci_sequence(std::size_t N)
{
Container coll;
coll.resize(N);
coll[0] = 1;
if (N>1) {
coll[1] = 1;
for (auto pos = coll.begin()+2;
pos != coll.end();
++pos)
{
*pos = *(pos-1) + *(pos-2);
}
}
return coll;
}
struct fibbo_maker {
std::size_t N;
fibbo_maker(std::size_t n):N(n) {}
template<typename Container>
operator Container() const {
typedef typename std::remove_reference<Container>::type NRContainer;
typedef typename std::decay<NRContainer>::type VContainer;
return generate_fibbonacci_sequence<VContainer>(N);
}
};
fibbo_maker make_fibbonacci_sequence( std::size_t N ) {
return fibbo_maker(N);
}
int main() {
std::vector<double> tmp = make_fibbonacci_sequence(30000000);
}
The fibbo_maker stuff is just me being clever, but it lets me deduce the type of Fibonacci sequence you want without you having to repeat it.

Acceptable fix for the majority of signed/unsigned warnings?

I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.)
The majority of the places where this goes wrong is a simple iteration over a std::vector; often it used to be an array in the past and was changed to a std::vector later. So these loops generally look like this:
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we used two ways to silence the compiler warning:
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea:
template <typename T> struct vector : public std::vector<T>
{
typedef std::vector<T> base;
int size() const { return base::size(); }
int max_size() const { return base::max_size(); }
int capacity() const { return base::capacity(); }
vector() : base() {}
vector(int n) : base(n) {}
vector(int n, const T& t) : base(n, t) {}
vector(const base& other) : base(other) {}
};
template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
typedef std::map<Key, Data> base;
typedef typename base::key_compare key_compare;
int size() const { return base::size(); }
int max_size() const { return base::max_size(); }
int erase(const Key& k) { return base::erase(k); }
int count(const Key& k) { return base::count(k); }
map() : base() {}
map(const key_compare& comp) : base(comp) {}
template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
map(const base& other) : base(other) {}
};
// TODO: similar code for other container types
What you see is basically the STL classes, with the methods that return size_type hidden by versions returning plain 'int'. The constructors are needed because they aren't inherited.
What would you think of this as a developer, if you'd see a solution like this in an existing codebase?
Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice, simple solution to prevent bugs and increase readability? Or maybe you'd rather see that we had spent (half) a day or so changing all these loops to use std::vector<>::iterator?
(In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
Don't derive publicly from STL containers. They have nonvirtual destructors, so undefined behaviour is invoked if anyone deletes one of your objects through a pointer to base. If you must derive, e.g. from a vector, do it privately and expose the parts you need with using declarations.
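A minimal sketch of that private-inheritance approach (the wrapper name and the exposed members are just an example):
#include <vector>

template <typename T>
class checked_vector : private std::vector<T> {
    typedef std::vector<T> base;
public:
    checked_vector() : base() {}
    explicit checked_vector(int n) : base(n) {}
    using base::begin;
    using base::end;
    using base::push_back;
    using base::operator[];
    int size() const { return static_cast<int>(base::size()); }
};
// Private inheritance means nobody can delete a checked_vector through a
// std::vector<T>*, so the nonvirtual-destructor problem cannot arise.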
Here, I'd just use a size_t as the loop variable. It's simple and readable. The poster who commented that using an int index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b - one who doesn't realize that the subscript operator for vector is constant time. (vector<T>::size_type is accurate, but needlessly verbose IMO).
While I don't think "use iterators, otherwise you look n00b" is a good solution to the problem, deriving from std::vector appears much worse than that.
First, developers expect vector to be std::vector, and map to be std::map. Second, your solution does not scale to other containers, or to other classes/libraries that interact with containers.
Yes, iterators are ugly, iterator loops are not very well readable, and typedefs only cover up the mess. But at least, they do scale, and they are the canonical solution.
My solution? An STL for-each macro. That is not without problems (mainly, it is a macro, yuck), but it gets the meaning across. It is not as advanced as e.g. this one, but it does the job.
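One possible shape for such a macro, assuming C++11's auto is available (a simple sketch; Boost's version is far more involved, partly because it predates auto):
// Iterates over any standard container; 'it' becomes the loop iterator.
#define STL_FOR_EACH(it, container) \
    for (auto it = (container).begin(), it##_end = (container).end(); \
         it != it##_end; ++it)

// usage:
//   std::vector<int> v;
//   STL_FOR_EACH(i, v) { *i += 1; }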
I made this community wiki... Please edit it. I don't agree with the advice against "int" anymore. I now see it as not bad.
Yes, I agree with Richard: you should never use 'int' as the counting variable in a loop like those. The following is how you might do various loops using indices (although there is little reason to; occasionally it can be useful).
Forward
for(std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
/* ... */
}
Backward
You can do this, which is perfectly defined behavior:
for(std::vector<int>::size_type i = someVector.size() - 1;
i != (std::vector<int>::size_type) -1; i--) {
/* ... */
}
Soon, with c++1x (next C++ version) coming along nicely, you can do it like this:
for(auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
/* ... */
}
Decrementing below 0 will cause i to wrap around, because it is unsigned.
"But unsigned will make bugs slurp in!"
That should never be an argument for doing it the wrong way (i.e. using 'int').
Why not use std::size_t above?
The C++ Standard defines, in 23.1 p5 (Container Requirements), that T::size_type, for T being some container, is some implementation-defined unsigned integral type. Now, using std::size_t for i above could let bugs slurp in silently: if T::size_type is narrower or wider than std::size_t, it could overflow i, or i might never reach (std::size_t)-1 when someVector.size() == 0. Likewise, the loop condition would be broken completely.
Definitely use an iterator. Soon you will be able to use the 'auto' type, for better readability (one of your concerns) like this:
for (auto i = someVector.begin();
i != someVector.end();
++i)
Skip the index
The easiest approach is to sidestep the problem by using iterators, range-based for loops, or algorithms:
for (auto it = begin(v); it != end(v); ++it) { ... }
for (const auto &x : v) { ... }
std::for_each(v.begin(), v.end(), ...);
This is a nice solution if you don't actually need the index value. It also handles reverse loops easily.
Use an appropriate unsigned type
Another approach is to use the container's size type.
for (std::vector<T>::size_type i = 0; i < v.size(); ++i) { ... }
You can also use std::size_t (from <cstddef>). There are those who (correctly) point out that std::size_t may not be the same type as std::vector<T>::size_type (though it usually is). You can, however, be assured that the container's size_type will fit in a std::size_t. So everything is fine, unless you use certain styles for reverse loops. My preferred style for a reverse loop is this:
for (std::size_t i = v.size(); i-- > 0; ) { ... }
With this style, you can safely use std::size_t, even if it's a larger type than std::vector<T>::size_type. The style of reverse loops shown in some of the other answers require casting a -1 to exactly the right type and thus cannot use the easier-to-type std::size_t.
Use a signed type (carefully!)
If you really want to use a signed type (or if your style guide practically demands one), like int, then you can use this tiny function template that checks the underlying assumption in debug builds and makes the conversion explicit so that you don't get the compiler warning message:
#include <cassert>
#include <cstddef>
#include <limits>
template <typename ContainerType>
constexpr int size_as_int(const ContainerType &c) {
const auto size = c.size(); // if no auto, use `typename ContainerType::size_type`
assert(size <= static_cast<std::size_t>(std::numeric_limits<int>::max()));
return static_cast<int>(size);
}
Now you can write:
for (int i = 0; i < size_as_int(v); ++i) { ... }
Or reverse loops in the traditional manner:
for (int i = size_as_int(v) - 1; i >= 0; --i) { ... }
The size_as_int trick is only slightly more typing than the loops with implicit conversions; you get the underlying assumption checked at runtime in debug builds; the explicit cast silences the compiler warning; you get the same speed in non-debug builds because it will almost certainly be inlined; and the optimized object code shouldn't be any larger, because the template doesn't do anything the compiler wasn't already doing implicitly.
You're overthinking the problem.
Using a size_t variable is preferable, but if you don't trust your programmers to use unsigned correctly, go with the cast and just deal with the ugliness. Get an intern to change them all and don't worry about it after that. Turn on warnings as errors and no new ones will creep in. Your loops may be "ugly" now, but you can understand that as the consequences of your religious stance on signed versus unsigned.
vector::size() returns the vector's size_type (in practice std::size_t), so just change int to size_t and it should be fine.
Richard's answer is more correct, except that it's a lot of work for a simple loop.
I notice that people have very different opinions about this subject. I also have an opinion which does not convince others, so it makes sense to search for support from some gurus, and I found the C++ Core Guidelines:
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
maintained by Bjarne Stroustrup and Herb Sutter; the latest update, on which I base the information below, is from April 10, 2022.
Please take a look at the following code rules:
ES.100: Don’t mix signed and unsigned arithmetic
ES.101: Use unsigned types for bit manipulation
ES.102: Use signed types for arithmetic
ES.107: Don’t use unsigned for subscripts, prefer gsl::index
So, supposing that we want to index in a for loop and for some reason the range based for loop is not the appropriate solution, then using an unsigned type is also not the preferred solution. The suggested solution is using gsl::index.
But in case you don’t have gsl around and you don’t want to introduce it, what then?
In that case, I would suggest having a utility template function like the size_as_int suggested by Adrian McCarthy above.