I'm using a custom comparator function with next_permutation, but I don't understand why I'm getting this error:
Expression: invalid operator<
I want my comparator to enforce at least these restrictions in its body, but I keep getting errors:
bool mycomp(int i, int j)
{
return (((i < 0) && (j > 0)) || ((i > 0) && (j < 0)));
};
but when I do it like this, it works fine:
bool mycomp(int i, int j)
{
return (((i < 0) && (j > 0)));
};
I want to also add another restriction, but don't know how.
here is the relevant code with the next_permutation function:
int counter, size, *guests;
for (int i = 2; i <= 9; i++)
{
size = i * 2;
counter = 1;
guests = new int[size];
for (int j = 0; j < size; j += 2)
{
guests[j] = counter;
guests[j + 1] = 0 - counter;
++counter;
}
sort(guests, guests + size);
counter = 0;
while (next_permutation(guests, guests + size, mycomp))
{
++counter;
}
}
I also understand that there is a strict weak ordering requirement. I understood the gist of it after reading about it, but not sure exactly how it applies to this situation. Thank you in advance.
Your compiler is trying to tell you (through a runtime assertion) that your comparator is invalid. It is invalid as it does not respect the strict weak ordering contract for at least two reasons :
1) It is not asymmetric (i.e. f(x, y) must imply !f(y, x)):
std::cout << std::boolalpha << mycomp(2, -3) << '\n';
std::cout << std::boolalpha << mycomp(-3, 2) << '\n';
Output:
true
true
2) It is not transitive (i.e. f(x, y) and f(y, z) must imply f(x, z)):
std::cout << std::boolalpha << mycomp(2, -3) << '\n';
std::cout << std::boolalpha << mycomp(-3, 2) << '\n';
std::cout << std::boolalpha << mycomp(2, 2) << '\n';
Output:
true
true
false
You probably need to rethink your problem and how you really want to order your elements while doing the permutations.
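For illustration only (this may or may not match the restriction you have in mind), a comparator that orders elements purely by sign, negatives before non-negatives, is a valid strict weak ordering, because it behaves like a comparison of the keys (x < 0 ? 0 : 1); values with the same sign simply compare equivalent:

// Hypothetical example of a valid strict weak ordering:
// all negative values sort before all non-negative values,
// and values with the same sign are treated as equivalent.
bool mycomp(int i, int j)
{
    return (i < 0) && (j >= 0);
}

Passed to std::next_permutation, such a comparator treats all negatives as interchangeable and all non-negatives as interchangeable, so permutations that only swap equivalent values are not counted as distinct.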
Permutation is about ordering. The default ordering is 1<2<3<4<5<6 etc -- the order you are used to.
The custom comparator lets you set an ordering that is different than the default one.
This is useful in a number of situations.
For a toy example, you could set all even numbers to be greater than all odd ones -- 1<3<5<7<...<0<2<4<6<8<....
Example implementation that leverages std::tuple:
#include <tuple>

// Key: (is_even, value), so odd numbers sort before even ones,
// and each group is ordered by value.
std::tuple<bool, unsigned> myhelper( unsigned x ) {
    return std::make_tuple( !(x%2), x );
}

bool myorder( unsigned lhs, unsigned rhs ) {
    return myhelper(lhs) < myhelper(rhs);   // tuples compare lexicographically
}
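As a quick sketch of how that comparator might be used (assuming the myhelper/myorder functions above are in scope):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<unsigned> v{0, 1, 2, 3, 4, 5, 6, 7};
    std::sort(v.begin(), v.end(), myorder);        // odd numbers first, then even
    for (unsigned x : v) std::cout << x << ' ';    // prints: 1 3 5 7 0 2 4 6
}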
For types that don't have an operator< that is a strict weak ordering, the comparator lets you provide one. A complex number can be ordered lexicographically or by magnitude: the magnitude ordering is only a weak ordering, not a total order (distinct values with equal magnitude compare equivalent), and making < lexicographic by default would be surprising.
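For instance, a minimal sketch of a lexicographic comparator for std::complex (a type with no built-in operator<):

#include <algorithm>
#include <complex>
#include <vector>

// Lexicographic strict weak ordering: compare real parts, then imaginary parts.
bool complex_less(const std::complex<double>& a, const std::complex<double>& b) {
    if (a.real() != b.real()) return a.real() < b.real();
    return a.imag() < b.imag();
}

int main() {
    std::vector<std::complex<double>> v{{2, 1}, {1, 3}, {1, -1}};
    std::sort(v.begin(), v.end(), complex_less);   // (1,-1), (1,3), (2,1)
}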
I tried to solve this exercise and got 66 percent, but I cannot understand why. Can you help?
The exercise is:
Write a function:
int solution(vector<int> &A);
that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.
The solution I wrote is:
#include <algorithm>
#include<cmath>
using namespace std;
int solution(vector<int> &A) {
if (A.size() == 0 || (A.size() == 1 && A.at(0) <= 0))
return 1;
if (A.size() == 1)
return A.at(0) + 1;
sort(A.begin(), A.end());
if (A.at(A.size() - 1) <= 0)
return 1;
auto ip = std::unique(A.begin(), A.end());
A.resize(distance(A.begin(), ip));
A.erase(remove_if(A.begin(), A.end(), [](const int i) { return i < 0; }),A.end());
if (A.at(0) != 1)
return 1;
if (A.size() == 1)
return (A.at(0) != 1 ? 1 : 2);
int i = 0;
for (; i < A.size(); ++i) {
if (A.at(i) != i + 1)
return A.at(i - 1) + 1;
}
return A.at(A.size()) + 1;
}
The following algorithm has O(n) complexity. There is no need to sort or to erase.
We know that the first missing value is less than or equal to n+1, where n is the array size.
So we simply use an array present[n+2] of size n+2, initialised to 0, and then look at all values A[i]:
if (A[i] <= 1+n && A[i] > 0) present[A[i]] = 1;
Finally, in a second step, we examine the array present[] and search for the first index k such that present[k] == 0.
#include <iostream>
#include <vector>
int find_missing (const std::vector<int> &A) {
int n = A.size();
std::vector<int> present (n+2, 0);
int vmax = n+1;
for (int i = 0; i < n; ++i) {
if (A[i] <= vmax && A[i] > 0) {
present[A[i]] = 1;
}
}
for (int k = 1; k <= vmax; ++k) {
if (present[k] == 0) return k;
}
return -1;
}
int main() {
std::vector<int> A = {1, 2, 0, 3, -3, 5, 6, 8};
int missing = find_missing (A);
std::cout << "First missing element = " << missing << std::endl;
return 0;
}
Well this is wrong
if(A.size()==1)
return A.at(0)+1;
If A is {2} that code will return 3 when the correct answer is 1
Also
A.erase(remove_if(A.begin(), A.end(),[](const int i) {return i < 0; }),A.end());
should be
A.erase(remove_if(A.begin(), A.end(),[](const int i) {return i <= 0; }),A.end());
Also
return A.at(A.size()) + 1;
is a guaranteed vector out of bounds error.
Even a small amount of testing and debugging would have caught these errors. It's a habit you should get into.
I think there are far too many special cases in the code, which only serve to complicate the code and increase the chance of bugs.
This answer is the implementation of the proposal given in the comment by PaulMcKenzie.
So, all credits go to PaulMcKenzie
It is not the fastest solution, but it is compact. The idea is basically:
Sort the data.
Then compare adjacent values and check whether the next value is equal to the previous value + 1.
If not, then we have found a gap. This can be implemented with the function std::adjacent_find.
We put all the side conditions into the lambda. If std::adjacent_find cannot find a gap, then we take the next possible positive value.
I am not sure what more I could describe. Please see the example below:
#include <iostream>
#include <vector>
#include <algorithm>
int solution(std::vector<int>& data) {
// Sort
std::sort(data.begin(), data.end());
// Check if there is a gap in the positive values
const auto gap = std::adjacent_find(data.begin(), data.end(), [](const int p, const int n) { return (n !=p) && (n != (p + 1) && p>0); });
// If there is no gap, the take the next positive value
return (gap == data.end()) ? (data.back() > 0 ? data.back() + 1 : 1) : *gap + 1;
}
int main() {
//Some test cases
std::vector<std::vector<int>> testCases{
{1,3,6,4,1,2},
{1,2,3},
{-1,-3}
};
for (auto& testCase : testCases)
std::cout << solution(testCase) << '\n';
return 0;
}
Others have already pointed out the main errors, but I would like to invite you to try a different approach instead of fixing all the bugs and spending a lot of time on debugging, because your solution seems a little overcomplicated.
Here I propose a way you can think about the problem:
What is the minimum number the function can return? Since it returns a positive integer, it is 1, in the case where 1 is not in the array. Because of that, we can use any number <= 0 as a placeholder to check whether we have found our result while scanning the vector (see next);
In case 1 is in the array, how do I find the wanted number? Your intuition is correct: it is easier if your vector is sorted. You can iterate over your data, and when you find a "hole" between two consecutive elements, the value of the first element of the hole + 1 is your result;
What do I do if the array contains 1 and has no holes? Well, you return the smallest positive integer that is not in the array, so the last element + 1. You can detect this case by checking whether your "candidate" value (which starts at a number that can never be returned, so <= 0) has changed during the scan;
Let's go to the code:
#include <algorithm>
#include <vector>

int solution(std::vector<int>& v){
    int retVal = 0;
    std::sort(v.begin(), v.end());
    // look for a "hole" between consecutive positive values
    for (std::size_t i = 0; i + 1 < v.size(); i++){
        if (v[i] > 0 && v[i+1] > v[i] + 1){
            retVal = v[i] + 1;
            break;
        }
    }
    // no hole found: either every value is <= 0, or 1..v.back() is complete
    if (retVal == 0) {
        if (v.back() > 0)
            retVal = v.back() + 1;
        else
            retVal = 1;
    }
    return retVal;
}
As suggested you can use the standard library a little bit more, but I think this is reasonably simple and efficient.
Other note:
I think your assignment does not bother you with this, but I mention it just for completeness. Most of the time you don't want a function to modify its parameters: you can pass the vector by value, which means you actually pass a complete copy of your data and leave the original untouched, or you can pass a const reference and create a copy inside the function.
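For instance, a sketch of a non-modifying wrapper (solution_copy is a hypothetical name, and it reuses the solution() function above):

#include <vector>

// Hypothetical wrapper: take the vector by value, so the caller's data is
// never modified; solution() then sorts and scans the local copy.
int solution_copy(std::vector<int> v) {
    return solution(v);
}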
Trying to implement combinations of 4 objects taken 2 at a time, without taking the arrangement into account (such arrangements must be considered duplicates, so order is not important), using a std::set container:
struct Combination {
int m;
int n;
Combination(const int m, const int n):m(m),n(n){}
};
const auto operator<(const auto & a, const auto & b) {
//explicitly "telling" that order should not matter:
if ( a.m == b.n && a.n == b.m ) return false;
//the case "a.m == b.m && a.n == b.n" will result in false here too:
return a.m == b.m ? a.n < b.n : a.m < b.m;
}
#include <set>
#include <iostream>
int main() {
std::set< Combination > c;
for ( short m = 0; m < 4; ++ m ) {
for ( short n = 0; n < 4; ++ n ) {
if ( n == m ) continue;
c.emplace( m, n );
} }
std::cout << c.size() << std::endl; //12 (but must be 6)
}
The expected set of combinations is 0 1, 0 2, 0 3, 1 2, 1 3, 2 3, which is 6 of them, but the resulting c.size() == 12. Also, my operator<(Combination, Combination) does satisfy the requirement that !comp(a, b) && !comp(b, a) means the elements are equal.
What am I missing?
Your code can't work¹, because your operator< does not implement a strict weak ordering, which std::set requires. One requirement for a strict weak ordering is that, for any three elements a, b and c,
a < b
and
b < c
imply that
a < c
(in a mathematical sense). Let's check that. If we take
Combination a(1, 3);
Combination b(1, 4);
Combination c(3, 1);
you see that
a < b => true
b < c => true
but
a < c => false
If you can't order the elements, you can't use std::set. A std::unordered_set seems more suited for the task. You just need an operator== to compare for equality, which is trivial, and a hash function that returns the same value for elements that are considered identical. It could be as simple as adding m and n.
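A minimal sketch of that approach (the hash simply adds m and n, as suggested; the equality treats (m, n) and (n, m) as the same combination):

#include <cstddef>
#include <iostream>
#include <unordered_set>

struct Combination {
    int m;
    int n;
    Combination(const int m, const int n) : m(m), n(n) {}
};

// Order-insensitive equality: (m, n) and (n, m) are the same combination.
struct CombinationEqual {
    bool operator()(const Combination& a, const Combination& b) const {
        return (a.m == b.m && a.n == b.n) || (a.m == b.n && a.n == b.m);
    }
};

// Hash that is identical for (m, n) and (n, m): the sum of the members.
struct CombinationHash {
    std::size_t operator()(const Combination& c) const {
        return static_cast<std::size_t>(c.m + c.n);
    }
};

int main() {
    std::unordered_set<Combination, CombinationHash, CombinationEqual> c;
    for (short m = 0; m < 4; ++m)
        for (short n = 0; n < 4; ++n)
            if (n != m) c.emplace(m, n);
    std::cout << c.size() << std::endl; // 6
}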
¹ Well, maybe it could work, or not, or both; it's undefined behaviour.
Attached is the working code. The tricky part you were missing was a section of code that checks whether the swapped pair is already in the set before inserting. You were close! If you need a more thorough answer I will answer questions in the comments. Hope this helps!
#include <set>
#include <iostream>
using namespace std;
struct Combination {
int m;
int n;
Combination(const int m, const int n):m(m),n(n){}
};
const auto operator<(const auto & a, const auto & b) {
//explicitly "telling" that order should not matter:
if ( a.m == b.n && a.n == b.m ) return false;
//the case "a.m == b.m && a.n == b.n" will result in false here too:
return a.m == b.m ? a.n < b.n : a.m < b.m;
}
int main() {
set< Combination > c;
for ( short m = 0; m < 4; ++ m )
{
for ( short n = 0; n < 4; ++ n )
{
//Values are the same we do not add to the set
if(m == n){
continue;
}
else{
Combination s(n,m);
const bool is_in = c.find(s) != c.end();
if(is_in == true){
continue;
}
else{
cout << " M: " << m << " N: " << n << endl;
c.emplace( m, n);
}
}
}
}
cout << c.size() << endl; // 6
}
I want to know if this backtracking algorithm actually works.
In the text book Foundations of Algorithms, 5th edition, it is defined as follows:
Algorithm 5.4: The Backtracking Algorithm for the Sum-of-Subsets Problem
Problem: Given n positive integers (weights) and a positive integer W,
determine all combinations of the integers that sum up to W.
Inputs: positive integer n, sorted (nondecreasing order) array of
positive integers w indexed from 1 to n, and a positive integer
W.
Outputs: all combinations of the integers that sum to W.
void sum_of_subsets(index i,
int weight, int total) {
if (promising(i))
if (weight == W)
cout << include[1] through include [i];
else {
include[i + 1] = "yes"; // Include w[i + 1].
sum_of_subsets(i + 1, weight + w[i + 1], total - w[i + 1]);
include[i + 1] = "no"; // Do not include w[i + 1].
sum_of_subsets(i + 1, weight, total - w[i + 1]);
}
}
bool promising (index i) {
return (weight + total >= W) && (weight == W || weight + w[i + 1] <= W);
}
Following our usual convention, n, w, W, and include are not
inputs to our routines. If these variables were defined globally, the
top-level call to sum_of_subsets would be as follows:
sum_of_subsets(0, 0, total);
At the end of chapter 5, exercise 13 asks:
Use the Backtracking algorithm for the Sum-of-Subsets problem (Algorithm 5.4)
to find all combinations of the following numbers that sum to W = 52:
w1 = 2 w2 = 10 w3 = 13 w4 = 17 w5 = 22 w6 = 42
I've implemented this exact algorithm, accounting for arrays that start at 1 and it just does not work...
void sos(int i, int weight, int total) {
int yes = 1;
int no = 0;
if (promising(i, weight, total)) {
if (weight == W) {
for (int j = 0; j < arraySize; j++) {
std::cout << include[j] << " ";
}
std::cout << "\n";
}
else if(i < arraySize) {
include[i+1] = yes;
sos(i + 1, weight + w[i+1], total - w[i+1]);
include[i+1] = no;
sos(i + 1, weight, total - w[i+1]);
}
}
}
int promising(int i, int weight, int total) {
return (weight + total >= W) && (weight == W || weight + w[i+1] <= W);
}
I believe the problem is here:
sos(i + 1, weight, total - w[i+1]);
sum_of_subsets(i+1, weight, total-w[i+1]);
When you reach this line you are not backtracking correctly.
Is anyone able to identify a problem with this algorithm or actually code it to work?
I personally find the algorithm problematic. There is no bounds checking, it uses a lot of globals, and it assumes the array is indexed from 1. I don't think you can copy it verbatim; it's pseudocode for the actual implementation. In C++, arrays always start from 0, so you're likely to have problems when you try to do include[i+1] while only checking i < arraySize.
The algorithm also assumes you have a global variable called total, which is used by the function promising.
I have reworked the code a bit, putting it inside a class, and simplified it somewhat:
#include <iostream>
#include <vector>
using std::vector;

class Solution
{
private:
vector<int> w;
vector<int> include;
public:
Solution(vector<int> weights) : w(std::move(weights)),
include(w.size(), 0) {}
void sos(int i, int weight, int total) {
int yes = 1;
int no = 0;
int arraySize = include.size();
if (weight == total) {
for (int j = 0; j < arraySize; j++) {
if (include[j]) {
std::cout << w[j] << " ";
}
}
std::cout << "\n";
}
else if (i < arraySize)
{
include[i] = yes;
//Include this weight
sos(i + 1, weight + w[i], total);
include[i] = no;
//Exclude this weight
sos(i + 1, weight, total);
}
}
};
int main()
{
Solution solution({ 2, 10, 13, 17, 22, 42 });
solution.sos(0, 0, 52);
//prints: 10 42
// 13 17 22
}
So yes, as others pointed out, you stumbled over the 1-based array index.
That aside, I think you should ask the author for a partial refund of the money you paid for the book, because the logic of his code is overly complicated.
One good way not to run into bounds problems is to not use C++ (expecting hail of downvotes for this lol).
There are only 3 cases to test for:
The candidate value is greater than what is remaining. (busted)
The candidate value is exactly what is remaining.
The candidate value is less than what is remaining.
The promising function tries to express that, and then the result of that function is tested again in the main function sos.
But it could look as simple as this:
search :: [Int] -> Int -> [Int] -> [[Int]]
search (x1:xs) t path
  | x1 > t  = []
  | x1 == t = [x1 : path]
  | x1 < t  = search xs (t - x1) (x1 : path) ++ search xs t path
search [] 0 path = [path]
search [] _ _ = []
items = [2, 10, 13, 17, 22, 42] :: [Int]
target = 52 :: Int
search items target []
-- [[42,10],[22,17,13]]
Now, it is by no means impossible to achieve a similar safety net while writing C++ code. But it takes determination and a conscious decision on what you are willing to cope with and what not. And you need to be willing to type a few more lines to accomplish what the 10 lines of Haskell do.
First off, I was bothered by all the complexity of indexing and range checking in the original C++ code. If we look at our Haskell code (which works with lists),
it is confirmed that we do not need random access at all. We only ever look at the start of the remaining items. And we append a value to the path (in Haskell we prepend to the front for speed) and eventually we append a found combination to the result set. With that in mind, bothering with indices is kind of over the top.
Secondly, I rather like the way the search function looks - showing the 3 crucial tests without any noise surrounding them. My C++ version should strive to be as pretty.
Also, global variables are so 1980 - we won't have that. And tucking those "globals" into a class to hide them a bit is so 1995. We won't have that either.
And here it is! The "safer" C++ implementation. And prettier... um... well some of you might disagree ;)
#include <cstdint>
#include <vector>
#include <iostream>
using Items_t = std::vector<int32_t>;
using Result_t = std::vector<Items_t>;
// The C++ way of saying: deriving(Show)
template <class T>
std::ostream& operator <<(std::ostream& os, const std::vector<T>& value)
{
bool first = true;
os << "[";
for( const auto item : value)
{
if(first)
{
os << item;
first = false;
}
else
{
os << "," << item;
}
}
os << "]";
return os;
}
// So we can do easy switch statement instead of chain of ifs.
enum class Comp : int8_t
{ LT = -1
, EQ = 0
, GT = 1
};
static inline
auto compI32( int32_t left, int32_t right ) -> Comp
{
if(left == right) return Comp::EQ;
if(left < right) return Comp::LT;
return Comp::GT;
}
// So we can avoid index insanity and out of bounds problems.
template <class T>
struct VecRange
{
using Iter_t = typename std::vector<T>::const_iterator;
Iter_t current;
Iter_t end;
VecRange(const std::vector<T>& v)
: current{v.cbegin()}
, end{v.cend()}
{}
VecRange(Iter_t cur, Iter_t fin)
: current{cur}
, end{fin}
{}
static bool exhausted (const VecRange<T>&);
static VecRange<T> next(const VecRange<T>&);
};
template <class T>
bool VecRange<T>::exhausted(const VecRange<T>& range)
{
return range.current == range.end;
}
template <class T>
VecRange<T> VecRange<T>::next(const VecRange<T>& range)
{
if(range.current != range.end)
return VecRange<T>( range.current + 1, range.end );
return range;
}
using ItemsRange = VecRange<Items_t::value_type>;
static void search( const ItemsRange items, int32_t target, Items_t path, Result_t& result)
{
if(ItemsRange::exhausted(items))
{
if(0 == target)
{
result.push_back(path);
}
return;
}
switch(compI32(*items.current,target))
{
case Comp::GT:
return;
case Comp::EQ:
{
path.push_back(*items.current);
result.push_back(path);
}
return;
case Comp::LT:
{
auto path1 = path; // hope this makes a real copy...
path1.push_back(*items.current);
search(ItemsRange::next(items), target - *items.current, path1, result);
search(ItemsRange::next(items), target, path, result);
}
return;
}
}
int main(int argc, const char* argv[])
{
Items_t items{ 2, 10, 13, 17, 22, 42 };
Result_t result;
int32_t target = 52;
std::cout << "Input: " << items << std::endl;
std::cout << "Target: " << target << std::endl;
search(ItemsRange{items}, target, Items_t{}, result);
std::cout << "Output: " << result << std::endl;
return 0;
}
The code implements the algorithm correctly, except that you did not apply the one-based array logic in your output loop. Change:
for (int j = 0; j < arraySize; j++) {
std::cout << include[j] << " ";
}
to:
for (int j = 0; j < arraySize; j++) {
std::cout << include[j+1] << " ";
}
Depending on how you organised your code, make sure that promising is declared before sos uses it.
Output:
0 1 0 0 0 1
0 0 1 1 1 0
The algorithm works fine: the second and third arguments to the sos function act as a window in which the running sum must stay, and the promising function verifies against this window. Any value outside this window is either too small (even if all remaining values were added to it, it would still be less than the target) or too great (already overrunning the target). For example, with W = 52 and the weights above, a node where only 2 has been included and 10, 13, 17 and 22 have all been excluded is not promising: weight + total = 2 + 42 = 44 < 52, so no extension can reach 52. These two constraints are explained in the beginning of chapter 5.4 in the book.
At each index there are two possible choices: either include the value in the sum, or don't. The value at include[i+1] represents this choice, and both are attempted. When there is a match deep down such a recursive attempt, all these choices (0 or 1) will be output. Otherwise they are just ignored and switched to the opposite choice in a second attempt.
The following code compiles and executes correctly, but every time I run it, my R session gets a fatal error shortly after it finishes. I'm running R version 3.3.2 and Rtools 3.3.
Is there anything I missed? How can I trace what's causing the crash?
#include<Rcpp.h>
using namespace Rcpp;
NumericMatrix dupCheckRcpp(NumericMatrix x) {
int nrow, ncol;
int i, j, k, m, n;
bool flag;
NumericMatrix dupMat(300,ncol);
n = 0;
nrow = 0; ncol = 0;
nrow = x.nrow();
ncol = x.ncol();
for (i = 0; i < nrow - 1 ; ++i) {
for (j = i + 1; j < nrow; ++j) {
flag = TRUE;
for (k = 0; k < ncol; ++k) {
if (x(i,k) != x(j,k)) {
flag = FALSE;
break;
}
}
if (flag == TRUE) {
for (m = 0; m < ncol; ++m) {
dupMat(n,m) = x(i,m);
}
n = n + 1;
}
}
}
return dupMat;
}
There are a few issues that are problematic with your code. We begin by looking at how the result matrix is defined, then at the use of bool, and finally detail the undefined behavior (UB) that results from the matrix subsetting.
The definition of:
NumericMatrix dupMat(300, ncol);
has two issues:
It is placed before ncol has been initialized
It assumes the result matrix needs exactly 300 rows, regardless of how many duplicated rows x actually contains.
Move the instantiation of the dupMat till after ncol and nrow are initialized. Alternatively, move it until after you know the amount of duplicate rows.
nrow = x.nrow();
ncol = x.ncol();
Rcpp::NumericMatrix dupMat(nrow, ncol);
In addition, bool values within C++ are written in lower case.
That is, use true in place of TRUE and false in place of FALSE while setting the values of flag variable.
There are three ways to access individual elements in a NumericMatrix; however, we'll only focus on the two of them that use (i, j) indices.
(i,j): Accessing elements in this manner forgoes a bounds check and the exception that would warn you when the point is out of range. In essence, this access method was causing UB, since n = n + 1 could easily go beyond the row index. The UB probably caused havoc at a later point, when RStudio or R ran a background task, causing the crash.
.at(i,j): This is the preferred method as it provides a bounds check and throws a nifty exception e.g.
Error in dupCheckRcpp(a) : index out of bounds
Which is triggered by the following code snippet:
if (flag == true) {
for (m = 0; m < ncol; ++m) {
Rcpp::Rcout << "dupMat (" << n << ","<< m << ")" << std::endl <<
"x (" << i << ","<< m << ")" << std::endl;
dupMat.at(n, m) = x.at(i, m);
}
n = n + 1; // able to exceed nrow.
}
The main reason n = n + 1 can hit the upper bound is that it is incremented inside the inner for loop, once for every duplicated pair that is found, so it can grow past the number of rows reserved for dupMat.
Without knowing your intent behind the duplicate check, beyond guessing that it flags rows that appear more than once in the matrix, I'll stop with the fixes above.
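For reference, a minimal sketch that applies those fixes (dupMat sized after nrow/ncol are known, .at() accessors, lower-case bools), under the added assumption that each duplicated row only needs to be recorded once:

#include <Rcpp.h>

// [[Rcpp::export]]
Rcpp::NumericMatrix dupCheckRcpp(Rcpp::NumericMatrix x) {
    int nrow = x.nrow();
    int ncol = x.ncol();
    // Sized only after nrow/ncol are known; nrow rows is enough when each
    // duplicated row is stored a single time.
    Rcpp::NumericMatrix dupMat(nrow, ncol);
    int n = 0;
    for (int i = 0; i < nrow - 1; ++i) {
        for (int j = i + 1; j < nrow; ++j) {
            bool flag = true;
            for (int k = 0; k < ncol; ++k) {
                if (x.at(i, k) != x.at(j, k)) { flag = false; break; }
            }
            if (flag) {
                for (int m = 0; m < ncol; ++m) dupMat.at(n, m) = x.at(i, m);
                ++n;
                break;   // record row i once, even if it is duplicated several times
            }
        }
    }
    return dupMat;
}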
Is there a good and fast way in C/C++ to test whether multiple variables contain either all positive or all negative values?
Say there are 5 variables to test:
Variant 1
int test(int a[5]) {
if (a[0] < 0 && a[1] < 0 && a[2] < 0 && a[3] < 0 && a[4] < 0) {
return -1;
} else if (a[0] > 0 && a[1] > 0 && a[2] > 0 && a[3] > 0 && a[4] > 0) {
return 1;
} else {
return 0;
}
}
Variant 2
int test(int a[5]) {
unsigned int mask = 0;
mask |= (a[0] >> numeric_limits<int>::digits) << 1;
mask |= (a[1] >> numeric_limits<int>::digits) << 2;
mask |= (a[2] >> numeric_limits<int>::digits) << 3;
mask |= (a[3] >> numeric_limits<int>::digits) << 4;
mask |= (a[4] >> numeric_limits<int>::digits) << 5;
if (mask == 0) {
return 1;
} else if (mask == (1 << 5) - 1) {
return -1;
} else {
return 0;
}
}
Variant 2a
int test(int a[5]) {
unsigned int mask = 0;
for (int i = 0; i < 5; i++) {
mask <<= 1;
mask |= a[i] >> numeric_limits<int>::digits;
}
if (mask == 0) {
return 1;
} else if (mask == (1 << 5) - 1) {
return -1;
} else {
return 0;
}
}
Which version should I prefer? Is there any advantage to using variant 2/2a over 1? Or is there a better/faster/cleaner way?
I think your question and what you're looking for don't agree. You asked how to detect whether they're signed or unsigned, but it looks like you mean how to test whether they're positive or negative.
A quick test for all negative:
if ((a[0]&a[1]&a[2]&a[3]&a[4])<0)
and all non-negative (>=0):
if ((a[0]|a[1]|a[2]|a[3]|a[4])>=0)
I can't think of a good way to test that they're all strictly positive (not zero) right off, but there should be one.
Note that these tests are correct and portable for two's complement systems (anything in the real world you would care about), but they're slightly wrong for ones' complement or sign-magnitude. They could be fixed if you really care.
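Put together in the shape of the original test function, this might look like the sketch below (as noted above, the positive branch really means "all non-negative"):

// Sketch combining the two bitwise tests above (two's complement assumed).
int test(int a[5]) {
    int all_and = a[0] & a[1] & a[2] & a[3] & a[4];
    int all_or  = a[0] | a[1] | a[2] | a[3] | a[4];
    if (all_and < 0) return -1;   // every sign bit set -> all negative
    if (all_or >= 0) return 1;    // no sign bit set    -> all non-negative
    return 0;                     // mixed signs
}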
I guess you mean negative/positive; (un)signed refers to whether a sign exists at all. This one works for any iterable range (and it counts 0 as positive):
template <class T>
bool allpos(const T start, const T end) {
T it;
for (it = start; it != end; it++) {
if (*it < 0) return false;
}
return true;
}
// usage
int a[5] = {-5, 3, 1, 0, 4};
bool ispos = allpos(a, a + 5);
Note: This is a good and fast way
This may not be the absolutely extremely superduperfastest way to do it, but it certainly is readable and really fast. Optimizing this is just not worth it.
Variant 1 is the only readable one.
However, you could make it nicer using a loop:
int test(int *a, int n) {
int neg = 0;
for(int i = 0; i < n; i++) {
if(a[i] < 0) neg++;
}
if(neg == 0) return 1;
else if(neg == n) return -1;
else return 0;
}
I agree with previous posters that loops are simpler. The following solution combines Nightcracker's template and ThiefMaster's full solution, with early-outing if a sign change is detected while looping over the variables. And it works for floating point values.
template<typename T>
int testConsistentSigns(const T* i_begin, const T* i_end)
{
bool is_positive = !(*i_begin < 0);
for(const T* it = i_begin + 1; it < i_end; ++it)
{
if((*it < 0) == is_positive) // the sign differs from the first element
return 0;
}
if(is_positive)
return 1;
return -1;
}
In terms of speed, I suggest you profile each of your example in turn to discover which is the fastest on your particular platform.
In terms of ease of understanding, I'd say that the first example is the most obvious, though that's just my opinion.
Of course, the first version is a maintenance nightmare if you have more than 5 variables. The second and third variants are better for this, but obviously have a limit of 32 variables. To make them fully flexible, I would suggest keeping counters of the number of positive and negative variables in a loop. After the end of the loop, just check whether one or the other counter is zero.
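A sketch of that counter-based variant (treating zero as neither positive nor negative):

int test(const int* a, int n) {
    int neg = 0, pos = 0;
    for (int i = 0; i < n; ++i) {
        if (a[i] < 0) ++neg;
        else if (a[i] > 0) ++pos;
    }
    if (neg == 0 && pos == n) return 1;    // all strictly positive
    if (pos == 0 && neg == n) return -1;   // all strictly negative
    return 0;
}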
First off, create a method/procedure. That'll boost readability by a whole lot (no matter how you implement it, it'll be cleaner than all the options above).
Second, I think that small helpers like:
bool isNeg(int x) { return x < 0; }
bool isPos(int x) { return x > 0; }
are cleaner than using bit masks, so I'll go with option 1; when it comes to speed, let the compiler work that out for you in such low-level cases.
The final code should look something like:
int test(int a[5]) {
    bool allNeg = true;
    bool allPos = true;
    for (int i = 0; i < 5; i++) {
        if (isNeg(a[i])) allPos = false;
        if (isPos(a[i])) allNeg = false;
    }
    if (allNeg) return -1;
    if (allPos) return 1;
    return 0;
}
You could find the maximum element; if it is negative, then all elements are negative:
template<typename T>
bool all_negative( const T* first, const T* last )
{
const T* max_el = std::max_element( first, last );
if ( *max_el < T(0) ) return true;
else return false;
}
You could use boost::minmax_element to find if all elements are negative/positive in one loop:
template<typename T>
int positive_negative( const T* first, const T* last )
{
std::pair<const T*,const T*> min_max_el = boost::minmax_element( first, last );
if ( *min_max_el.second < T(0) ) return -1;
else if ( *min_max_el.first > T(0) ) return 1;
else return 0;
}
If the sequence is non-empty, the function minmax_element performs at most 3 * (last - first - 1) / 2 comparisons.
If you only need to know less/greater than zero one at a time, or can be content with < and >= you can do it easily with find_if like this:
#include <algorithm>
#include <functional>
#include <iostream>
template <class Iter>
int all_neg(Iter begin, Iter end)
{
return std::find_if(begin, end, std::bind2nd(std::greater_equal<int>(), 0)) == end;
}
int main()
{
int a1[5] = { 1, 2, 3, 4, 5 };
int a2[5] = { -1, 2, 3, 4, 5 };
int a3[5] = { -1, -2, -3, -4, -5 };
int a4[5] = { 0 };
std::cout << all_neg(a1, a1 + 5) << ":"
<< all_neg(a2, a2 + 5) << ":"
<< all_neg(a3, a3 + 5) << ":"
<< all_neg(a4, a4 + 5) << std::endl;
}
You can also use a more complicated predicate that keeps track of any pos/neg to answer your original question if you really need that level of detail.
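For example, a sketch that answers the original -1/0/1 question with two find_if passes in the same style (rather than a single stateful predicate; it reuses the includes from the snippet above):

template <class Iter>
int sign_test(Iter begin, Iter end)
{
    // "no element >= 0" means all are strictly negative,
    // "no element <= 0" means all are strictly positive.
    bool any_nonneg = std::find_if(begin, end,
        std::bind2nd(std::greater_equal<int>(), 0)) != end;
    bool any_nonpos = std::find_if(begin, end,
        std::bind2nd(std::less_equal<int>(), 0)) != end;
    if (!any_nonneg) return -1;
    if (!any_nonpos) return 1;
    return 0;
}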