Largest Number < x? - C++

In C++, let's say I have a number x of type T which can be an integer or floating point type. I want to find the largest number y of type T for which y < x holds. The solution needs to be templated to work transparently with both integers and floating point numbers. You may ignore the edge case where x is already the smallest number that can be represented in a T.
POSSIBLE USE CASE: This question was marked as too localized, so I would like to provide a use case that I think is more general. Note that I am not the original author of the question.
Consider this structure:
struct lower_bound {
    lower_bound(double value, bool open) : value(open ? value + 0.1 : value) {}
    double value;
    bool operator()(double x) { return x >= value; }
};
This class simulates a lower bound which can either be open or closed. Of course, in real (pun intended) life we cannot do this: finding the smallest element of S that is strictly greater than a given value is impossible (or at least quite tricky) when S is the set of all real numbers.
However, when S is the set of floating point numbers, this is a perfectly valid principle, since we are dealing with an essentially countable (indeed finite) set; there, the distinction between an open and a closed bound disappears. That is, >= can be defined in terms of > as done in the lower_bound class.
For code simplicity I used +0.1 to simulate an open lower bound. Of course, 0.1 is a crude value: there may be values z such that value < z <= value + 0.1, or value + 0.1 == value in a floating point representation. Hence Brett Hale's answer below is very useful :)
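For illustration, here is the same use case written with std::nextafter instead of the crude +0.1. This is only a sketch, not the original author's code:
#include <cmath>
#include <limits>

// Sketch: an open lower bound implemented via the next representable double above value.
struct lower_bound {
    lower_bound(double v, bool open)
        : value(open ? std::nextafter(v, std::numeric_limits<double>::infinity()) : v) {}
    double value;
    bool operator()(double x) const { return x >= value; }   // x > v iff x >= nextafter(v, +inf)
};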
You may think about another simpler solution:
struct lower_bound {
    lower_bound(double value, bool open) : open(open), value(value) {}
    bool open;
    double value;
    bool operator()(double x) { return open ? x > value : x >= value; }
};
However, this is less efficient: sizeof(lower_bound) is larger, and operator() has to evaluate a more complicated expression. The first implementation is really efficient, and can also be implemented simply as a double instead of a structure. Technically, the only reason to use the second implementation is if you assume a double is continuous, which it is not, and I guess it will not be anywhere in the foreseeable future.
I hope I have created and explained a valid use case, and that I have not offended the original author.

If you have C++11, you could use std::nextafter in <cmath>:
if (std::is_integral<T>::value)
    return (x - 1);
else
    return std::nextafter(x, -std::numeric_limits<T>::infinity());
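Wrapped into a complete helper, this might look like the sketch below. The name largest_below is illustrative, and if constexpr requires C++17; with plain C++11 you would use tag dispatch or std::enable_if instead:
#include <cmath>
#include <limits>
#include <type_traits>

// Sketch only: largest representable value of T that is strictly less than x.
// Assumes x is not already the minimum representable value of T.
template <typename T>
T largest_below(T x)
{
    if constexpr (std::is_integral<T>::value)
        return x - 1;                                                     // integers: step down by one
    else
        return std::nextafter(x, -std::numeric_limits<T>::infinity());   // floats: previous representable value
}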

Related

C++: Create integer vector of infinities

I'm working on an algorithm and I need to initialize the vector of ints:
std::vector<int> subs(10)
of fixed length with values:
{-inf, +inf, +inf …. }
I have read that it is possible to use INT_MAX, but that is not quite correct, because the elements of my vector are supposed to be greater than any possible int value.
I liked the overloaded comparison operator method from this answer, but how do you initialize the vector with infinity-type class objects if the elements are supposed to be ints?
Or maybe you know any better solution?
Thank you.
The solution depends on the assumptions your algorithm (or the implementation of your algorithm) has:
You could increase the element size beyond int (e.g. if your sizeof(int) is 4, use int64_t), and initialize to (int64_t) 1 + std::numeric_limits<int>::max() (and similarly for the negative values). But perhaps your algorithm assumes that you can't "exceed infinity" by adding or multiplying by positive numbers?
You could use an std::variant like other answers suggest, selecting between an int and infinity; but perhaps your algorithm assumes your elements behave like numbers?
You could use a ratio-based "number" class, ensuring it will not get non-integral values except infinity.
You could have your algorithm special-case the maximum and minimum integers.
You could use floats or doubles, which support -/+ infinity, and restrict them to integral values (see the sketch below).
etc.
So, again, it really just depends and there's no one-size-fits-all solution.
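For illustration, here is a minimal sketch of the floats/doubles option from the list above (a sketch only, not a universal recommendation):
#include <limits>
#include <vector>

int main()
{
    const double inf = std::numeric_limits<double>::infinity();

    // {-inf, +inf, +inf, ...} as in the question, but stored as doubles
    std::vector<double> subs(10, inf);
    subs[0] = -inf;

    // finite ints compare as expected against the sentinels
    bool a = 42 < inf;        // true
    bool b = -inf < 42;       // true
    return (a && b) ? 0 : 1;
}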
As already said in the comments, you can't have an infinity value stored in an int: all values of this type are well-defined and finite.
If you are OK with a vector of something that works as an infinity for ints, then consider using a type like this:
struct infinite
{ };

bool operator<(int, infinite)
{
    return true;
}
You can use a variant (for example, boost::variant), which supports double dispatch: it stores either an int or an infinity type (which should record the sign of the infinity, for example in a bool), and you then implement the comparison operators through a visitor.
But I think it would be simpler to just use a double instead of an int, and whenever you take out a value that is not infinity, convert it to int. If performance is not that great of an issue, it will work fine (probably still faster than a variant). If you need great performance, then just use INT_MAX and be done with it.
You are already aware of the idea of an "infinite" type, but that implementation could only contain infinite values. There's another related idea:
#include <cassert>

struct extended_int {
    enum Type { NEGINF, FINITE, POSINF };
    Type type;
    int finiteValue;   // Only meaningful when type == FINITE

    bool operator<(extended_int rhs) const {
        if (this->type == POSINF) return false;      // +inf is not less than anything
        if (rhs.type == NEGINF) return false;        // nothing is less than -inf
        if (this->type == NEGINF) return true;       // -inf is less than everything else
        if (rhs.type == POSINF) return true;         // any finite value is less than +inf
        assert(this->type == FINITE && rhs.type == FINITE);
        return this->finiteValue < rhs.finiteValue;
    }

    // Implicitly converting ctor
    constexpr extended_int(int value) : type(FINITE), finiteValue(value) { }

    // And the two infinities (these need out-of-class definitions, e.g.
    //   const extended_int extended_int::posinf(extended_int::POSINF); )
    static const extended_int posinf;
    static const extended_int neginf;

private:
    constexpr extended_int(Type t) : type(t), finiteValue(0) { }
};
You now have extended_int(5) < extended_int(6), but also extended_int(5) < extended_int::posinf.
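A hypothetical usage for the original question, assuming the out-of-class definitions of posinf and neginf mentioned above:
#include <vector>

int main()
{
    std::vector<extended_int> subs(10, extended_int::posinf);   // start with all +inf
    subs[0] = extended_int::neginf;                             // {-inf, +inf, +inf, ...}

    bool a = extended_int(5) < extended_int(6);                 // true
    bool b = extended_int(5) < extended_int::posinf;            // true
    return (a && b) ? 0 : 1;
}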

Why is a true statement false?

I have a class vector2 which is inherited from a "raw vector":
struct vector2raw {
real_t x, y;
};
struct vector2 : public vector2raw {
vector2() { null(); }
vector2(real_t x, real_t y) { this->x = x; this->y = y; }
vector2(const vector2 &v) { x = v.x; y = v.y; }
and so on
Now I want to compare two numbers: v.y = 4 from v = (5.41, 4), and min.y = 4 from min = (4, 4). This is the only strange case, when I compare two equal numbers; other cases are executed correctly. I always get false for (4 >= 4) (v.y >= min.y). What can be the problem?
real_t is defined as double.
UPD: this is written in C++
Apparently (you're not giving a reproducible example) you're comparing floating point numbers with ==.
That's an ungood idea unless those numbers happen to be integral values, and for beginners it's an ungood idea in general.
Two floating point values can appear to be equal, e.g. they produce the same output when you don't request additional decimals in the presentation, while in reality they differ in some otherwise very insignificant digit.
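A minimal demonstration of the effect (the exact digits depend on the platform, but the result of the == comparison is typical):
#include <cstdio>

int main()
{
    double a = 0.0;
    for (int i = 0; i < 40; ++i)
        a += 0.1;                                   // "should" be 4.0, but accumulates rounding error

    double b = 4.0;
    std::printf("a == b : %d\n", a == b);           // typically prints 0
    std::printf("a = %.17g\nb = %.17g\n", a, b);    // extra digits reveal the difference
}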
In the old days, beginners who encountered this problem used to be referred to "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (or thereabouts, title from fallible memory).
In the last few years I have been criticized for giving that reference, because real technical stuff and so on is, allegedly, too hard for today's students. People have suggested more easily digested alternatives, sort of like Wikipedia's simple edition, but I can't remember any of them.

C++ function to tell whether a given function is injective

This might seem like a weird question, but how would I create a C++ function that tells whether a given C++ function, which takes a parameter of type X and returns a value of type X, is injective over the space of machine representations of those values, i.e. never returns the same value for two different arguments passed to it?
(For those of you who weren't Math majors, maybe check out this page if you're still confused about the definition of injective: http://en.wikipedia.org/wiki/Injective_function)
For instance, the function
double square(double x) { return x*x; }
is not injective, since square(2.0) == square(-2.0),
but the function
double cube(double x) { return x*x*x; }
is, obviously.
The goal is to create a function
template <typename T>
bool is_injective(T (*foo)(T))
{
    /* Create a set std::set<T> retVals;
       For each element x of type T:
           if foo(x) is in retVals, return false;
           if foo(x) is not in retVals, add it to retVals;
       Return true if we made it through the above loop.
    */
}
I think I can implement that procedure except that I'm not sure how to iterate through every element of type T. How do I accomplish that?
Also, what problems might arise in trying to create such a function?
You need to test every possible bit pattern of a T, i.e. 2^(8*sizeof(T)) inputs.
There was a widely circulated blog post about this topic recently: There are Only Four Billion Floats - So Test Them All!
In that post, the author was able to test all 32-bit floats in 90 seconds. Turns out that would take a few centuries for 64-bit values.
So this is only possible with small input types.
Multiple inputs, structs, or anything with pointers are going to get impossible fast.
BTW, even with 32-bit values you will probably exhaust system memory trying to store all the output values in a std::set, because std::set uses a lot of extra memory for pointers. Instead, you should use a bitmap that's big enough to hold all 2^(8*sizeof(T)) possible output values. The specialized std::vector<bool> should work. That will take 2^(8*sizeof(T)) / 8 bytes of memory.
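A sketch of that bitmap approach for a 32-bit input type (float here): it assumes a 64-bit build, needs roughly 512 MB for the bitmap, and will take a while to run:
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch only: exhaustively test a float -> float function for injectivity.
bool is_injective_32(float (*foo)(float))
{
    std::vector<bool> seen(1ull << 32, false);       // one bit per possible output bit pattern
    for (std::uint64_t i = 0; i <= 0xFFFFFFFFull; ++i)
    {
        std::uint32_t in_bits = static_cast<std::uint32_t>(i);
        float x;
        std::memcpy(&x, &in_bits, sizeof x);         // iterate over every input bit pattern

        float y = foo(x);
        std::uint32_t out_bits;
        std::memcpy(&out_bits, &y, sizeof out_bits); // map the output back to a bit pattern

        if (seen[out_bits])
            return false;                            // same output produced twice: not injective
        seen[out_bits] = true;
    }
    return true;
}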
Maybe what you need is std::numeric_limits. To store the results, you may use an unordered_map (from std if you're using C++11, or from boost if you're not).
You can check the limits of the data types, maybe something like this might work (it's a dumb solution, but it may get you started):
#include <limits>
#include <unordered_map>

template <typename T>
bool is_injective(T (*foo)(T))
{
    std::unordered_map<T, T> hash_table;           // maps each output to the input that produced it
    T min = std::numeric_limits<T>::min();
    T max = std::numeric_limits<T>::max();
    for (T it = min; ; ++it)
    {
        // key by the output: emplace fails if this output was already produced
        auto result = hash_table.emplace(foo(it), it);
        if (result.second == false)
        {
            return false;
        }
        if (it == max)                             // stop before wrapping around
            break;
    }
    return true;
}
Of course, you may want to restrict the allowed data types. Otherwise, checking floats, doubles or long integers will get very intensive.
but the function
double cube(double x) { return x*x*x; }
is, obviously.
It is obviously not. There are 2^53 more double values representable in [0..0.5) than in [0..0.125), yet cube maps [0..0.5) into [0..0.125), so by the pigeonhole principle some distinct inputs must produce the same rounded output.
As far as I know, you cannot iterate all possible values of a type in C++.
But, even if you could, that approach would get you nowhere. If your type is a 64 bit integer, you might have to iterate through 2^64 values and keep track of the result for all of them, which is not possible.
Like other people said, there is no solution for a generic type X.

C++ NTL (by Victor Shoup): How to represent infinity

I would like to know how would one represent infinity if there is no built-in function for you to do so.
I know that if we are using float or double, we can use std::numeric_limits<T>::infinity() from <limits>. But if I need to use int or, in the case of NTL, ZZ, how should I represent infinity? Should I write something new? How is it represented in C++?
Edit: I'm posing this question because I would like to implement an addition algorithm for point on an elliptic curve. So, I'll need infinity to represent the point of infinity. I was wondering if I'll be better off using projective coordinates and have [0:1:0] to represent the point at infinity, but wanted to explore the infinity in int or ZZ option first.
In general, if you are running into infinity on a finite precision machine then you are not addressing the problem at hand correctly with your computational approach. You should either deal with the infinity analytically beforehand or find a means to appropriately avoid it in finite precision. For instance, if you had to deal with f(x) = sin(x)/x, you probably wouldn't want to let your code evaluate this in finite precision at x = 0. Instead you would check whether x is 0 and then return f(0) = 1.0.
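A tiny sketch of that guard (the function name sinc is just illustrative):
#include <cmath>

double sinc(double x)
{
    if (x == 0.0)
        return 1.0;              // analytic limit of sin(x)/x as x -> 0
    return std::sin(x) / x;      // safe to evaluate everywhere else
}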
What about just a symbolic representation such that it "acts", in a general sense, as infinity would?
You can certainly do something like that. For most computational problems that won't get you anywhere useful, though. A simple way to approach it would be to create your own data type and overload all of the operators to handle infinity appropriately. Not all infinities are the same, though, so you would need to deal with that issue. For example, you might define a customized float to be something like
class MyFloat
{
public:
    MyFloat(float a):
        m_val(a),
        m_isInf(false),
        m_orderInf(0)
    {}

    bool  isInf()    const { return m_isInf; }
    int   orderInf() const { return m_orderInf; }
    float value()    const { return m_val; }

    // define custom operators
    MyFloat & operator+= (MyFloat const & rhs)
    {
        if (rhs.isInf() || m_isInf)
        {
            m_orderInf = m_orderInf > rhs.orderInf() ? m_orderInf : rhs.orderInf();
            m_isInf = true;
        }
        else
        {
            m_val += rhs.value();
        }
        return *this;
    }

    // other operators you would need to define
    MyFloat & operator/= (MyFloat const & rhs);
    MyFloat & operator*= (MyFloat const & rhs);

private:
    float m_val;
    bool  m_isInf;
    int   m_orderInf;
};
NOTE: You will need to give a lot of thought as to how to treat both zeros and infinities. The above code is not well thought out, but I hope it gives you something to think about.

Floating point keys in std::map

The following code is supposed to find the key 3.0 in a std::map, where it does exist. But due to floating point precision it won't be found.
map<double, double> mymap;
mymap[3.0] = 1.0;
double t = 0.0;
for(int i = 0; i < 31; i++)
{
t += 0.1;
bool contains = (mymap.count(t) > 0);
}
In the above example, contains will always be false.
My current workaround is to compute t as 0.1 * i instead of accumulating additions of 0.1, like this:
for(int i = 0; i < 31; i++)
{
t = 0.1 * i;
bool contains = (mymap.count(t) > 0);
}
Now the question:
Is there a way to introduce a fuzzyCompare to the std::map if I use double keys?
The common solution for floating point comparison is usually something like fabs(a - b) < epsilon. But I don't see a straightforward way to do this with std::map.
Do I really have to wrap the double in a class and overload operator<(...) to implement this functionality?
So there are a few issues with using doubles as keys in a std::map.
First, NaN is a problem: every comparison involving NaN is false, which breaks the strict weak ordering the container requires. If there is any chance of NaN being inserted, use this:
#include <cmath>   // std::isnan

struct safe_double_less {
    bool operator()(double left, double right) const {
        bool leftNaN = std::isnan(left);
        bool rightNaN = std::isnan(right);
        if (leftNaN != rightNaN)
            return leftNaN < rightNaN;
        return left < right;
    }
};
but that may be overly paranoid. Do not, I repeat do not, include an epsilon threshold in the comparison operator you pass to a std::set or the like: this will violate the ordering requirements of the container and result in undefined behavior.
(I placed NaN as greater than all doubles, including +inf, in my ordering, for no good reason. Less than all doubles would also work).
So either use the default operator<, or the above safe_double_less, or something similar.
Next, I would advise using a std::multimap or std::multiset, because you should be expecting multiple values for each lookup. You might as well make handling multiple matches an everyday thing instead of a corner case, to increase the test coverage of your code. (I would rarely recommend these containers otherwise.) Plus this blocks operator[], which is not advisable with floating point keys.
The point where you want to use an epsilon is when you query the container. Instead of using the direct interface, create a helper function like this:
// works on both `const` and non-`const` associative containers:
template<class Container>
auto my_equal_range( Container&& container, double target, double epsilon = 0.00001 )
-> decltype( container.equal_range(target) )
{
auto lower = container.lower_bound( target-epsilon );
auto upper = container.upper_bound( target+epsilon );
return std::make_pair(lower, upper);
}
which works on both std::map and std::set (and multi versions).
(In a more modern code base, I'd expect a range<?> object that is a better thing to return from an equal_range function. But for now, I'll make it compatible with equal_range).
This finds a range of things whose keys are "sufficiently close" to the one you are asking for, while the container maintains its ordering guarantees internally and doesn't execute undefined behavior.
To test for existence of a key, do this:
template<typename Container>
bool key_exists( Container const& container, double target, double epsilon = 0.00001 ) {
auto range = my_equal_range(container, target, epsilon);
return range.first != range.second;
}
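For example, the lookup from the question could then be written like this (a usage sketch of the helpers above, with the default epsilon):
#include <cstdio>
#include <map>

int main()
{
    std::map<double, double> mymap;
    mymap[3.0] = 1.0;

    double t = 0.0;
    for (int i = 0; i < 31; ++i)
    {
        t += 0.1;                                         // accumulates rounding error
        if (key_exists(mymap, t))                         // fuzzy lookup via my_equal_range
            std::printf("found a key near %.17g\n", t);   // should fire once, when t is about 3.0
    }
}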
And if you want to delete or replace entries, you should deal with the possibility that more than one entry may match.
The shorter answer is "don't use floating point values as keys for std::set and std::map", because it is a bit of a hassle.
If you do use floating point keys for std::set or std::map, almost certainly never do a .find or a [] on them, as that is highly likely to be a source of bugs. You can use them for an automatically sorted collection of stuff, so long as exact order doesn't matter (i.e., it doesn't matter whether one particular 1.0 lands ahead of, behind, or exactly on the same spot as another 1.0). Even then, I'd go with a multimap/multiset, as relying on collisions, or the lack thereof, is not something I'd want to depend on.
Reasoning about the exact value of IEEE floating point values is difficult, and fragility of code relying on it is common.
Here's a simplified example of how using soft-compare (aka epsilon or almost equal) can lead to problems.
Let epsilon = 2 for simplicity. Put 1 and 4 into your map. It now might look like this:
1
\
4
So 1 is the tree root.
Now put in the numbers 2, 3, 4 in that order. Each will replace the root, because it compares equal to it. So then you have
4
\
4
which is already broken. (Assume no attempt to rebalance the tree is made.) We can keep going with 5, 6, 7:
7
\
4
and this is even more broken, because now if we ask whether 4 is in there, it will say "no", and if we ask for an iterator for values less than 7, it won't include 4.
Though I must say that I've used maps based on this flawed fuzzy-compare operator numerous times in the past, and whenever I dug up a bug, it was never due to this. That is because the datasets in my application areas never actually got around to stress-testing this problem.
As Naszta says, you can implement your own comparison function. What he leaves out is the key to making it work: you must make sure that the function always returns false for any pair of values that are within your tolerance for equivalence.
return (std::fabs(left - right) > epsilon) && (left < right);
Edit: as pointed out in many comments to this answer and others, this can turn out badly if the values you feed it are arbitrarily distributed, because you can't guarantee that !(a < b) and !(b < c) imply !(a < c). This is not a problem in the question as asked, because the numbers in question are clustered around 0.1 increments; as long as your epsilon is large enough to account for all possible rounding errors but less than 0.05, it will be reliable. It is vitally important that the keys of the map are never closer than 2*epsilon apart.
You could implement your own comparison function.
#include <cmath>        // std::fabs
#include <functional>
#include <map>

// Note: std::binary_function is deprecated in C++11 and removed in C++17.
class own_double_less : public std::binary_function<double, double, bool>
{
public:
    own_double_less(double arg_ = 1e-7) : epsilon(arg_) {}
    bool operator()(const double &left, const double &right) const
    {
        // you can choose another way to make the decision
        // (the original version is: return left < right;)
        return (std::fabs(left - right) > epsilon) && (left < right);
    }
    double epsilon;
};

// your map:
std::map<double, double, own_double_less> mymap;
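With that comparator in place, the lookup from the question should now succeed, assuming an epsilon larger than the accumulated rounding error (as the default 1e-7 is here):
#include <cstdio>

int main()
{
    mymap[3.0] = 1.0;                                     // the mymap declared above

    double t = 0.0;
    for (int i = 0; i < 31; ++i)
    {
        t += 0.1;
        if (mymap.count(t) > 0)                           // fuzzy match via own_double_less
            std::printf("found a key near %.17g\n", t);   // fires once, when t is about 3.0
    }
}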
Updated: see Item 40 in Effective STL!
Updated based on suggestions.
Using doubles as keys is not useful. As soon as you do any arithmetic on the keys, you are not sure what exact values they have and hence cannot use them for indexing the map. The only sensible usage would be keys that are constants.