C++ NTL (by Victor Shoup): How to represent infinity - c++

I would like to know how one would represent infinity if there is no built-in way to do so.
I know that if we are using float or double, we can use infinity() with #include <limits>. But if I need to use int or, in the case of NTL, ZZ, how should I represent infinity? Should I write something new? How is it represented in C++?
Edit: I'm posing this question because I would like to implement an addition algorithm for points on an elliptic curve, so I'll need infinity to represent the point at infinity. I was wondering whether I'd be better off using projective coordinates, with [0:1:0] representing the point at infinity, but wanted to explore the infinity-in-int-or-ZZ option first.

In general, if you are running into infinity on a finite-precision machine, then you are not addressing the problem at hand correctly with your computational approach. You should either deal with the infinity analytically beforehand, or find a means to appropriately avoid it in finite precision. For instance, if you had to deal with f(x) = sin(x)/x, you probably wouldn't want to let your code evaluate this in finite precision at x = 0. Instead you would check whether x is 0 and then return f(0) = 1.0.
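As a minimal sketch of that guard (the function name sinc is just illustrative):

```cpp
#include <cmath>

// Evaluate f(x) = sin(x)/x, handling the removable singularity at x = 0
// analytically instead of letting finite precision divide by zero.
double sinc(double x) {
    if (x == 0.0) return 1.0;   // lim x->0 sin(x)/x = 1
    return std::sin(x) / x;
}
```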

What about just a symbolic representation such that it "acts", in a general sense, as infinity would?
You can certainly do something like that, though for most computational problems it won't get you anywhere useful. A simple approach would be to create your own data type and override all of the operators to handle infinity appropriately. Not all infinities are the same, though, so you would need to deal with that issue. For example, you might define a customized float to be something like
class MyFloat
{
public:
    MyFloat(float a):
        m_val(a),
        m_isInf(false),
        m_orderInf(0)
    {}

    // accessors must be const so they can be called through a const reference
    bool isInf() const { return m_isInf; }
    int orderInf() const { return m_orderInf; }
    float value() const { return m_val; }

    // define custom operators
    MyFloat & operator+= (MyFloat const & rhs)
    {
        if(rhs.isInf() || m_isInf)
        {
            // inf + inf: keep the higher "order" of infinity
            m_orderInf = m_orderInf > rhs.orderInf() ? m_orderInf : rhs.orderInf();
            m_isInf = true;
        }
        else
        {
            m_val += rhs.value();
        }
        return *this;
    }

    // other operators you would need to define
    MyFloat & operator/= (MyFloat const & rhs);
    MyFloat & operator*= (MyFloat const & rhs);

private:
    float m_val;
    bool  m_isInf;
    int   m_orderInf;
};
NOTE: You will need to give a lot of thought to how to treat both zeros and infinities. The above code is not fully thought out, but I hope it gives you something to think about.

Related

Overloading struct operator loses precision when multiplying doubles

I have a Point3d class for which I created a scalar multiplication operator like so:
Point3d operator*(double mul) { return {x*mul, y*mul, z*mul}; }
This does not seem to multiply doubles correctly. In my testing, when I multiply a Point3d object by 10^-6 and then check the x coordinate, I get .000002.
But when multiplying the components by hand I get 2.1584e-06.
The former appears to have only one significant digit, because if the multiplier is too small the Point3d goes to 0.0. Why is this happening?
If you write them both out without an exponent, you see:
.000002
.0000021584
That means they're equal, to the point where the former stops. So it's almost certainly the printing of the value that's giving you your difference. Try it with something like:
#include <cstdio>
:
printf("%.20f\n", myDouble); // I can't be bothered looking up the iomanip stuff :-)
This should show you more than the default number of digits.

C++: Create integer vector of infinities

I'm working on an algorithm and I need to initialize the vector of ints:
std::vector<int> subs(10)
of fixed length with values:
{-inf, +inf, +inf …. }
This is where I read that it is possible to use INT_MAX, but that's not quite correct, because the elements of my vector are supposed to be greater than any possible int value.
I liked the comparison-operator-overloading method from this answer, but how do you initialize the vector with infinitytype class objects if the elements are supposed to be ints?
Or maybe you know any better solution?
Thank you.
The solution depends on the assumptions your algorithm (or the implementation of your algorithm) has:
You could increase the element size beyond int (e.g. if your sizeof(int) is 4, use int64_t), and initialize to (int64_t) 1 + std::numeric_limits<int>::max() (and similarly for the negative values). But perhaps your algorithm assumes that you can't "exceed infinity" by adding or multiplying by positive numbers?
You could use an std::variant like other answers suggest, selecting between an int and infinity; but perhaps your algorithm assumes your elements behave like numbers?
You could use a ratio-based "number" class, ensuring it will not get non-integral values except infinity.
You could have your algorithm special-case the maximum and minimum integers.
You could use floats or doubles which support -/+ infinity, and restrict them to integrality.
etc.
So, again, it really just depends and there's no one-size-fits-all solution.
As already said in the comments, you can't have an infinity value stored in an int: all values of this type are well-defined and finite.
If you are OK with a vector of something that works as an infinity for ints, then consider using a type like this:
struct infinite
{ };

bool operator < (int, infinite)
{
    return true;    // every int is below infinity
}

bool operator < (infinite, int)
{
    return false;   // infinity is below no int
}
You can use a variant (for example, boost::variant), which supports double dispatch: it stores either an int or an infinity type (which should store the sign of the infinity, for example in a bool), and you then implement the comparison operators through a visitor.
But I think it would be simpler to use a double instead of an int and, whenever you take out a value that is not infinity, convert it to int. If performance is not that great of an issue, it will work fine (and probably still faster than a variant). If you need great performance, then just use INT_MAX and be done with it.
You are already aware of the idea of an "infinite" type, but that implementation can only represent infinite values. There's another related idea:
#include <cassert>

struct extended_int {
    enum Kind {NEGINF, FINITE, POSINF};
    Kind kind;
    int finiteValue;   // Only meaningful when kind==FINITE

    bool operator<(extended_int rhs) const {
        if (kind==POSINF) return false;       // +inf is not less than anything
        if (rhs.kind==NEGINF) return false;   // nothing is less than -inf
        if (kind==NEGINF) return true;        // -inf is less than everything else
        if (rhs.kind==POSINF) return true;    // every finite value is less than +inf
        assert(kind==FINITE && rhs.kind==FINITE);
        return finiteValue < rhs.finiteValue;
    }

    // Implicitly converting ctor
    constexpr extended_int(int value) : kind(FINITE), finiteValue(value) { }

    // And the two infinities (functions, since a static constexpr data member
    // of the class's own type would be incomplete at this point)
    static constexpr extended_int posinf() { return extended_int(POSINF); }
    static constexpr extended_int neginf() { return extended_int(NEGINF); }

private:
    constexpr extended_int(Kind k) : kind(k), finiteValue(0) { }
};
You now have extended_int(5) < extended_int(6) but also extended_int(5) < extended_int::posinf()

Why true statement false?

I have a class, vector2, which inherits from a "raw vector":
struct vector2raw {
    real_t x, y;
};

struct vector2 : public vector2raw {
    vector2() { null(); }
    vector2(real_t x, real_t y) { this->x = x; this->y = y; }
    vector2(const vector2 &v) { x = v.x; y = v.y; }
    // and so on
};
Now I want to compare two numbers: v.y = 4 from v = (5.41, 4), and min.y = 4 from min = (4, 4). The strange case occurs only when I compare two equal numbers; other cases are executed correctly. I always get false on (4 >= 4) (v.y >= min.y). What can be the problem?
real_t is defined as double.
UPD: this is written in C++
Apparently (you're not giving a reproducible example) you're comparing floating point numbers with ==.
That's an ungood idea unless those numbers happen to be integral values, and for beginners it's an ungood idea in general.
Two floating point values can appear to be equal, e.g. they give the same presentation when you don't request presentation of additional decimals, while in reality they differ in some otherwise very insignificant digit.
In the old days, beginners who encountered this problem used to be referred to "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
In the last few years I have been criticized for giving that reference, because real technical stuff and so on is, allegedly, too hard for today's students. People have suggested easier-to-digest alternatives, sort of like Wikipedia's simple edition. However, I can't remember any of them.

How to override the floating point tolerance in the boost::geometry::equals algorithm?

I am using the boost geometry library to compare two different polygons. Specifically, I am using the equals algorithm to see if two polygons are congruent (equal dimensions).
The problem is that the tolerance on the algorithm is too tight and two polygons that should be congruent (after some floating point operations) are not within the tolerance defined by the algorithm.
I'm almost certain that the library is using std::numeric_limits<double>::epsilon() (~2.22e-16) to establish the tolerance. I would like to set the tolerance to be larger (say 1.0e-10).
Any ideas on how to do this?
EDIT: I've changed the title to reflect the responses in the comments. Please respond to the follow-up below:
Is it possible to override just the boost::geometry::math::detail::equals<Type,true>::apply function?
This way I could replace only the code where the floating point comparison occurs and I wouldn't have to rewrite a majority of the boost::geometry::equals algorithm.
For reference, here is the current code from the boost library:
template <typename Type, bool IsFloatingPoint>
struct equals
{
    static inline bool apply(Type const& a, Type const& b)
    {
        return a == b;
    }
};

template <typename Type>
struct equals<Type, true>
{
    static inline Type get_max(Type const& a, Type const& b, Type const& c)
    {
        return (std::max)((std::max)(a, b), c);
    }

    static inline bool apply(Type const& a, Type const& b)
    {
        if (a == b)
        {
            return true;
        }
        // See http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.17,
        // FUTURE: replace by some boost tool or boost::test::close_at_tolerance
        return std::abs(a - b) <= std::numeric_limits<Type>::epsilon()
                                    * get_max(std::abs(a), std::abs(b), 1.0);
    }
};
The mentioned code can be found in boost/geometry/util/math.hpp, currently in Boost 1.56 or older (here on GitHub).
There is a free function boost::geometry::math::equals() that internally calls boost::geometry::math::detail::equals<>::apply(). So, to change the default behavior, you could overload this function or specialize the struct for some coordinate type or types. Bear in mind that in some algorithms that type may be promoted to a more precise type.
Of course you could also use your own, non-standard coordinate type and implement required operators or overload the function mentioned above.
But... you might consider describing a specific case where you think the calculated result is wrong, to be sure that this question is not an XY problem. Playing with epsilon might improve the result in some cases but make things worse in others. What if some parts of the algorithm not related to the comparison could be improved instead? It would also be helpful to say which version of Boost.Geometry you're using, the compiler, etc.

Largest Number < x?

In C++, let's say I have a number x of type T which can be an integer or floating point type. I want to find the largest number y of type T for which y < x holds. The solution needs to be templated to work transparently with both integers and floating point numbers. You may ignore the edge case where x is already the smallest number that can be represented in a T.
POSSIBLE USE CASE: This question was marked as too localized, hence I would like to provide a use case which I think is more general. Note that I'm not the original author of the OP.
Consider this structure:
struct lower_bound {
    lower_bound(double value, bool open) : value(open ? value+0.1 : value) {}
    double value;
    bool operator()(double x) { return x >= value; }
};
This class simulates a lower bound which can be either open or closed. Of course, in real (pun intended) life we cannot do this; the following is impossible (or at least quite tricky) to calculate for S being all real numbers.
However, when S is the set of floating point numbers, this is a perfectly valid principle, since we are dealing with an essentially countable set; and then there is no such thing as an open or closed bound. That is, >= can be defined in terms of > as done in the lower_bound class.
For code simplicity I used +0.1 to simulate an open lower bound. Of course, 0.1 is a crude value, as there may be values z such that value < z <= value+0.1, or value+0.1 == value, in a floating point representation. Hence @brett-hale's answer is very useful :)
You may think about another simpler solution:
struct lower_bound {
    lower_bound(double value, bool open) : open(open), value(value) {}
    bool open;
    double value;
    bool operator()(double x) { return open ? x > value : x >= value; }
};
However, this is less efficient: sizeof(lower_bound) is larger, and operator() has to execute a more complicated statement. The first implementation is really efficient, and can also be implemented simply as a plain double instead of a structure. Technically, the only reason to use the second implementation is that you assume a double is continuous, whereas it is not, and I guess it will not be anywhere in the foreseeable future.
I hope I have created and explained a valid use case, and that I have not offended the original author.
If you have C++11, you could use std::nextafter in <cmath> :
if (std::is_integral<T>::value)
    return (x - 1);
else
    return std::nextafter(x, -std::numeric_limits<T>::infinity());