How to floor a number using the NTL library (C++)

I am building a C++ program to verify a mathematical conjecture for up to 100 billion iterations. In order to test such high numbers, I cannot use a C++ int, so I am using the NTL library, using the type ZZ as my number type.
My algorithm looks like this:
ZZ generateNthSeq(ZZ n)
{
    return floor(n*sqrt(2));
}
I have the two libraries being imported:
#include <cmath>
#include <NTL/ZZ.h>
But obviously this cannot compile because I get the error:
$ g++ deepness*.cpp
deepness.cpp: In function ‘NTL::ZZ generateNthSeq(NTL::ZZ)’:
deepness.cpp:41: error: no matching function for call to ‘floor(NTL::ZZ)’
/usr/include/bits/mathcalls.h:185: note: candidates are: double floor(double)
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/cmath:262: note: long double std::floor(long double)
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/cmath:258: note: float std::floor(float)
Stating that the floor mathematical operation cannot accept a ZZ class type. But I need the numbers to be pretty big. How can I accomplish what I want to do, which is to floor the function, while using the NTL library?

Note that it doesn't really make sense to apply floor to an integral type (well, it does, it's just a no-op). What you should be really worried about is the fact that your code is apparently passing something of type ZZ into floor!
That is, what can n * sqrt(2) possibly mean here?
Also, before even writing that, I'd've checked the documentation to see if integer * floating point actually exists in the library -- usually for that to be useful at all, you need arbitrary precision floating types available.
Checking through the headers, there is only one multiplication operator:
ZZ operator*(const ZZ& a, const ZZ& b);
and there is a conversion constructor:
explicit ZZ(long a); // promotion constructor
I can't figure out how your code is even compiling. Maybe you're using a different version of the library than I'm looking at, and the conversion constructor is implicit, and your double is getting "promoted" to a ZZ. This is surely not what you want, since promoting sqrt(2) to a ZZ is simply going to give you the integer 1.
You either need to:
look into whether or not NTL has arbitrary precision floating point capabilities
switch to a library that does have arbitrary precision floating point capabilities
convert your calculation to pure integer arithmetic
That last one is fairly easy here: you want
return SqrRoot(sqr(n) * 2); // sqr(n) will be a bit more efficient than `n * n`
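For completeness, here is a minimal sketch of that pure-integer version (assuming a reasonably recent NTL where ZZ, sqr, and SqrRoot live in namespace NTL; the Beatty-sequence loop in main is only an illustration, not part of the question's program):

#include <iostream>
#include <NTL/ZZ.h>

using namespace NTL;

// floor(n * sqrt(2)) == floor(sqrt(2 * n^2)), so the whole computation can
// stay in exact integer arithmetic -- no floating point and no floor() needed.
ZZ generateNthSeq(const ZZ& n)
{
    return SqrRoot(sqr(n) * 2);   // SqrRoot(x) is floor(sqrt(x)) for x >= 0
}

int main()
{
    // First few terms of floor(k * sqrt(2)): 1 2 4 5 7 8 9 11 12 14
    for (long k = 1; k <= 10; ++k)
        std::cout << generateNthSeq(ZZ(k)) << ' ';
    std::cout << '\n';
}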

Related

Difference in behaviour of pow from math.h for same input [duplicate]

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
    int n, i, ele;
    n = 5;
    ele = pow(n, 2);
    printf("%d", ele);
    return 0;
}
The output is 24.
I'm using GNU/GCC in Code::Blocks.
What is happening?
I know the pow function returns a double, but 25 fits in an int, so why does this code print 24 instead of 25? If n=4, n=6, n=3, or n=2 the code works, but with 5 it doesn't.
Here is what may be happening. You should be able to confirm this by looking at your compiler's implementation of the pow function:
Assuming you have the correct #include's, (all the previous answers and comments about this are correct -- don't take the #include files for granted), the prototype for the standard pow function is this:
double pow(double, double);
and you're calling pow like this:
pow(5,2);
The pow function goes through an algorithm (probably using logarithms), thus uses floating point functions and values to compute the power value.
The pow function does not go through a naive "multiply the value of x a total of n times", since it has to also compute pow using fractional exponents, and you can't compute fractional powers that way.
So more than likely, the computation of pow using the parameters 5 and 2 resulted in a slight rounding error. When you assigned to an int, you truncated the fractional value, thus yielding 24.
If you are using integers, you might as well write your own "intpow" or similar function that simply multiplies the value the requisite number of times. The benefits of this are:
You won't get into the situation where you may get subtle rounding errors using pow.
Your intpow function will more than likely run faster than an equivalent call to pow.
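A minimal sketch of such an intpow (the name, signature, and overflow caveat are my own choices, not part of the standard library):

#include <cstdio>

// Multiply the base together exp times. Exact for integers, no rounding
// error -- but it can overflow long for large results.
long intpow(long base, unsigned exp)
{
    long result = 1;
    while (exp-- > 0)
        result *= base;
    return result;
}

int main()
{
    std::printf("%ld\n", intpow(5, 2));   // prints 25, exactly
}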
You want an int result from a function meant for doubles.
You should perhaps use
ele = (int)(0.5 + pow(n, 2));
/*     ^    ^                */
/*   cast   rounding         */
Floating-point arithmetic is not exact.
Although small values can be added and subtracted exactly, the pow() function normally works through logarithms (roughly exp(y * log(x))), so even if the inputs are both exact, the result is not. Assigning to int always truncates, so if the error is on the low side (24.999...), you'll get 24 rather than 25.
The moral of this story is to use integer operations on integers, and be suspicious of <math.h> functions when the actual arguments are to be promoted or truncated. It's unfortunate that GCC doesn't warn unless you add -Wfloat-conversion (it's not in -Wall -Wextra, probably because there are many cases where such conversion is anticipated and wanted).
For integer powers, it's always safer and faster to use multiplication (division if negative) rather than pow() - reserve the latter for where it's needed! Do be aware of the risk of overflow, though.
When you use pow with variables, its result is double. Assigning to an int truncates it.
So you can avoid this error by assigning the result of pow to a double or float variable.
So basically, pow translates to exp(log(x) * y), which produces a result that isn't precisely the same as x^y, just a near approximation as a floating-point value. So, for example, 5^2 may come out as 24.9999996 or 25.00002.

C++ pow unusual type conversion

When I directly output std::pow(10,2), I get 100, while doing (long)(pow(10,2)) gives 99. Can someone explain this, please?
cout<<pow(10,2)<<endl;
cout<<(long)(pow(10,2))<<endl;
The code is basically this in the main function.
The compiler is mingw32-g++.exe -std=c++11, using Code::Blocks on Windows 8.1, if that helps.
Floating point numbers are approximations. Occasionally you get a number that can be exactly represented, but don't count on it. 100 should be representable, but in this case it isn't. Something injected an approximation and ruined it for everybody.
When converting from a floating point type to an integer, the integer cannot hold any fractional values so they are unceremoniously dropped. There is no implicit rounding off, the fraction is discarded. 99.9 converts to 99. 99 with a million 9s after it is 99.
So before converting from a floating point type to an integer, round the number, then convert. Unless discarding the fraction is what you want to do.
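For example, a small sketch of round-then-convert (std::lround is from <cmath>, C++11, and rounds to the nearest integer before the conversion):

#include <cmath>
#include <iostream>

int main()
{
    double d = std::pow(10, 2);        // may be exactly 100.0, or 99.999... on some setups
    long truncated = (long)d;          // the fraction is dropped: could come out as 99
    long rounded   = std::lround(d);   // rounds to nearest first: 100
    std::cout << truncated << ' ' << rounded << '\n';
}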
cout, and most output routines, politely and silently round floating point values before printing, so if there is a bit of an approximation the user isn't bothered with it.
This inexactness is also why you shouldn't directly compare floating point values. X probably isn't exactly pi, but it might be close enough for your computations, so you perform the comparison with an epsilon, a fudge factor, to tell if you are close enough.
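A minimal sketch of such an epsilon comparison (the tolerance values here are arbitrary placeholders; pick ones that suit your computation):

#include <algorithm>
#include <cmath>
#include <iostream>

// "Close enough" comparison: an absolute floor plus a tolerance relative to
// the magnitude of the inputs.
bool almostEqual(double a, double b, double relTol = 1e-9, double absTol = 1e-12)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= std::max(absTol, relTol * scale);
}

int main()
{
    std::cout << std::boolalpha
              << almostEqual(std::pow(10, 2), 100.0) << '\n';  // true
}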
What I find amusing, and burned a lot of time trying to sort out, is that this problem would not even have been visible if not for using namespace std;.
(long)pow(10,2) provides the expected result of 100. (long)std::pow(10,2) does not. Some difference in the path from 10,2 to 100 taken by pow and std::pow results in slightly different results. By pulling the entire std namespace into their file, OP accidentally shot themselves in the foot.
Why is that?
Up at the top of the file we have using namespace std; this means the compiler is not just considering double pow(double, double) when looking for pow overloads, it can also call std::pow and std::pow is a nifty little template making sure that when called with datatypes other than float and double the right conversions are taking place and everything is the same type.
(long)(pow(10,2))
Does not match
double pow(double, double)
as well as it matches a template instantiation of
double std::pow(int, int)
Which, near as I can tell resolves down to
return pow(double(10), double(2));
after some template voodoo.
What the difference between
pow(double(10), double(2))
and
pow(10, 2)
with an implied conversion from int to double on the call to pow is, I do not know. Call in the language lawyers because it's something subtle.
If this is purely a rounding issue then
auto tempa = std::pow(10, 2);
should be vulnerable because tempa should be exactly what std::pow returns
cout << tempa << endl;
cout << (long) tempa << endl;
and the output should be
100
99
I get
100
100
So immediately casting the return of std::pow(10, 2) into a long is different from storing and then casting. Weird. Either auto tempa is not exactly what std::pow returns, or there is something else going on that is too deep for me.
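For anyone who wants to try this themselves, here is the whole experiment as a self-contained program (a sketch; the commented results are the ones reported above and will vary by compiler and runtime):

#include <cmath>
#include <iostream>

int main()
{
    std::cout << std::pow(10, 2) << '\n';        // 100
    std::cout << (long)std::pow(10, 2) << '\n';  // 99 on the OP's MinGW setup, 100 elsewhere

    auto tempa = std::pow(10, 2);                // tempa is a double
    std::cout << tempa << '\n';                  // 100
    std::cout << (long)tempa << '\n';            // 100 in the experiment above
}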
These are the std::pow overloads:
float pow( float base, float exp );
double pow( double base, double exp );
long double pow( long double base, long double exp );
float pow( float base, int iexp );//(until C++11)
double pow( double base, int iexp );//(until C++11)
long double pow( long double base, int iexp ); //(until C++11)
Promoted pow( Arithmetic1 base, Arithmetic2 exp ); //(since C++11)
But your strange behaviour is down to MinGW's weirdness about double storage and how the Windows run-time doesn't like it. I'm assuming Windows is seeing something like 99.9999, and when that is cast to an integral type it takes the floor.
int a = 3 / 2; // a is 1 (the fraction is simply dropped)
MinGW uses the Microsoft C run-time libraries, and their implementation of printf does not support the 'long double' type. As a work-around, you could cast to 'double' and pass that to printf instead.
Therefore, you effectively only get double precision:
On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment), as specified in the C99 / C11 standards (IEC 60559 floating-point arithmetic (Annex F)). An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double.[2] The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong‑double switch for long double to correspond to the hardware's extended precision format.[3]
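The work-around mentioned above, as a small sketch: narrow the long double to double before handing it to printf when targeting MinGW/MSVCRT:

#include <cstdio>

int main()
{
    long double ld = 100.0L;
    // MSVCRT's printf does not understand long double, so narrow to double first.
    std::printf("%f\n", (double)ld);
}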

Should I always use the appropriate literals for number types?

I'm often using the wrong literals in expressions, e.g. dividing a float by an int, like this:
float f = read_f();
float g = f / 2;
I believe that the compiler will in this case first convert the int literal (2) to float, and then apply the division operator. GCC and Clang have always let stuff like that pass, but Visual C++ warns about an implicit conversion. So I have to write it like this:
float f = read_f();
float g = f / 2.0f;
That got me wondering: Should I always use the appropriate literals for float, double, long etc.? I normally use int literals whenever I can get away with it, but I'm not sure if that's actually a good idea.
Is this a likely cause of subtle errors?
Is this only an issue for expressions or also for function parameters?
Are there warning levels for GCC or Clang that warn about such implicit conversions?
How about unsigned int, long int etc?
You should always explicitly indicate the type of literal that you intend to use. This will prevent problems when for example this sort of code:
float foo = 9.0f;
float bar = foo / 2;
changes to the following, truncating the result:
int foo = 9;
float bar = foo / 2;
It's a concern with function parameters as well when you have overloading and templates involved.
I know gcc has -Wconversion but I can't recall everything that it covers.
For integer values that fit in an int I usually don't qualify them as long or unsigned, as there is usually much less chance of subtle bugs there.
There's pretty much never an absolutely correct answer to a "should" question. Who's going to use this code, and for what? That's relevant here. But also, particularly for anything to do with floats, it's good to get into the habit of specifying exactly the operations you require: float * float is done in single precision, anything involving a double is done in double precision, and a bare 2.0 drags the expression up to double, so you're specifying different operations in each case.
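A small sketch of that point (the initial value of f is just a stand-in for the read_f() call in the question):

#include <iostream>

int main()
{
    float f = 0.1f;          // stand-in for read_f() from the question

    float a = f / 2;         // 2 converts to float: single-precision divide
    float b = f / 2.0;       // f promotes to double: double-precision divide,
                             // then the result narrows back to float (MSVC / -Wconversion warn here)
    float c = f / 2.0f;      // pure single-precision, no conversions at all

    std::cout << a << ' ' << b << ' ' << c << '\n';
}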
The best answer here is What Every Computer Scientist Should Know About Floating-Point Arithmetic. I'd say don't tl;dr it, there are no simple answers with floating point.

Compiler error: unmatched call to pow(...)

My company has a piece of software, sporting a rather large codebase. Recently I was assigned the task of checking whether the code would compile on an x86_64 target using gcc 4.1.2. I've gotten pretty far in the compilation with very minor modifications to the code, but just this morning I got a somewhat confusing compile error.
The code is trying, and failing, to call pow from <cmath> using int, unsigned int& as parameters. The compiler spits out an error because it can't find a suitable match to call. The overloads for pow in <cmath> are as follows:
double pow(double base, double exponent)
long double pow(long double base, long double exponent)
float pow(float base, float exponent)
double pow(double base, int exponent)
long double pow(long double base, int exponent)
I'm not quite sure as to why this builds in our 32-bit environments, but that's beside the point right now.
My question is: how should I cast the parameters, which pow should I use? Thanks.
P.S. I can't change the datatype of the parameters, as doing so would require too much work. My assignment is to get the code to compile, detailing any hacks I make so that later we can go over those hacks and find proper ways to deal with them.
If you are making many calls to pow(int, unsigned int) why don't you just code it by yourself? If execution speed is not an issue, it's not much work.
Otherwise, I'd use a pow() overload whose input parameters are guaranteed to contain your expected values, such as pow(float, float) or pow(double, double). Anyway, I feel that making your own version could prevent problems with conversion between floating point and integer.
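A sketch of such a hand-rolled version, matching the pow(int, unsigned int) call in question (the name ipow and the use of long long for the result are my own choices; exponentiation by squaring keeps it cheap even for large exponents):

#include <cstdio>

// Integer power by repeated squaring; exact, but the caller must make sure
// the result fits in long long -- there is no overflow check here.
long long ipow(int base, unsigned int exp)
{
    long long result = 1;
    long long b = base;
    while (exp > 0) {
        if (exp & 1u)
            result *= b;
        exp >>= 1u;
        if (exp > 0)
            b *= b;          // only square while more bits remain
    }
    return result;
}

int main()
{
    std::printf("%lld\n", ipow(2, 30));  // 1073741824
}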
The result will always be a whole number with these argument types.
Depending on the expected range of the arguments, especially the exponent, you should choose the float, double, or long double version.
So that would become
pow( (float) i, (int)ui );
You can find the allowed range of arguments by solving the inequality pow(i, ui) < the maximum value of the floating-point type you chose.

strange double to int conversion behavior in c++

The following program shows the weird double to int conversion behavior I'm seeing in c++:
#include <stdlib.h>
#include <stdio.h>

int main() {
    double d = 33222.221;
    printf("d = %9.9g\n", d);
    d *= 1000;
    int i = (int)d;
    printf("d = %9.9g | i = %d\n", d, i);
    return 0;
}
When I compile and run the program, I see:
g++ test.cpp
./a.out
d = 33222.221
d = 33222221 | i = 33222220
Why is i not equal to 33222221?
The compiler version is GCC 4.3.0
Floating point representation is almost never precise (only in special cases). Every programmer should read this: What Every Computer Scientist Should Know About Floating-Point Arithmetic
In short - your number is probably 33222220.99999999999999999999999999999999999999999999999999999999999999998 (or something like that), which becomes 33222220 after truncation.
When you attach a debugger and inspect the values, you will see that the value of d is actually 33222220.999999996, which is correctly truncated to 33222220 when converted to integer.
There is a finite amount of numbers that can be stored in a double variable, and 33222221 is not one of them.
Due to floating point approximation, 33222.221 may actually be 33222.220999999999999. Multiplied by 1000 yields 33222220.999999999999. Casting to integer ignores all decimals (round down) for a final result of 33222220.
If you change the "9.9g" in your printf() calls to "17.17g", to recover all the digits of precision a 64-bit IEEE 754 double can carry, you get 33222220.999999996 for the double value. The int conversion then makes sense.
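To see this directly, here is the question's program with the wider format and a round-then-convert added (a sketch; the exact digits printed depend on your platform):

#include <cstdio>

int main()
{
    double d = 33222.221;
    std::printf("d = %17.17g\n", d);        // e.g. 33222.220999999998
    d *= 1000;
    int truncated = (int)d;                 // fraction dropped: 33222220 here
    int rounded   = (int)(d + 0.5);         // round first (positive values): 33222221
    std::printf("d = %17.17g | %d | %d\n", d, truncated, rounded);
    return 0;
}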
I don't want to repeat the explanations in the other answers.
So, here is just some advice for avoiding problems like the one described:
Avoid floating-point arithmetic in the first place wherever possible (especially when further computation is involved).
If floating-point arithmetic really is necessary, do not compare numbers with operator==. Use your own comparison function instead (or one supplied by a library) that does an "is almost equal" comparison using some kind of epsilon (either absolute or relative to the numbers' magnitude).
See for example the excellent article
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
by Bruce Dawson.
Stefan