Why is running std::abs over a big complex array about 8 times slower than using sqrt and norm?
#include <ctime>
#include <cmath>
#include <vector>
#include <complex>
#include <iostream>
using namespace std;
int main()
{
    typedef complex<double> compd;

    vector<compd> arr(2e7);
    for (compd& c : arr)
    {
        c.real(rand());
        c.imag(rand());
    }

    double sm = 0;
    clock_t tm = clock();
    for (const compd& c : arr)
    {
        sm += abs(c);
    }
    cout << sm << ' ' << clock() - tm << endl; // 5.01554e+011 - 1640 ms

    sm = 0;
    tm = clock();
    for (const compd& c : arr)
    {
        sm += sqrt(norm(c));
    }
    cout << sm << ' ' << clock() - tm << endl; // 5.01554e+011 - 154 ms

    sm = 0;
    tm = clock();
    for (const compd& c : arr)
    {
        sm += hypot(c.real(), c.imag());
    }
    cout << sm << ' ' << clock() - tm << endl; // 5.01554e+011 - 221 ms
}
I believe the two are not to be taken as identical in the strict sense.
From cppreference on std::abs(std::complex):
Errors and special cases are handled as if the function is implemented as std::hypot(std::real(z), std::imag(z))
Also from cppreference on std::norm(std::complex):
The norm calculated by this function is also known as field norm or absolute square.
The Euclidean norm of a complex number is provided by std::abs, which is more costly to compute. In some situations, it may be replaced by std::norm, for example, if abs(z1) > abs(z2) then norm(z1) > norm(z2).
In short, there are cases where each function gives a different result. Some of these cases can be found in the documentation for std::hypot, where the notes also mention the following:
std::hypot(x, y) is equivalent to std::abs(std::complex<double>(x,y))
In general the accuracy of the result may be different (due to the usual floating point mess), and it seems the functions were designed in such a way as to be as accurate as possible.
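As the quoted note says, norm can stand in for abs in magnitude comparisons. A quick sketch of that substitution (my own illustration, not from cppreference):

#include <complex>
#include <iostream>

int main()
{
    std::complex<double> z1(3, 4), z2(1, 2);
    // Comparing norms avoids the square root; the ordering is the same
    // as comparing absolute values (barring overflow/underflow).
    std::cout << (std::abs(z1) > std::abs(z2)) << '\n';   // 1
    std::cout << (std::norm(z1) > std::norm(z2)) << '\n'; // 1
}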
The main reason is that abs handles underflow and overflow during intermediate computations.
So, if norm under- or overflows, your formula returns an incorrect/inaccurate result, while abs will return the correct one. For example, if your input numbers are in the range of 10^200, then the result should be around 10^200 as well, but your formula will give you inf (or a floating point exception), because the intermediate norm is around 10^400, which is out of range. (Note, I've assumed IEEE-754 64-bit floating point here.)
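A quick way to see this (a minimal sketch; the inputs are chosen just to force the intermediate overflow):

#include <cmath>
#include <complex>
#include <iostream>

int main()
{
    // The magnitude of z is about 1.4e200, but norm(z) is about 2e400,
    // which overflows an IEEE-754 double (max is about 1.8e308).
    std::complex<double> z(1e200, 1e200);
    std::cout << std::sqrt(std::norm(z)) << '\n'; // inf (intermediate overflow)
    std::cout << std::abs(z) << '\n';             // about 1.41421e+200 (correct)
}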
Another reason is that abs may give a little bit more precise result.
If you don't need to handle these cases, because your input numbers are "well-behaved" (and don't need the possible more precise result), feel free to use your formula.
Related
I am a beginner in C++ and with the "chrono" library, and I'd like to use it to get the speed of a motor.
For that, I have a coding wheel linked to a motor; an optocoupler is used to gather the square signal generated by the coding wheel.
Therefore, my Raspberry Pi receives a square signal whose frequency depends on the motor speed.
I used chrono to try to measure the period of the square signal.
I managed to get the duration of each period (almost), which is 7 ms.
I'd like to simply extract the frequency through the formula 1/T (therefore, 1/0.007 = 142.85).
I've been poring over the chrono documentation for a week, and I still don't get it at all...
Apparently, all the answers are here, but I don't understand them; I'm still a beginner in C++ :( https://en.cppreference.com/w/cpp/chrono
This has been REALLY useful, but limited: https://www.code57.com/cplusplus-programming-beginners-tutorial-utilities-chrono/
If I understand right, the "value" of 7 ms is stored in an "object"...
How can I simply get it out of there and put it in a standard variable so I can divide, multiply and do whatever I want with it?
Here is the interesting part of the C++ code:
#include <iostream>
#include <wiringPi.h>
#include <cstdio>
#include <csignal>
#include <ctime>
#include <chrono>
// global flag used to exit from the main loop
bool RUNNING = true;
bool StartTimer = false;
//int timer = 0;
std::chrono::steady_clock::time_point BeginMeasurement; //chrono variable representing the beginning of the measurement of a motor speed
// some more code in here, but nothing exceptional, just calling the interrupt when needed

// interrupt handler for counting the motor speed
void RPMCounter(){
    using namespace std;
    using namespace std::chrono;
    if (StartTimer == true){
        StartTimer = false;
        steady_clock::duration result = steady_clock::now() - BeginMeasurement;
        if (duration_cast<milliseconds>(result).count() < 150){
            double freq;
            //cout.precision(4);
            std::cout << "Time = " << duration_cast<milliseconds>(result).count() << " ms" << '\n';
            // I would like the next line to work and give me the frequency of the detection...
            freq = 1/(duration_cast<milliseconds>(result).count()/1000);
            std::cout << "Frequency = " << freq << " Hz" << '\n';
        }
    }
    else{
        BeginMeasurement = steady_clock::now();
        StartTimer = true;
    }
}
Here is the result in my command prompt:
(the value of 7 ms increases because I stopped the motor, so it was turning slower until it stopped ;)
Edit:
Thanks to Howard Hinnant and Ted Lyngmo, my code now looks like this:
void RPMCounter(){
    using namespace std;
    using namespace std::chrono;
    if (StartTimer == true){
        StartTimer = false;
        duration<double> result = steady_clock::now() - BeginMeasurement;
        if (result < milliseconds{150}){
            double freq; //= 1s / result;
            //cout.precision(4);
            std::cout << "Time = " << duration_cast<milliseconds>(result).count() << " ms" << '\n';
            freq = (1.0/(duration<double>{result}.count()/1000))/1000;
            std::cout << "Frequency = " << freq << " Hz" << '\n';
        }
    }
    else{
        BeginMeasurement = steady_clock::now();
        StartTimer = true;
    }
}
and it seems to give me a correct frequency.
As I'm a beginner, I'll surely understand all that better in a while and improve it :)
(Basically, I'm not exactly sure what everything I wrote means... like the "::" and other bits of syntax :)
The rest of my coding should be more basic and allow me to learn all the tweaks of C++
if (duration_cast<milliseconds>(result).count() < 150){
You can simplify this with:
if (result < 150ms)
Or if you're in C++11:
if (result < milliseconds{150})
The advantage is that you don't have to truncate result to a coarser precision, and the code is just easier to read.
freq = 1/(duration_cast<milliseconds>(result).count()/1000);
Instead:
using dsec = duration<double>; // define a double-based second
auto freq = 1/dsec{result}.count();
This could also be written:
auto freq = 1/duration<double>{result}.count();
In any event, this converts result straight to double-based seconds and inverts that value using floating point arithmetic. The original code uses integral division, producing an integral result that here always rounds down to 0. I.e. 1/10 == 0, whereas 1/10. == 0.1.
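A two-line illustration of that pitfall:

#include <iostream>

int main()
{
    // Integer division truncates; a trailing dot makes the literal a double.
    std::cout << 1 / 10 << '\n';  // prints 0
    std::cout << 1 / 10. << '\n'; // prints 0.1
}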
I'd make the result a double based duration:
auto BeginMeasurement = std::chrono::steady_clock::now();
// some work
// a double based duration
std::chrono::duration<double> result = std::chrono::steady_clock::now() - BeginMeasurement;
You can then divide the duration 1s with result to get the frequency:
using namespace std::chrono_literals;
double freq = 1s / result;
std::cout << freq << " Hz\n";
Howard Hinnant pointed out that from C++14 you can make it even easier for yourself by changing the dividend from an integer based duration, 1s, to a double based duration, 1.0s, and let result be deduced using auto:
auto result = std::chrono::steady_clock::now() - BeginMeasurement;
double freq = 1.0s / result;
Demo
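Putting the pieces together, a minimal self-contained sketch (assuming a C++14 compiler for the chrono literals; std::this_thread::sleep_for stands in for waiting on the optocoupler signal):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    using namespace std::chrono_literals;

    auto BeginMeasurement = steady_clock::now();
    std::this_thread::sleep_for(7ms);            // stands in for one period of the square signal
    auto result = steady_clock::now() - BeginMeasurement;

    double freq = 1.0s / result;                 // double-based seconds divided by the measured duration
    std::cout << freq << " Hz\n";                // roughly 140 Hz
}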
I am trying this:
std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
and expecting the output to be:
0.0009
But the output is:
0.00089999999999999998
g++ version: 5.4.0, Boost version: 1.66
What can I do to make it print what it's been given?
You can in fact override the default precision:
Live On Coliru
#include <boost/lexical_cast.hpp>
#ifdef BOOST_LCAST_NO_COMPILE_TIME_PRECISION
# error unsupported
#endif
template <> struct boost::detail::lcast_precision<double> : std::integral_constant<unsigned, 5> { };
#include <string>
#include <iostream>
int main() {
    std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
}
Prints
0.0009
However, this is both not supported (detail::) and not flexible (all doubles will come out this way now).
The Real Problem
The problem is loss of accuracy converting from the decimal representation to the binary representation. Instead, use a decimal float representation:
Live On Coliru
#include <boost/lexical_cast.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <string>
#include <iostream>
using Double = boost::multiprecision::cpp_dec_float_50;
int main() {
    Double x("0.009"),
           y = x * 2,
           z = x / 77;

    for (Double v : { x, y, z }) {
        std::cout << boost::lexical_cast<std::string>(v) << "\n";
        std::cout << v << "\n";
    }
}
Prints
0.009
0.009
0.018
0.018
0.000116883
0.000116883
boost::lexical_cast doesn't allow you to specify the precision when converting a floating point number into its string representation. From the documentation
For more involved conversions, such as where precision or formatting need tighter control than is offered by the default behavior of lexical_cast, the conventional std::stringstream approach is recommended.
So you could use stringstream
#include <iomanip>
#include <iostream>
#include <sstream>

double d = 0.0009;
std::ostringstream ss;
ss << std::setprecision(4) << d;
std::cout << ss.str() << '\n';
Or another option is to use the boost::format library.
#include <boost/format.hpp>

std::string s = (boost::format("%1$.4f") % d).str();
std::cout << s << '\n';
Both will print 0.0009.
0.0009 is a double precision floating literal with, assuming IEEE754, the value
0.00089999999999999997536692664112933925935067236423492431640625
That's what boost::lexical_cast<std::string> sees as the function parameter. And boost::lexical_cast formats floating point values with enough digits to round-trip them exactly (17 significant figures for a double), which yields:
0.00089999999999999998
Really, if you want exact decimal precision, then use a decimal type (Boost has one), or work in integers and splice in the decimal separator yourself. But in your case, given that you're simply outputting the number with no complex calculations, rounding to the 15th significant figure will have the desired effect: inject
std::setprecision(15)
into the output stream.
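To illustrate (a minimal sketch; the printed values assume IEEE-754 doubles):

#include <iomanip>
#include <iostream>

int main()
{
    std::cout << 0.0009 << '\n';                          // 0.0009 (default precision is 6)
    std::cout << std::setprecision(17) << 0.0009 << '\n'; // 0.00089999999999999998
    std::cout << std::setprecision(15) << 0.0009 << '\n'; // 0.0009
}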
This is not a question about template hacks or dealing with compiler quirks. I understand why the Boost libraries are the way they are. This is about the actual algorithm used for the sinc_pi function in the Boost math library.
The function sinc(x) is equivalent to sin(x)/x.
In the documentation for the Boost math library's sinc_pi(), it says "Taylor series are used at the origin to ensure accuracy". This seems nonsensical since division of floating point numbers will not cause any more loss of precision than a multiplication would. Unless there's a bug in a particular implementation of sin, the naive approach of
double sinc(double x) {if(x == 0) return 1; else return sin(x)/x;}
seems like it would be fine.
I've tested this, and the maximum relative difference between the naive version and the one in the Boost math toolkit is only about half the epsilon for the type used, for both float and double, which puts it at the same scale as a discretization error. Furthermore, this maximum difference does not occur near 0, but near the end of the interval where the Boost version uses a partial Taylor series (i.e. abs(x) < epsilon**(1/4)). This makes it look like it is actually the Taylor series approximation which is (very slightly) wrong, either through loss of accuracy near the ends of the interval or through the repeated rounding from multiple operations.
Here are the results of the program I wrote to test this, which iterates through every float between 0 and 1 and calculates the relative difference between the Boost result and the naive one:
Test for type float:
Max deviation from Boost result is 5.96081e-08 relative difference
equals 0.500029 * epsilon
at x = 0.0185723
which is epsilon ** 0.25003
And here is the code for the program. It can be used to perform the same test for any floating-point type, and takes about a minute to run.
#include <cmath>
#include <iostream>
#include <limits>
#include "boost/math/special_functions/sinc.hpp"

template <class T>
T sinc_naive(T x) { using namespace std; if (x == 0) return 1; else return sin(x) / x; }

template <class T>
void run_sinc_test()
{
    using namespace std;

    T eps = std::numeric_limits<T>::epsilon();
    T max_rel_err = 0;
    T x_at_max_rel_err = 0;

    for (T x = 0; x < 1; x = nextafter(x, T(1))) // step to the next representable value of T
    {
        T boost_result = boost::math::sinc_pi(x);
        T naive_result = sinc_naive(x);
        if (boost_result != naive_result)
        {
            T rel_err = abs(boost_result - naive_result) / boost_result;
            if (rel_err > max_rel_err)
            {
                max_rel_err = rel_err;
                x_at_max_rel_err = x;
            }
        }
    }

    cout << "Max deviation from Boost result is " << max_rel_err << " relative difference" << endl;
    cout << "equals " << max_rel_err / eps << " * epsilon" << endl;
    cout << "at x = " << x_at_max_rel_err << endl;
    cout << "which is epsilon ** " << log(x_at_max_rel_err) / log(eps) << endl;
    cout << endl;
}

int main()
{
    using namespace std;

    cout << "Test for type float:" << endl << endl;
    run_sinc_test<float>();
    cout << endl;

    cin.ignore();
}
After some sleuthing, I dug up a discussion from the original authors.
[sin(x)] is well behaved at x=0, and so is sinc(x). […] my solution
will have better performance for small argument, i.e. |x| < pow(x, 1/6),
since most processors need much more time to evaluate sin(x) than
1 - (1/6) * x * x.
From https://lists.boost.org/Archives/boost/2001/05/12421.php.
The earliest reference I found to using Taylor expansion to ensure accuracy is from much later, and committed by a different person. So it seems like this is about performance, not accuracy. If you want to make sure, you might want to get in touch with the people involved.
Regarding sinc_pi specifically, I found the following exchange. Note that they use sinc_a to refer to the family of functions of the form sin(x*a)/(x*a).
What is the advantage of sinc_a(x)? To address rounding problems for very
large x? Then it would be more important to improve sin(x) for very large
arguments.
The main interest of this particular member of the family is that it requires fewer computations, and that, in itself,
it is a special function, as it is far more common than its brethren.
From https://lists.boost.org/Archives/boost/2001/05/12485.php.
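For reference, here is a minimal sketch of the small-argument shortcut described in the first quote (sinc_sketch is a hypothetical name, and the eps^(1/4) cutoff is taken from the question's observation; this is an illustration, not Boost's actual implementation):

#include <cmath>
#include <limits>

// Below a threshold, a truncated Taylor series replaces the call to
// sin(x)/x, which is cheaper than evaluating sin on most processors.
template <class T>
T sinc_sketch(T x)
{
    const T taylor_bound = std::sqrt(std::sqrt(std::numeric_limits<T>::epsilon()));
    if (std::abs(x) < taylor_bound)
        return T(1) - x * x / T(6); // 1 - x^2/6, first two Taylor terms of sin(x)/x
    return std::sin(x) / x;
}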
When I am writing code and using floor on mathematical expressions that are likely to be almost an integer, I worry about the results. In particular, since numbers are stored in base 2 in memory, I always feel there is a chance the code gives me n-1 when I expect n. Is there a mechanism in floor to prevent such numerical errors?
#include <iostream>
#include <cmath>
int main() {
    std::cout << std::floor((10 - 8.3) * 10) << std::endl; // 16
    std::cout << std::floor(100 - 83) << std::endl;        // 17
    return 0;
}
// Acknowledgement: thanks to user Ryan
ideone test
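Printing the intermediate value at full precision shows what floor actually receives (a small sketch, assuming IEEE-754 doubles):

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    // 8.3 has no exact binary representation, so (10 - 8.3) * 10
    // lands just below 17, and floor correctly returns 16.
    std::cout << std::setprecision(17) << (10 - 8.3) * 10 << '\n'; // ~16.999999999999993
    std::cout << std::floor((10 - 8.3) * 10) << '\n';              // 16
}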
I'm displaying a large number of doubles on the console, and I would like to know in advance how many decimal places std::cout will decide to display for a given double. This is basically so I can make it look pretty in the console.
e.g. (pseudo-code)
field_width = find_maximum_display_precision_that_cout_will_use( whole_set_of_doubles );
...
// Every cout statement:
std::cout << std::setw( field_width ) << double_from_the_set << std::endl;
I figure cout "guesses"? a good precision to display based on the double. For example, it seems to display
std::cout << sqrt(2) << std::endl;
as 1.41421, but also
std::cout << (sqrt(0.5)*sqrt(0.5) + sqrt(1.5)*sqrt(1.5)) << std::endl;
as 2 (rather than 2.000000000000?????? or 1.99999999?????). Well, maybe this calculates to exactly 2.0, but I don't think that sqrt(2) will calculate to exactly 1.41421, so std::cout has to make some decision about how many decimal places to display at some point, right?
Is there any way to predict this, in order to formulate a find_maximum_display_precision...() function?
What you need is the fixed iomanip.
http://www.cplusplus.com/reference/iostream/manipulators/fixed/
double d = 10.0 / 3; // note: 10/3 would do integer division and yield 3
std::cout << std::setprecision(5) << std::fixed << d << std::endl;
Sometimes C++ I/O bites. Making pretty output is one of those sometimes. The C printf family is easier to control, more understandable, more terse, and isn't plagued with those truly awful ios:: global variables. If you need to use C++ output for other reasons, you can always sprintf/snprintf to a string buffer and then print that using the << stream operator. IMHO, if you don't need to use C++ output, don't. It is ugly and verbose.
In your question you are mixing precision and width, which are two different things.
Other answers concentrate on precision, but the given precision is a maximum, not a minimum, of displayed digits. It does not pad with trailing zeros unless ios::fixed or ios::scientific is set.
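For example (a quick sketch):

#include <iomanip>
#include <iostream>

int main()
{
    std::cout << std::setprecision(6) << 2.0 << '\n';               // "2" (no trailing zeros)
    std::cout << std::fixed << std::setprecision(6) << 2.0 << '\n'; // "2.000000"
}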
Here is a solution to determine the number of characters used for output, including sign and powers of 10:
#include <string>
#include <sstream>
#include <vector>
size_t max_width(const std::vector<double>& v)
{
    size_t max = 0;
    for (size_t i = 0; i < v.size(); ++i)
    {
        std::ostringstream out;
        // optional: set precision, width, etc. to the same as in std::cout
        out << v[i];

        size_t length = out.str().size();
        if (length > max) max = length;
    }
    return max;
}
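A possible usage, building on the max_width function above (the values are arbitrary):

#include <iomanip>
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> values{ 1.41421, 2, -0.000116883 };
    size_t w = max_width(values); // widest default-formatted representation

    for (double v : values)
        std::cout << std::setw(w) << v << '\n'; // right-aligned in a common column
}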
std::cout.precision(); use it to determine the precision.
example :
#include <iostream>
#include <iomanip>

int main(void)
{
    double x = 3.1415927;
    std::cout << "Pi is " << std::setprecision(4) << x << std::endl;
    return 0;
}
This would display:
Pi is 3.142
This link also includes an explanation of std::cout.precision():
http://www.cplusplus.com/reference/iostream/ios_base/precision/