Brute-force equation solving in C++

I'm writing a program that uses brute-force to solve an equation. Unfortunately, I seem to have an error in my code somewhere, as my program stops at search = 0.19999. Here is the code:
#include <iostream>
#include <cmath>
#include <vector>

#define min -4.0
#define max 6.5

using namespace std;

double fx(double x) {
    long double result = cos(2 * x) - 0.4 * x;
    double scale = 0.00001;
    double value = (int)(result / scale) * scale;
    return value;
}

int sign(double a) {
    if (a < 0) return -1;
    if (a == 0) return 0;
    else return 1;
}

int main() {
    vector<double> results;
    double step, interval, start, end, search;

    interval = (fabs(min) + fabs(max)) / 50;
    step = 0.00001;
    start = min;
    end = min + interval;
    search = start;

    while (end <= max) {
        if (sign(start) != sign(end)) {
            search = start;
            while (search < end) {
                if (fx(search) == 0) results.push_back(search);
                search = search + step;
            }
        }
        start = end;
        end = start + interval;
    }

    for (int i = 0; i < results.size(); i++) {
        cout << results[i] << endl;
    }
}
I've been looking at it for quite some time now and I still can't find the error in the code.
The program should check whether there is a root in each interval and, if so, test every candidate in that interval. If it finds a root, it should push it into the results vector.

I know you already found the answer, but I just spotted another problem while trying to find the bug. In your inner loop you make the following comparison:
if(fx(search) == 0)
Since your fx function returns a double, it's generally not advisable to test with the equality operator when dealing with double-precision floating-point numbers. Your result will probably never be exactly 0, so this test will never return true. You should instead compare against a maximum error margin, like this:
double maximum_error = 0.005;
if (fabs(fx(search)) < maximum_error)
I think that would do the trick in your case.
Even if it works right now, micro changes in your input numbers, CPU architecture, or even compiler flags may break your program. Comparing doubles for exact equality in C++ is highly dangerous, even though it's legal to do so.

I've just made a run through the code again and found the error.
if(sign(start) != sign(end))
was the culprit. There will be a root if the values of f(x) at start and end have different signs, but I wrote the test as if different signs of start and end themselves implied a root. Sorry for the fuss.
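For reference, a minimal sketch of the scan with both fixes applied: the sign test uses f(x) at the interval ends, and the root test uses an error margin instead of exact equality (which also makes the truncation inside fx unnecessary). Note that with a margin of 0.005 and a step of 0.00001, several neighbouring samples near each root will typically be recorded.

#include <cmath>
#include <iostream>
#include <vector>

double fx(double x) { return std::cos(2 * x) - 0.4 * x; }

// -1, 0, or 1 depending on the sign of a
int sign(double a) { return (a > 0) - (a < 0); }

int main() {
    const double lo = -4.0, hi = 6.5;
    const double interval = (hi - lo) / 50;
    const double step = 0.00001;
    const double max_error = 0.005;
    std::vector<double> results;

    for (double start = lo; start + interval <= hi; start += interval) {
        const double end = start + interval;
        if (sign(fx(start)) != sign(fx(end))) {        // fix 1: compare f(x), not x
            for (double s = start; s < end; s += step)
                if (std::fabs(fx(s)) < max_error)      // fix 2: error margin, not == 0
                    results.push_back(s);
        }
    }

    for (double r : results) std::cout << r << '\n';
}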


How is it possible for some code to take more time to run given the same inputs seemingly just because it's in a loop?

Prelude/Context: I've just started learning C++ and decided to write some code that applies a single-qubit gate to a quantum register, where the register is held in an array called amplitudes and the four elements of the single-qubit gate are a, b, c, d. I tried to write a version that avoids an if statement that appeared in my first pass, and to my initial delight it seemed to give a slight performance enhancement (~10%). If I change the number of qubits in the register or which qubit I target with the gate, I get a similar result. I then tried to make a loop that would run timing comparisons for various target qubits, and something very strange (to me at least) happened: the alternative function I wrote that avoids the if statement doubled its execution time (from ~0.23 to ~0.46 seconds), whereas the function with the if statement was unaffected (~0.25 seconds). This leads me to my question:
How can code that receives the same inputs in either case take longer to execute inside a loop that iterates over those inputs?
For example, if I run a test with 25 qubits and target qubit 1, the "no if" function wins. But if I wrap the comparison in a while loop over each value of target starting at 1, the "no if" function takes double the time even on the first iteration, when it receives input identical to the earlier case. Interestingly, if I keep the while loop but make it infinite, by putting true in the while condition or by commenting out the increment target += 1, the function no longer takes double time. From what I can tell, the phenomenon requires both the loop and the increment.
Code below, in case this is a simple coding error in a language I'm less familiar with. I'm using Visual Studio 2017 Community Edition with all default settings, except that I'm using the "release" build for faster code execution. Uncommenting the while statement and the corresponding closing curly brace makes the "no if" timing double.
#include "stdafx.h"
#include <iostream>
#include <time.h>
#include <complex>

void matmulpnoif(std::complex<float> arr[], std::complex<float> out[], int numqbits,
                 std::complex<float> a, std::complex<float> b,
                 std::complex<float> c, std::complex<float> d, int target)
{
    long length = 1 << numqbits;
    long offset = 1 << (target - 1);
    long state = 0;
    while (state < length)
    {
        out[state] = arr[state] * a + arr[state + offset] * b;
        out[state + offset] = arr[state] * c + arr[state + offset] * d;
        state += 1 + offset * (((state % offset) + 1) / offset);
    }
}

void matmulpsingle(std::complex<float> arr[], std::complex<float> out[], int numqbits,
                   std::complex<float> a, std::complex<float> b,
                   std::complex<float> c, std::complex<float> d, int target)
{
    long length = 1 << numqbits;
    int shift = target - 1;
    long offset = 1 << shift;
    for (long state = 0; state < length; ++state)
    {
        if ((state >> shift) & 1)
        {
            out[state] = arr[state - offset] * c + arr[state] * d;
        }
        else
        {
            out[state] = arr[state] * a + arr[state + offset] * b;
        }
    }
}

int main()
{
    using namespace std;
    int numqbits = 25;
    long arraylength = 1 << numqbits;
    complex<float>* amplitudes = new complex<float>[arraylength];
    for (long i = 0; i < arraylength; ++i)
    {
        amplitudes[i] = complex<float>(0., 0.);
    }
    amplitudes[0] = complex<float>(1., 0.);
    complex<float> a(0., 0.);
    complex<float> b(1., 0.);
    complex<float> c(0., 0.);
    complex<float> d(1., 0.);
    int target = 1;
    int repetitions = 10;
    clock_t startTime;
    //while (target <= numqbits) {
    startTime = clock();
    for (int j = 0; j < repetitions; ++j) {
        complex<float>* outputs = new complex<float>[arraylength];
        matmulpsingle(amplitudes, outputs, numqbits, a, b, c, d, target);
        delete[] outputs;
    }
    cout << float(clock() - startTime) / (float)(CLOCKS_PER_SEC * repetitions) << " seconds." << endl;
    startTime = clock();
    for (int k = 0; k < repetitions; ++k) {
        complex<float>* outputs = new complex<float>[arraylength];
        matmulpnoif(amplitudes, outputs, numqbits, a, b, c, d, target);
        delete[] outputs;
    }
    cout << float(clock() - startTime) / (float)(CLOCKS_PER_SEC * repetitions) << " seconds." << endl;
    target += 1;
    //}
    delete[] amplitudes;
    return 0;
}
Unfortunately, I cannot post comments yet, so I'll post this here even though it may not be a complete answer.
In general, the question you pose is difficult. The compiler performs optimisations, and the two cases are different code, so they get optimised differently.
On my machine, for instance (Linux, GCC 7.3.1), with only -O3 enabled, matmulpnoif is always faster (2.4s or 4.2s vs 4.8s, depending on whether the loop is there or not; these times were not measured with clock()). If I had to guess what happens in this case, the compiler might realise that offset is always one and optimise the remainder operation away (division is by far the most expensive operation you have in there). But it could be a combination of other things as well.
Another thing to note: clock() should NOT be used to measure time. It counts clock ticks of processor time, so, for instance, if you parallelise the code across 2 threads, the number will be twice the elapsed time (assuming your code doesn't wait anywhere, which does not appear to be the case on my machine). If you wish to measure wall-clock time, have a look at <chrono>; the high_resolution_clock should do the trick.
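As a rough illustration, a minimal sketch of timing with <chrono>; the loop body here is just a placeholder workload standing in for whichever matmul call you want to measure:

#include <chrono>
#include <cmath>
#include <iostream>

int main()
{
    auto t0 = std::chrono::high_resolution_clock::now();

    // placeholder workload; substitute the call you actually want to time
    volatile double sink = 0.0;
    for (int i = 0; i < 10000000; ++i)
        sink = sink + std::sqrt((double)i);

    auto t1 = std::chrono::high_resolution_clock::now();
    std::chrono::duration<double> elapsed = t1 - t0;
    std::cout << elapsed.count() << " seconds." << std::endl;
    return 0;
}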
Another side note: there is no need to keep allocating and deallocating the output array; you can allocate one and reuse it, so you waste less time. But above all, since you're using C++, I suggest you put all of this in a class. As it stands you are passing many parameters to each function, which can make things both harder to read and slower if you pass a lot of data (as it gets copied).
And a second note: since you are using bit shifts, it might be safer to use unsigned variables, because the right shift >> does not have a strict definition of what it pads with for signed variables. At the very least it's something to keep in mind; it might be padding 1s on that side.
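A quick sketch of the difference (assuming 32-bit integers; the signed result was implementation-defined before C++20, so it may vary):

#include <cstdint>
#include <iostream>

int main()
{
    // Unsigned right shift is a logical shift: it always pads with 0s.
    std::uint32_t u = 0x80000000u;
    std::cout << (u >> 4) << std::endl;   // 134217728 (0x08000000)

    // For a negative signed value the padding was implementation-defined
    // before C++20 (most compilers do an arithmetic shift, padding with 1s).
    std::int32_t s = -16;
    std::cout << (s >> 2) << std::endl;   // typically -4
    return 0;
}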

Gaussian integral in C++ is not working. Why?

Here is my code:
#include <cmath>

const double kEps(0.00000001);

double gaussianIntegral(double x)
{
    double res(x), numerator(x);
    // std::fabs, not plain abs: integer abs would truncate the term
    for (unsigned int i(1), k(3); (std::fabs(numerator / k) > kEps) || (res < 0); ++i, k += 2)
    {
        numerator *= (-x * x / i);
        res += numerator / k;
    }
    return res;
}
Here is what I am trying to compute:
\int_0^x e^{-t^2}\,dt = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}
When I try to pass 30 as an argument, the computation runs forever. What is wrong? I am very stuck; it seems to me like there is no error and everything should work just fine, but it doesn't.
Although the Taylor series formally converges, in practice you will run into machine-precision limits even for arguments as small as 10 (and that is with kEps = 0).
It's better to use std::erf (C++11), scaled appropriately, or, if it's homework, look up an algorithm for computing the erf function, e.g. here: https://math.stackexchange.com/questions/97/how-to-accurately-calculate-the-error-function-erfx-with-a-computer
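A minimal sketch of the scaled-erf approach, using the identity that the integral of exp(-t^2) from 0 to x equals sqrt(pi)/2 * erf(x):

#include <cmath>
#include <iostream>

// std::erf (C++11) computes erf(x); scaling by sqrt(pi)/2 gives the integral.
double gaussianIntegral(double x)
{
    const double pi = std::acos(-1.0);
    return std::sqrt(pi) / 2.0 * std::erf(x);
}

int main()
{
    std::cout << gaussianIntegral(30.0) << std::endl;  // ~0.886227, i.e. sqrt(pi)/2
    return 0;
}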

How does the cout statement affect the output of the code?

#include <iostream>
#include <iomanip>
#include <math.h>
using namespace std;

int main() {
    int t;
    double n;
    cin >> t;
    while (t--)
    {
        cin >> n;
        double x;
        for (int i = 1; i <= 10000; i++)
        {
            x = n * i;
            if (x == ceilf(x))
            {
                cout << i << endl;
                break;
            }
        }
    }
    return 0;
}
For input:
3
5
2.98
3.16
the output is:
1
If my code is:
#include <iostream>
#include <iomanip>
#include <math.h>
using namespace std;

int main() {
    int t;
    double n;
    cin >> t;
    while (t--)
    {
        cin >> n;
        double x;
        for (int i = 1; i <= 10000; i++)
        {
            x = n * i;
            cout << "";  // only this statement is added
            if (x == ceilf(x))
            {
                cout << i << endl;
                break;
            }
        }
    }
    return 0;
}
For the same input, the output is:
1
50
25
The only extra line added in the second version is cout << "";.
Can anyone please help me find out why there is such a difference in output just because of the cout statement added in the second version?
Well, this is a veritable Heisenbug. I've tried to strip your code down to a minimal replicating example, and ended up with this (http://ideone.com/mFgs0S):
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    float n;
    cin >> n;            // this input is needed to reproduce, but the value doesn't matter
    n = 2.98;            // overwrite the input value
    cout << "";          // comment this out => y = z = 149
    float x = n * 50;    // 149
    float y = ceilf(x);  // 150
    cout << "";          // comment this out => y = z = 150
    float z = ceilf(x);  // 149
    cout << "x:" << x << " y:" << y << " z:" << z << endl;
}
The behaviour of ceilf appears to depend on the particular sequence of iostream operations around it. Unfortunately I don't have the means to debug in any more detail at the moment, but maybe this will help someone else figure out what's going on. Regardless, it seems almost certain that it's a bug in gcc-4.9.2 and gcc-5.1. (You can check on ideone that you don't get this behaviour in gcc-4.3.2.)
You're probably hitting an issue with floating-point representation, which is to say that computers cannot perfectly represent all fractions. So while you expect an exact integer like 149, the stored result is probably something closer to 149.00000000001. This is a pretty common problem you'll run across when dealing with doubles and floats.
A common way to deal with it is to define a very small constant (in mathematical terms this is epsilon, a number that is simply "small enough").
const double EPSILON = 0.000000001;
And then your comparison will change from
if (x==ceilf(x))
to something like
double difference = fabs(x - ceilf(x));
if (difference < EPSILON)
This will smooth out those tiny inaccuracies in your doubles.
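Putting the pieces together, here is a minimal sketch of the loop with the epsilon comparison applied. I've used std::round rather than ceilf, so x is compared against its nearest integer; the epsilon value is the one suggested above:

#include <cmath>
#include <iostream>

int main() {
    const double EPSILON = 0.000000001;
    int t;
    std::cin >> t;
    while (t--) {
        double n;
        std::cin >> n;
        for (int i = 1; i <= 10000; i++) {
            double x = n * i;
            // instead of x == ceilf(x): is x within EPSILON of an integer?
            if (std::fabs(x - std::round(x)) < EPSILON) {
                std::cout << i << std::endl;
                break;
            }
        }
    }
    return 0;
}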
"Comparing for equality
Floating point math is not exact. Simple values like 0.2 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations can change the result. Different compilers and CPU architectures store temporary results at different precisions, so results will differ depending on the details of your environment. If you do a calculation and then compare the results against some expected value it is highly unlikely that you will get exactly the result you intended.
In other words, if you do a calculation and then do this comparison:
if (result == expectedResult)
then it is unlikely that the comparison will be true. If the comparison is true then it is probably unstable – tiny changes in the input values, compiler, or CPU may change the result and make the comparison be false."
From http://www.cygnus-software.com/papers/comparingfloats/Comparing%20floating%20point%20numbers.htm
Hope this answers your question.
Also, you had a problem with
if(x==ceilf(x))
ceilf() returns a float, while you declared x as a double. Refer to the problems with floating-point comparison above for why that won't work.
Change x to float and the program runs fine.
I gave this a plain try on my laptop and on several online compilers.
g++ (4.9.2-10) gave the desired output (3 outputs), as did the online compiler at geeksforgeeks.org. However, ideone and codechef did not give the right output.
All I can infer is that the online compilers that name their compiler "C++ (gcc)" give the wrong output, while geeksforgeeks.org, which names its compiler "C++", runs perfectly, as does g++ (tested on Linux).
So we could arrive at the hypothesis that they use gcc rather than g++ to compile the C++ code. :)

How to print a double value that is just less than another double value?

Actually, I am working on range expressions in C++. So what I want is: if I have an expression like
x<1
Then my
double getMax(...);
should return the double value that comes just before 1.000 (in double precision) on the number line.
I tried doing this
double getMax(double& a)
{
    return (a - numeric_limits<double>::min());
}
But I am still getting the same value as a from the return statement. I think C++ is converting it to the nearest double in the cout statement.
int main()
{
    double a = 32;
    cout << scientific << getMax(a) << endl;
    return 0;
}
output:
3.200000e+001
First of all, you need to ensure that you actually print sufficiently many digits to ensure all representable values of double are displayed. You can do this as follows (make sure you #include <iomanip> for this):
std::cout << std::scientific << std::setprecision(std::numeric_limits<double>::max_digits10) << getMax(a) << std::endl;
Secondly, numeric_limits<>::min is not appropriate for this. If your starting value is 1.0, you can use numeric_limits<double>::epsilon, which is the smallest difference from 1.0 that is representable.
However, in your code example, the starting value is 32. Epsilon does not necessarily work for that. Calculating the right epsilon in this case is difficult.
However, if you can use C++11(*), there is a function in the <cmath> header that does what you need: std::nextafter.
#include <iostream>
#include <limits>
#include <iomanip>
#include <cmath>

double getMax(double a)
{
    return std::nextafter(a, std::numeric_limits<double>::lowest());
}

int main()
{
    double a = 32;
    std::cout << std::scientific
              << std::setprecision(std::numeric_limits<double>::max_digits10)
              << getMax(a)
              << std::endl;
    return 0;
}
I've also put it on liveworkspace.
To explain:
double nextafter(double from, double to);
returns the next representable value of from in the direction of to. So I specified std::numeric_limits<double>::lowest() in my call to ensure you get the next representable value less than the argument.
(*)See Tony D's comment below. You may have access to nextafter() without C++11.
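As a quick check of that behaviour, a small sketch printing the representable neighbours of 1.0 (the digit strings in the comments assume IEEE 754 doubles):

#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << std::nextafter(1.0, 0.0) << std::endl   // 0.99999999999999989
              << std::nextafter(1.0, 2.0) << std::endl;  // 1.0000000000000002
    return 0;
}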
I think you've got the right idea.
Check out Setting the precision of a double without using stream (ios_base::precision), not so much for the question as for the examples it gives of using precision. You might want to try something like printing with a precision of 53.
The way I usually see "close to but not quite" handled involves setting a difference threshold (typically called epsilon). In that case, you wouldn't use a getMax function, but would build the epsilon into your less-than comparison. (You could write a class holding the epsilon value and overload the operators; I tend to avoid operator overloading like the plague.)
Basically, you'd need:
bool lessThanEpsilon(double number, double lessThan, double epsilon)
{
    return (lessThan - number >= epsilon);
}
There are other varieties, of course; an equality check would test std::fabs(number - other) < epsilon.

Benchmarking math.h square root and Quake square root

Okay, so I was bored and wondered how fast the math.h square root was in comparison to the one with the magic number in it (made famous by Quake, but made by SGI).
But this has ended up in a world of hurt for me.
I first tried this on the Mac, where math.h would win hands down every time, then on Windows, where the magic number always won, but I think this is all down to my own noobness.
Compiling on the Mac with "g++ -o sq_root sq_root_test.cpp", the program takes about 15 seconds to complete. But compiling in VS2005 on release, it takes a split second (in fact I had to compile in debug just to get it to show some numbers).
My poor man's benchmarking: is this really stupid? Because I get 0.01 for math.h and 0 for the magic number (it can't be that fast, can it?).
I don't know if this matters, but the Mac is Intel and the PC is AMD. Is the Mac using hardware for the math.h square root?
I got the fast square root algorithm from http://en.wikipedia.org/wiki/Fast_inverse_square_root
//sq_root_test.cpp
#include <iostream>
#include <math.h>
#include <ctime>

float invSqrt(float x)
{
    union {
        float f;
        int i;
    } tmp;
    tmp.f = x;
    tmp.i = 0x5f3759df - (tmp.i >> 1);
    float y = tmp.f;
    return y * (1.5f - 0.5f * x * y * y);
}

int main() {
    std::clock_t start;
    std::clock_t end;
    float rootMe;
    int iterations = 999999999;

    // ---
    rootMe = 2.0f;
    start = std::clock();
    std::cout << "Math.h SqRoot: ";
    for (int m = 0; m < iterations; m++) {
        (float)(1.0 / sqrt(rootMe));
        rootMe++;
    }
    end = std::clock();
    std::cout << (difftime(end, start)) << std::endl;

    // ---
    std::cout << "Quake SqRoot: ";
    rootMe = 2.0f;
    start = std::clock();
    for (int q = 0; q < iterations; q++) {
        invSqrt(rootMe);
        rootMe++;
    }
    end = std::clock();
    std::cout << (difftime(end, start)) << std::endl;
}
There are several problems with your benchmark. First, it includes potentially expensive type conversions: sqrt from math.h takes a double, so each call converts your float to double and casts the result back. If you want to know what a square root costs, you should benchmark square roots, not datatype conversions.
Second, your entire benchmark can be (and is) optimized out by the compiler because it has no observable side effects. You don't use the returned value (or store it in a volatile memory location), so the compiler sees that it can skip the whole thing.
A clue here is that you had to disable optimizations. That means your benchmarking code is broken. Never ever disable optimizations when benchmarking. You want to know which version runs fastest, so you should test it under the conditions it'd actually be used under. If you were to use square roots in performance-sensitive code, you'd enable optimizations, so how it behaves without optimizations is completely irrelevant.
Also, you're not benchmarking the cost of computing a square root, but of the inverse square root.
If you want to know which way of computing the square root is fastest, you have to move the 1.0/... division down to the Quake version. (And since division is a pretty expensive operation, this might make a big difference in your results.)
Finally, it might be worth pointing out that Carmack's little trick was designed to be fast on 12-year-old computers. Once you fix your benchmark, you'll probably find that it's no longer an optimization, because today's CPUs are much faster at computing "real" square roots.
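For reference, a minimal sketch of a repaired benchmark along those lines: the results feed a volatile sink so the loops can't be optimized away, both sides compute the inverse square root, timing uses <chrono>, and the iteration count stays below 2^24 so the float counter still increments exactly. Compile with optimizations enabled (e.g. -O2).

#include <chrono>
#include <cmath>
#include <iostream>

// same magic-number routine as in the question (type punning via union)
float invSqrt(float x)
{
    union {
        float f;
        int i;
    } tmp;
    tmp.f = x;
    tmp.i = 0x5f3759df - (tmp.i >> 1);
    float y = tmp.f;
    return y * (1.5f - 0.5f * x * y * y);
}

template <typename F>
double timeIt(F f, int iterations)
{
    volatile float sink = 0.0f;  // observable side effect: keeps the loop alive
    float x = 2.0f;
    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iterations; i++) {
        sink = sink + f(x);
        x++;
    }
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const int iterations = 10000000;  // < 2^24, so x++ never loses precision
    std::cout << "math.h 1/sqrt: "
              << timeIt([](float v) { return 1.0f / std::sqrt(v); }, iterations)
              << " s\n";
    std::cout << "Quake invSqrt: "
              << timeIt(invSqrt, iterations)
              << " s\n";
    return 0;
}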