What is '-1.#IND'? - c++

I have some code that allows me to draw a crescent moon shape, and I have extracted the values to Excel to be drawn. However, in place of some numbers there is -1.#IND. Firstly, could anyone explain what this means, as Google came back with 0 links? And secondly, is there any way to stop it from occurring?
Below is a brief excerpt of my code. I have lots more code besides this, but the rest is just calculating angles.
for(int j=0; j<=n; j++) // calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R1)/n; // calculate x coordinate
    Ynew2 = Y*(pow(1-(pow((Xnew2/X),2)),0.5));
    if(abs(Ynew2) <= R1)
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
}
MORE INFO
I'm now having a problem with this code:
for(int i=0; i<=n; i++) // calculation for angles and output displayed to user
{
    Xnew = -i*(Y+R1)/n; // calculate x coordinate
    Ynew = pow((((Y+R1)*(Y+R1)) - (Xnew*Xnew)), 0.5); // calculate y coordinate
AND
for(int j=0; j<=n; j++) // calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R1)/((n)+((0.00001)*(n==0))); // calculate x coordinate
    Ynew2 = Y*(pow(abs(1-(pow((Xnew2/X),2))),0.5));
    if(abs(Ynew2) <= R1)
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
The problem I'm having drawing the crescent moon is that I cannot get the two circles to share the same starting point. In other words, I am trying to draw the crescent from two circular arcs that have the same start and end points. The only user input I have to work with is the radius and the chosen centre point.
If anyone has any suggestions on how to do this it would be great; currently all I am getting is a 'half doughnut' shape, because the two arcs are not connected.

#IND indicates an indeterminate form.
What you have there is something known as 'Not a Number', or NaN for short.
Quoting from Wikipedia, a NaN is generated by:
Operations with a NaN as at least one operand.
The divisions 0/0 and ±∞/±∞
The multiplications 0×±∞ and ±∞×0
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions
The square root of a negative number.
The logarithm of a negative number
The inverse sine or cosine of a number that is less than −1 or greater than +1.
You're doing at least one of those things.
Edit:
After analyzing your code, there are two possibilities:
When n == 0, then in the first iteration (where j == 0 too) Xnew2 is 0/0, which gives -1.#IND
When |Xnew2| is greater than |X|, the value under the square root is negative, so Ynew2 would be complex -> NaN
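For illustration, here is a minimal sketch that guards against both cases. The values of X, Y, R1 and n are made up, since only fragments of the original code are shown; the idea is simply to skip an iteration when n is 0 or when the term under the square root would be negative, and to double-check with std::isnan:

#include <cmath>
#include <iostream>

int main()
{
    // Hypothetical inputs standing in for the question's X, Y, R1 and n.
    double X = 4.0, Y = 3.0, R1 = 2.0;
    int n = 10;

    for (int j = 0; j <= n; j++)
    {
        if (n == 0)            // avoid 0/0 when both j and n are 0
            break;
        double Xnew2 = -j * (Y + R1) / n;

        double t = 1.0 - std::pow(Xnew2 / X, 2);
        if (t < 0.0)           // sqrt of a negative number would give NaN (-1.#IND)
            continue;

        double Ynew2 = Y * std::sqrt(t);
        if (std::isnan(Ynew2)) // belt and braces: skip anything that still went wrong
            continue;

        if (std::fabs(Ynew2) <= R1)
            std::cout << "\n(" << Xnew2 << ", " << Ynew2 << ")" << std::endl;
    }
}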

You are doing something illegal to a floating point number, such as taking the square root of a negative number. This is presumably on Windows. On Linux, you would get NaN (not a number) or inf. See -1 #IND Question for further information; the link provided in the second answer is helpful.
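For reference, a tiny C++ demo of how these values arise and how to detect them with std::isnan / std::isinf. The exact text printed (e.g. -1.#IND versus nan) depends on the compiler and runtime:

#include <cmath>
#include <iostream>

int main()
{
    double zero = 0.0;
    double nan_value = std::sqrt(-1.0); // invalid operation -> NaN ("-1.#IND" on older MSVC)
    double inf_value = 1.0 / zero;      // division by zero -> infinity ("1.#INF")

    std::cout << nan_value << " " << inf_value << "\n";
    std::cout << std::boolalpha
              << std::isnan(nan_value) << " "   // true
              << std::isinf(inf_value) << "\n"; // true
}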

This is from the Wikipedia entry for IEEE 754 NaN:
There are three kinds of operations that can return NaN:
Operations with a NaN as at least one operand.
Indeterminate forms
The divisions 0/0 and ±∞/±∞
The multiplications 0×±∞ and ±∞×0
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions
The standard has alternative functions for powers:
The standard pow function and the integer exponent pown function define 0^0, 1^∞, and ∞^0 as 1.
The powr function defines all three indeterminate forms as invalid operations and so returns NaN.
Real operations with complex results, for example:
The square root of a negative number.
The logarithm of a negative number
The inverse sine or cosine of a number that is less than −1 or greater than +1.

You are raising a negative number to a fractional power (e.g. 1/2, 0.5, 0.25, 0.333333, etc.), which results in a complex number, like sqrt(-1), a.k.a. (-1)^(0.5).
Additionally, you could also be evaluating 0/0 in two of your lines of code.
Use this code instead. It takes the absolute value of the power's base, which prevents negative values and therefore imaginary answers (complex results = NaN = -1.#IND). It also prevents you from dividing by 0: if n == 0, it adds 0.00001 to the denominator.
for(int j=0; j<=n; j++) // calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R1)/((n)+((0.00001)*(n==0))); // calculate x coordinate
    Ynew2 = Y*(pow(abs(1-(pow((Xnew2/X),2))),0.5));
    if(abs(Ynew2) <= R1)
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
}
for(int j=0; j<=n; j++) // same loop for the second set of coordinates
{
    Xnew3 = -j*(Y+R1)/((n)+((0.00001)*(n==0))); // calculate x coordinate
    Ynew3 = Y*(pow(abs(1-(pow((Xnew3/X),2))),0.5)); // calculate y coordinate
    if(abs(Ynew3) <= R1)
        cout<<"\n("<<Xnew3<<", "<<Ynew3<<")"<<endl; // show x,y coordinates
}
* In the future, avoid taking roots of negative numbers (which is the same as raising a negative number to a fractional power), avoid taking the logarithm of a negative number, and avoid dividing by 0; these all produce NaN (-1.#IND).
This code may be better (it uses a comparison factor to clamp the power's base to zero whenever it would be negative, preventing imaginary answers):
for(int j=0; j<=n; j++) // calculation for angles and output displayed to user
{
    Xnew2 = -j*(Y+R1)/((n)+((0.00001)*(n==0))); // calculate x coordinate
    Ynew2 = Y*(pow((1-(pow((Xnew2/X),2)))*((1-(pow((Xnew2/X),2)))>(0)),0.5));
    if(abs(Ynew2) <= R1)
        cout<<"\n("<<Xnew2<<", "<<Ynew2<<")"<<endl;
}
for(int j=0; j<=n; j++) // same loop for the second set of coordinates
{
    Xnew3 = -j*(Y+R1)/((n)+((0.00001)*(n==0))); // calculate x coordinate
    Ynew3 = Y*(pow((1-(pow((Xnew3/X),2)))*((1-(pow((Xnew3/X),2)))>(0)),0.5)); // calculate y coordinate
    if(abs(Ynew3) <= R1)
        cout<<"\n("<<Xnew3<<", "<<Ynew3<<")"<<endl; // show x,y coordinates
}
* I'd also like to point out what "Magtheridon96" mentioned in his answer. The code now makes sure n is not equal to zero; otherwise you could be dividing by zero, although I think that would produce #INF rather than #IND... unless -j*(Y+R1) is also zero, in which case it would be 0/0, which gives #IND.

Related

Given N lines on a Cartesian plane. How to find the bottommost intersection of lines efficiently?

I have N distinct lines on a Cartesian plane. Since the slope-intercept form of a line is y = mx + c, the slope and y-intercept of each line are given. I have to find the y coordinate of the bottommost intersection of any two lines.
I have implemented an O(N^2) solution in C++, which is the brute-force approach and is too slow for N = 10^5. Here is my code:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<pair<int, int>> lines(n);
    for (int i = 0; i < n; ++i) {
        int slope, y_intercept;
        cin >> slope >> y_intercept;
        lines[i].first = slope;
        lines[i].second = y_intercept;
    }
    double min_y = 1e9;
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            if (lines[i].first == lines[j].first) // since lines are distinct, two lines with the same slope never intersect
                continue;
            double x = (double) (lines[j].second - lines[i].second) / (lines[i].first - lines[j].first); // x-coordinate of intersection point
            double y = lines[i].first * x + lines[i].second;                                             // y-coordinate of intersection point
            min_y = min(y, min_y);
        }
    }
    cout << min_y << endl;
}
How to solve this efficiently?
In case you are considering solving this by means of Linear Programming (LP), it could be done efficiently, since the solution which minimizes or maximizes the objective function always lies in the intersection of the constraint equations. I will show you how to model this problem as a maximization LP. Suppose you have N=2 first degree equations to consider:
y = 2x + 3
y = -4x + 7
then you will set up your simplex tableau like this:
x0   x1   x2   x3    b
-2    1    1    0    3
 4    1    0    1    7
where column x0 holds the negation of the coefficient of x in the original first-degree functions, x1 holds the coefficient of y (generally +1), x2 and x3 form the N-by-N identity matrix (they are the slack variables), and b holds the value of the independent term. In this case, the constraints are subject to the <= operator.
Now, the objective function should be:
x0 x1 x2 x3
1 1 0 0
To solve this LP, you may use the "simplex" algorithm which is generally efficient.
Furthermore, the result will be an array representing the assigned values to each variable. In this scenario the solution is:
x0 x1 x2 x3
0.6666666667 4.3333333333 0.0 0.0
The pair (x0, x1) represents the point you are looking for, where x0 is its x-coordinate and x1 is its y-coordinate. There are other results you could get; for example, there could be no solution. You can find out more in books such as "Linear Programming and Extensions" by George Dantzig.
Keep in mind that the simplex algorithm only works for non-negative values of x0, x1, ..., xn. This means that before applying the simplex, you must make sure the optimum point you are looking for is not outside of the feasible region.
EDIT 2:
I believe making the problem feasible could be done easily in O(N) by shifting the original functions into a new position, by adding a large constant to the independent term of each function. Check my comment below. (EDIT 3: this means it won't work for every possible scenario, though it is quite easy to implement. If you want an exact answer in every scenario, see the following explanation of how to convert the infeasible quadrants into the feasible one and back.)
EDIT 3:
A better method for this problem, one capable of precisely finding the minimum point even if it lies on the negative side of either x or y, is to convert the other three quadrants into quadrant 1.
Consider the following generic first degree function template:
f(x) = mx + k
Consider the following generic cartesian plane point template:
p = (p0, p1)
Converting a function and a point from y-negative quadrants to y-positive:
y_negative_to_y_positive( f(x) ) = -mx - k
y_negative_to_y_positive( p ) = (p0, -p1)
Converting a function and a point from x-negative quadrants to x-positive:
x_negative_to_x_positive( f(x) ) = -mx + k
x_negative_to_x_positive( p ) = (-p0, p1)
Summarizing:
Quadrant     sign of corresponding (x, y)     converting f(x) or p to Q1
Quadrant 1   (+, +)                           f(x)
Quadrant 2   (-, +)                           x_negative_to_x_positive( f(x) )
Quadrant 3   (-, -)                           y_negative_to_y_positive( x_negative_to_x_positive( f(x) ) )
Quadrant 4   (+, -)                           y_negative_to_y_positive( f(x) )
Now convert the functions from quadrants 2, 3 and 4 into quadrant 1. Run the simplex 4 times: once on the original quadrant 1 and three more times on the converted quadrants 2, 3 and 4. For the cases originating from a y-negative quadrant, you will need to model your simplex as a minimization instance, with negative slack variables, which turns your constraints into the >= format. I will leave the details of modelling the same problem as a minimization task to you.
Once you have the results for each quadrant, you will have at most 4 points at hand (because you might find out, for example, that there is no point in a specific quadrant). Convert each of them back to its original quadrant, reversing the original conversion in an analogous manner.
Now you may freely compare the 4 points with each other and decide which one is the one you need.
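As a sketch of the conversions described above (the Line/Point types and the function names are illustrative, not from the question):

// Illustrative types for the transformations described above.
struct Line  { double m, k; };   // f(x) = m*x + k
struct Point { double x, y; };

// Mirror across the x-axis: y-negative quadrants -> y-positive.
Line  y_neg_to_y_pos(Line f)  { return { -f.m, -f.k }; }
Point y_neg_to_y_pos(Point p) { return { p.x, -p.y }; }

// Mirror across the y-axis: x-negative quadrants -> x-positive.
Line  x_neg_to_x_pos(Line f)  { return { -f.m, f.k }; }
Point x_neg_to_x_pos(Point p) { return { -p.x, p.y }; }

// Example: map a line into quadrant 1 as if it came from quadrant 3,
// and map a solution point found there back to quadrant 3.
Line  to_q1_from_q3(Line f)   { return y_neg_to_y_pos(x_neg_to_x_pos(f)); }
Point back_to_q3(Point p)     { return x_neg_to_x_pos(y_neg_to_y_pos(p)); }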
EDIT 1:
Note that the number N of first-degree functions can be as large as you wish.
Other methods for solving this problem could be better.
EDIT 3: Check the complexity of the simplex algorithm; in the average case it works efficiently.
Cheers!

c++ half library has lower precision for positive numbers

I know that I am using a capability not built into C++; however, this library seems to be so commonly used that I am surprised to see this error pop up.
For those of you who do not know about the library it can be found here. Essentially, it is supposed to allow the support of 16 bit floating point (lower precision) numbers.
My problem is that the precision of half floats appears to diminish for positive numbers.
In this code, I am generating a bunch of points to be rendered to the screen. {xs1, ys1} holds the single-precision float calculation of the sigmoid. {xs3, ys3} holds the same values after casting to half precision and back to float.
vector<float> xs1, ys1, xs3, ys3;
int res = 200000;
for (int i = 0; i < res; i++)
{
    float perc = float(i) / float(res);
    float fx = ((perc - 0.5) * 2.0) * 8.0;
    half hx = half(fx);
    float fy = MFunctions::sigmoid(fx);
    half hy = half(fy);
    xs1.push_back(fx);
    ys1.push_back(fy);
    xs3.push_back(float(hx));
    ys3.push_back(float(hy));
}
Here are the results (looking at zoomed in portions of the graph this generates with a window width of 2.2 and a window height of 0.02 units):
When looking at the float-precision graph {xs1, ys1}, both corners of the sigmoid function are smooth:
However, when looking at the half-precision graph {xs3, ys3}, the corner on the positive x axis shows a stepping effect, while the corner on the negative x axis shows a lower-resolution but smooth graph:
I am not sure why this is happening since the only difference between positive and negative numbers should be a sign bit.
Is there something wrong that I am doing or is this a flaw in the half library?
The sigmoid function's output values lie in [0, 1], so what you see is normal: in the bottom picture the values are around 1, where the absolute precision of a half is much lower than around 0.
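To make the spacing concrete, here is a small check, assuming the half.hpp header of the library linked in the question (half_float::half). half_gap is a helper written for this illustration, not part of the library; it nudges a float upward until the converted half changes, revealing the gap between adjacent half values:

#include <iostream>
#include "half.hpp"   // assumed: the half_float library from the question

using half_float::half;

// Find the gap between a half value and the next larger representable half
// by nudging the float input until the converted half changes.
float half_gap(float v)
{
    half h(v);
    float probe = v;
    while (half(probe) == h)
        probe += 1e-7f;
    return float(half(probe)) - float(h);
}

int main()
{
    std::cout << "gap near y = 0.001: " << half_gap(0.001f) << "\n"; // roughly 1e-6
    std::cout << "gap near y = 0.999: " << half_gap(0.999f) << "\n"; // roughly 5e-4
}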

Evenly distribute grid cells horizontally/vertically?

I'm trying to draw a grid inside a window of game_width=640 and game_height=480. The number of grid cells is predefined. I want to evenly distribute the cells horizontally and vertically.
void GamePaint(HDC dc)
{
    int numcells = 11;
    for(int i = 1; i <= numcells; i++)
    {
        int y = <INSERT EQUATION>;
        MoveToEx(dc, 0, y, NULL);
        LineTo(dc, game_width, y);
    }
    // solving the horizontal equation will in turn solve the vertical one, so no need to show the vertical code
}
The first equation came to mind was:
i * (game_height / numcells)
The notion behind this is to divide the total height by the number of cells to get the even cell size, which is then multiplied by i in each iteration of the loop to get the correct y coordinate of the beginning of the horizontal line.
The problem with that is that it seems to leave an extra small cell at the end:
I figured this must have something to do with that integral division, so we come to the second equation:
(int)(i * ((float)game_height / numcells))
The idea is to avoid the integer division, do a float division, multiply by i like before and cast the result back to int. This works well, no extra small cell at the end!
What's driving me nuts, is this equation:
i * game_height / numcells
which seems to have the same effect as the previous equation, but of course with the added benefit of not doing any casting. I can't figure out why this doesn't suffer from the integer division issues in the first equation.
Note that mathematically: X * (Y / Z) == X * Y / Z
So there's definitely a problem with integer division in the first equation.
Here's a video watching these 3 equations in the debugger watch window.
You can see an increasing gap between the result of the first equation vs the second and third (which always had the same result) as i increases.
Why does the 3rd equation give the same correct results as the second equation and not suffer from the integer division error in the first equation? I just can't seem to wrap my head around it...
Any help is appreciated.
The answer is in the order of operations performed. The first equation
int y = i * (game_height / numcells);
performs the division first, and magnifies the rounding error by multiplying with i. The further you move down/to the right, the larger the accumulated rounding error becomes.
The last equation
int y = i * game_height / numcells;
is evaluated left to right. The relative error gets smaller, as you are dividing larger numbers (i * game_height). You still have a rounding error, but it doesn't accumulate. The reason you do not wind up with extra space at the final evaluation is that i is then equal to numcells, so the two essentially cancel out; you will always see y == game_height in the final iteration.
Using floating point operations is still more accurate: While the rounding error using integer math is in the interval [0 .. numcells) for each line, floating point math reduces that to [0 .. 1). You will see more evenly distributed lines using floating point math.
Note: On Windows you can use MulDiv instead of the integer equation, to prevent common errors such as transient overflows.
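For illustration, a minimal console sketch (using the question's game_height = 480 and numcells = 11; MulDiv comes from windows.h) that prints what the three equations and MulDiv produce for each i:

#include <windows.h>
#include <iostream>

int main()
{
    const int game_height = 480;
    const int numcells = 11;

    for (int i = 1; i <= numcells; i++)
    {
        int y1 = i * (game_height / numcells);                                   // divides first, error grows with i
        int y2 = static_cast<int>(i * (static_cast<float>(game_height) / numcells)); // float division, then truncate
        int y3 = i * game_height / numcells;                                     // multiplies first, error stays below one pixel
        int y4 = MulDiv(i, game_height, numcells);                               // like y3, but rounds to nearest and guards intermediate overflow
        std::cout << y1 << "\t" << y2 << "\t" << y3 << "\t" << y4 << "\n";
    }
}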

In c++, how do I create point locations arbitrarily?

I'm trying to program a simulation. Originally I'd randomly create points like so...
for (int c = 0; c < number; c++){
    for(int d = 0; d < 3; d++){
        coordinate[c][d] = randomrange(low, high);
    }
}
Here randomrange() is an arbitrary range randomizer, number is the number of points created, and d indexes the x, y, z coordinates. It works, but I want to take things further. How would I define a known shape? Say I want 80 points on a circle's circumference, or 500 that form the edges of a cube. I can explain it well on paper, but have a problem describing the process in code. It doesn't pertain to the question, but I end up writing the points to a txt file and then use MATLAB's scatter3 to plot them. Creating the "shape" points is my issue.
Both a circle and the set of a cube's edges are one-dimensional sets, so you can represent them as real intervals. For a circle it's straightforward: use the interval (0, 2*pi) and transform a random value phi from the interval into a point:
xcentre + R cos(phi), ycentre + R sin(phi)
For a cube you have 12 segments, so use interval (0, 12) and split a random number from the interval into an integer part and a fraction. Then use the integer as an edge number and the fraction as a position inside the edge.
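A sketch of both parameterizations, assuming a randomrange() helper like the question's and an axis-aligned cube given by a corner and an edge length (that layout is an assumption, not from the question):

#include <array>
#include <cmath>
#include <cstdlib>

const double PI = 3.14159265358979323846;

// Stand-in for the question's randomrange(): uniform double in [low, high).
double randomrange(double low, double high)
{
    return low + (high - low) * (std::rand() / (RAND_MAX + 1.0));
}

// One random point on a circle of radius R centred at (xc, yc), with z = 0.
std::array<double, 3> random_circle_point(double xc, double yc, double R)
{
    double phi = randomrange(0.0, 2.0 * PI);
    return { xc + R * std::cos(phi), yc + R * std::sin(phi), 0.0 };
}

// One random point on the 12 edges of an axis-aligned cube with corner
// (x0, y0, z0) and edge length L.
std::array<double, 3> random_cube_edge_point(double x0, double y0, double z0, double L)
{
    double t = randomrange(0.0, 12.0);
    int edge = static_cast<int>(t);   // which of the 12 edges
    double f = t - edge;              // position along that edge, in [0, 1)

    // Each edge: a starting corner plus a unit direction scaled by f*L.
    static const int corner[12][3] = { {0,0,0},{0,1,0},{0,0,1},{0,1,1},   // 4 edges along x
                                       {0,0,0},{1,0,0},{0,0,1},{1,0,1},   // 4 edges along y
                                       {0,0,0},{1,0,0},{0,1,0},{1,1,0} }; // 4 edges along z
    static const int dir[3][3]     = { {1,0,0},{0,1,0},{0,0,1} };
    const int* c = corner[edge];
    const int* d = dir[edge / 4];
    return { x0 + L * (c[0] + f * d[0]),
             y0 + L * (c[1] + f * d[1]),
             z0 + L * (c[2] + f * d[2]) };
}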
Easy variant:
First think of the min/max x/y values (separately, to reduce the number of rejected values in the step below), generate some coordinates within this range, and then check whether they fulfil e.g. a^2 + b^2 = r^2 (circle).
If not, try again.
Better, but only possible for certain shapes:
Generate a radius between (0-max) and an angle (0-360)
(or just an angle if it should be on the circle border)
and use some math (sin/cos...) to transform it into x and y.
http://en.wikipedia.org/wiki/Polar_coordinate_system

The result of my own double-precision cos() implementation in a shader is NaN, but it works well on the CPU. What is going wrong?

As I said, I want to implement my own double-precision cos() function in a compute shader with GLSL, because there is only a built-in version for float.
This is my code:
double faculty[41]; // values are calculated at the beginning of main()

double myCOS(double x)
{
    double sum, tempExp, sign;
    sum = 1.0;
    tempExp = 1.0;
    sign = -1.0;
    for(int i = 1; i <= 30; i++)
    {
        tempExp *= x;
        if(i % 2 == 0){
            sum = sum + (sign * (tempExp / faculty[i]));
            sign *= -1.0;
        }
    }
    return sum;
}
The result is that the sum turns out to be NaN in the shader, but on the CPU the algorithm works well.
I tried to debug this code too and I got the following information:
faculty[i] is positive and not zero for all entries
tempExp is positive in each step
none of the other variables are NaN during each step
the first time sum is NaN is at the step with i=4
And now my question: what exactly can go wrong if each variable is a number and nothing is divided by zero, especially when the algorithm works on the CPU?
Let me guess:
First you determined the problem is in the loop, and you use only the following operations: +, *, /.
The rules for generating NaN from these operations are:
The divisions 0/0 and ±∞/±∞
The multiplications 0×±∞ and ±∞×0
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions
You ruled out the possibility for 0/0 and ±∞/±∞ by stating that faculty[] is correctly initialized.
The variable sign is always 1.0 or -1.0 so it cannot generate the NaN through the * operation.
What remains is the + operation, if tempExp ever becomes ±∞.
So probably x is too large on entry to your function: tempExp becomes ±∞, which makes sum ±∞ too. At the next iteration you then trigger the NaN-generating operation ∞ + (−∞), because one side of the addition is multiplied by sign, and sign switches between positive and negative as the iterations progress.
You're trying to approximate cos(x) around 0.0. So you should use the properties of the cos() function to reduce your input value to a value near 0.0. Ideally in the range [0, pi/4]. For instance, remove multiples of 2*pi, and get the values of cos() in [pi/4, pi/2] by computing sin(x) around 0.0 and so on.
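As a minimal C++ sketch of the range-reduction idea (not GLSL, and simpler than the full [0, pi/4] scheme suggested above; the function names and the series helper are illustrative, not the question's code): reduce x into [-pi, pi] with fmod before running the Taylor series, so the powers of x stay small.

#include <cmath>

// Taylor series for cos, evaluated only on a reduced argument
// so the powers of x cannot blow up to infinity.
double cos_series(double x)
{
    double sum = 1.0, term = 1.0, sign = -1.0;
    for (int i = 2; i <= 30; i += 2)
    {
        term *= x * x / ((i - 1) * i); // builds x^i / i! incrementally
        sum += sign * term;
        sign = -sign;
    }
    return sum;
}

double myCOS(double x)
{
    const double TWO_PI = 6.283185307179586476925286766559;
    // Reduce to (-2*pi, 2*pi), then to [-pi, pi]; good enough for moderate |x|.
    // Very large arguments would need a more careful, extended-precision reduction.
    x = std::fmod(x, TWO_PI);
    if (x > TWO_PI / 2.0)  x -= TWO_PI;
    if (x < -TWO_PI / 2.0) x += TWO_PI;
    return cos_series(x);
}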
What can go dramatically wrong is a loss of precision. cos(x) usually is implemented by range reduction followed by a dedicated implementation for the range [0, pi/2]. Range reduction uses cos(x+2*pi) = cos(x). But this range reduction isn't perfect. For starters, pi cannot be exactly represented in finite math.
Now what happens if you try something as absurd as cos(1<<30) ? It's quite possible that the range reduction algorithm introduces an error in x that's larger than 2*pi, in which case the outcome is meaningless. Returning NaN in such cases is reasonable.