I am building a neural network and using xtensor for the array multiplication in the feed-forward pass. The network takes in xt::xarray<double> and outputs a decimal number between 0 and 1. I have been given a sheet of expected outputs. When I compare my output with the provided sheet, I find that all the results differ after exactly 7 digits. For example, if the required value is 0.1234567890123456, I get values like 0.1234567-garbage-digits-so-that-the-total-is-16, e.g. 0.1234567993344660 or 0.1234567221155667.
I know I cannot get that exact number 0.1234567890123456 due to floating point math, but how can I debug this or increase the precision to get closer to the required number? Thanks.
Update:
xt::xarray<double> Layer::call(xt::xarray<double> input)
{
return xt::linalg::dot(input, this->weight) + this->bias;
}
For the rest of the code I am simply calling this call method a number of times, where weight and bias are xt::xarray<double> arrays.
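For reference, here is a minimal, self-contained sketch of how such a layer could be exercised and its output printed at full double precision, which makes it easier to see exactly where the computed values and the expected sheet start to diverge. The Layer struct and the concrete weight/bias/input values below are made up for illustration; only the call method mirrors the code above.
#include <iomanip>
#include <iostream>
#include <xtensor/xarray.hpp>
#include <xtensor-blas/xlinalg.hpp>

// Hypothetical layer mirroring the call method from the question.
struct Layer
{
    xt::xarray<double> weight;
    xt::xarray<double> bias;

    xt::xarray<double> call(xt::xarray<double> input)
    {
        return xt::linalg::dot(input, this->weight) + this->bias;
    }
};

int main()
{
    Layer layer;
    layer.weight = xt::xarray<double>{{0.1, 0.2}, {0.3, 0.4}, {0.5, 0.6}}; // shape (3, 2), made up
    layer.bias = xt::xarray<double>{0.01, 0.02};                           // shape (2,), made up

    xt::xarray<double> input = {{1.0, 2.0, 3.0}};                          // shape (1, 3), made up
    xt::xarray<double> output = layer.call(input);

    // Print with 16 significant digits to compare against the expected sheet.
    std::cout << std::setprecision(16) << output(0, 0) << " " << output(0, 1) << "\n";
}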
I'm using Odoo 8 and I imported an Excel file containing float values with 3 digits after the decimal point. After importing the data into Odoo 8, the tree view only displays 2 digits after the decimal point and rounds the values. However, I want to keep the values exactly as they are in the Excel file (without rounding, with 3 digits after the decimal point). Any idea how to do this, please?
You need to change the decimal precision of the field using digits.
digits=(6, 2) specifies the precision of a float number: 6 is the total number of digits, while 2 is the number of digits after the decimal point. Note that this means there can be at most 4 digits before the decimal point.
I'm trying to get the current OpenGL version.
glGetString(GL_VERSION) returns
"4.6.0 NVIDIA 391.01"
std::string strVersion = (const char*)glGetString(GL_VERSION);
strVersion = strVersion.substr(0, strVersion.find(" "));
float number = std::atof(strVersion.c_str());
float number = 4.59999990
Why is the float not 4.6.0?
Why you don't get the third number
std::atof will take as many characters as it can that represent a decimal number. That's 4.6. The next dot cannot be part of the number, because there is no such thing as a decimal number with two dots. Decimal numbers only have one dot, separating the integer and the fractional parts.
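A small sketch illustrating this, using std::strtod (which parses like std::atof but also reports where it stopped); the literal "4.6.0" stands in for the truncated version string:
#include <cstdlib>
#include <cstdio>

int main()
{
    const char* text = "4.6.0";
    char* end = nullptr;

    // strtod parses like atof but also tells us where parsing stopped.
    double value = std::strtod(text, &end);

    std::printf("parsed %g, stopped at \"%s\"\n", value, end); // parsed 4.6, stopped at ".0"
}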
Why you get 4.59999990 instead of 4.6
Because floating point numbers cannot store every possible combination of integer and fractional parts. They have limited space to store information, so they are always just approximations. See Is floating point math broken?.
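A tiny sketch of the effect: the closest float to 4.6 is slightly below it, and printing more digits than the default reveals the approximation.
#include <cstdio>

int main()
{
    // 4.6 has no exact binary representation, so the nearest float is stored.
    float number = 4.6f;
    std::printf("%.8f\n", number);  // typically prints 4.59999990
}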
How to get the version
A version is not a number. That version consists of three numbers, not one: 4, 6 and 0. They are integers, not decimal numbers. So you need to either just handle the version as a string:
if (strVersion == "4.6.0")
or you have to split it into three parts and get those integer values separately. See Splitting a C++ std::string using tokens for how to do that.
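As a rough sketch of the second option (assuming the version string has already been truncated at the first space, as in the question), one way to extract the three integers with a std::istringstream:
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string strVersion = "4.6.0"; // as returned by glGetString(GL_VERSION), truncated at the space

    int versionMajor = 0, versionMinor = 0, versionPatch = 0;
    char dot = 0; // consumes the '.' separators
    std::istringstream stream(strVersion);
    stream >> versionMajor >> dot >> versionMinor >> dot >> versionPatch;

    std::cout << "major " << versionMajor << ", minor " << versionMinor
              << ", patch " << versionPatch << "\n";
}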
I have problems viewing strings in my VS2012 debug project. For some reason, either the string is not readable at all - VS reports '<error string not readable>' - or the first four characters of the string are always cut off. This happens even with constant strings. What could cause this?
Recently, sometimes (rarely) when we export data from our application, the export log contains float values that look like "-1.#J". I haven't been able to reproduce it, so I don't know what the float looks like in binary or how Visual Studio displays it.
I tried looking at the source code for printf but didn't find anything (not 100% sure I looked at the right version, though...).
I've tried googling, but Google seems to throw away any #, and I can't find any list of float error values.
It can be either negative infinity or NaN (not a number). Due to the precision formatting on the field, printf's output does not differentiate between them.
I tried the following code in Visual Studio 2008:
double a = 0.0;
printf("%.3g\n", 1.0 / a); // +inf
printf("%.3g\n", -1.0 / a); // -inf
printf("%.3g\n", a / a); // NaN
which results in the following output:
1.#J
-1.#J
-1.#J
Removing the .3 precision specifier gives:
1.#INF
-1.#INF
-1.#IND
So it's clear that 0/0 gives NaN and -1/0 gives negative infinity (NaN, -inf and +inf are the only "erroneous" floating point values, if I recall correctly).
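If the goal is to tell these cases apart in the export itself rather than by reading printf's output, a minimal sketch (C++11) using std::isnan and std::isinf from <cmath>:
#include <cmath>
#include <cstdio>

int main()
{
    double a = 0.0;
    double values[] = { 1.0 / a, -1.0 / a, a / a }; // +inf, -inf, NaN

    for (double v : values)
    {
        if (std::isnan(v))
            std::printf("NaN\n");
        else if (std::isinf(v))
            std::printf(v > 0 ? "+inf\n" : "-inf\n");
        else
            std::printf("%.3g\n", v);
    }
}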