Distinguish between Integer and Double in V8 - c++

In my implementation I provide a function to JavaScript that accepts a parameter.
v8::Handle<v8::Value> TableGetValueIdForValue(const v8::Arguments& args) {
    v8::Isolate* isolate = v8::Isolate::GetCurrent();
    v8::HandleScope handle_scope(isolate);
    auto val = args[1];
    if (val->IsNumber()) {
        auto num = val->ToNumber();
        // How to check if Int or Double?
    } else {
        // val == string
    }
}
Now this parameter can have basically any type. As I support Int, Float and String, I want to check for these types efficiently. Using IsNumber() and IsStringObject() I can make sure that the object is numeric or a string.
But now I need to differentiate between an integer value and a float. What is the best way to perform this test? Is there a way to call / use the typeof operator exposed to JS?

Quick Answer
v8::Value::NumberValue() will return the value of the JavaScript Number without loss of precision.
Explanation
It is true that the sets of numbers representable by int64_t and double are different. So it is natural to be concerned about what happens if the value is actually an int64_t, because v8::Value defines both
V8EXPORT int64_t v8::Value::IntegerValue() const;
V8EXPORT double v8::Value::NumberValue() const;
What is a v8::Number?
Consider the v8::Number documentation:
Detailed Description
A JavaScript number value (ECMA-262, 4.3.20)
IntegerValue does return an int64_t, but there will be no more precision available, because the value is stored internally as a double-precision 64-bit binary format IEEE 754 value.
Testing in a browser
Checking whether JavaScript can represent a value that a double can't but an int64_t can:
2^63 - 1 is equal to 9223372036854775807
Try typing the following in a JavaScript console; this value is parsed, but the extra precision is thrown away because a double can't represent it.
> 9223372036854775807
The result:
9223372036854776000
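The same rounding can be reproduced in C++ (a minimal standalone sketch; unlike the JavaScript console, printf shows the rounded double exactly):

#include <cstdio>
#include <cstdint>

int main() {
    int64_t big = INT64_MAX;                 // 2^63 - 1
    std::printf("%lld\n", (long long)big);   // 9223372036854775807
    // Converting to double rounds to the nearest representable value,
    // which is exactly 2^63:
    std::printf("%.0f\n", (double)big);      // 9223372036854775808
}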

Try IsInt32() or IsUint32() to check whether the number is an integer or not.
https://github.com/v8/v8/blob/master/include/v8.h#L1313
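For example (a sketch against the same, older V8 API the question uses; untested):

if (val->IsInt32() || val->IsUint32()) {
    // integral value that fits in 32 bits
    int64_t i = val->IntegerValue();
} else if (val->IsNumber()) {
    // fractional, or integral but outside the 32-bit range
    double d = val->NumberValue();
}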

Try using this line (std::fmod from <cmath>, since the built-in % operator doesn't accept doubles in C++):
bool isInt = std::fmod(num->NumberValue(), 1.0) == 0.0;
NumberValue() returns the number's value as a double, and the std::fmod(..., 1.0) == 0.0 test is true if that value is evenly divisible by 1.
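To see that test in isolation (a minimal standalone sketch using plain doubles):

#include <cmath>
#include <cstdio>

int main() {
    double values[] = { 3.0, 3.5, -2.0, 16.00001 };
    for (double v : values)
        std::printf("%g -> %s\n", v,
                    std::fmod(v, 1.0) == 0.0 ? "integer" : "fractional");
}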

Related

float number to string conversion implementation in std

I've run into a curious issue. Look at this simple code:
#include <iostream>

int main(int argc, char **argv) {
    char buf[1000];
    snprintf_l(buf, sizeof(buf), _LIBCPP_GET_C_LOCALE, "%.17f", 0.123e30f);
    std::cout << "WTF?: " << buf << std::endl;
}
The output looks quite weird:
123000004117574256822262431744.00000000000000000
My question is: how is this implemented? Can someone show me the original code? I did not find it. Or maybe it's too complicated for me.
I've tried to reimplement the same double-to-string transformation in Java but failed. Even when I tried to get the exponent and fraction parts separately and sum the fractions in a loop, I always got zeros instead of the digits "...822262431744". When I tried to continue summing fractions past the 23 bits (of a float) I ran into another issue: how many fraction bits do I need to collect? Why does the original code stop at the left part and not continue until the end of the scale?
So, I really do not understand the basic logic of how it is implemented. I've tried defining really big numbers (e.g. 0.123e127f), and it generates a huge number in decimal format. The number has much higher precision than a float can hold. That looks like an issue, because the string representation contains something which a float cannot contain.
Please read the documentation:
printf, fprintf, sprintf, snprintf, printf_s, fprintf_s, sprintf_s, snprintf_s - cppreference.com
The format string consists of ordinary multibyte characters (except %), which are copied unchanged into the output stream, and conversion specifications. Each conversion specification has the following format:
introductory % character
...
(optional) . followed by integer number or *, or neither that specifies precision of the conversion. In the case when * is used, the precision is specified by an additional argument of type int, which appears before the argument to be converted, but after the argument supplying minimum field width if one is supplied. If the value of this argument is negative, it is ignored. If neither a number nor * is used, the precision is taken as zero. See the table below for exact effects of precision.
....
Conversion specifier: f, F
Explanation: converts floating-point number to the decimal notation in the style [-]ddd.ddd. Precision specifies the exact number of digits to appear after the decimal point character. The default precision is 6. In the alternative implementation the decimal point character is written even if no digits follow it. For infinity and not-a-number conversion style see notes.
Expected argument type: double
So with f you forced the form ddd.ddd (no exponent), and with .17 you forced it to show 17 digits after the decimal separator. With such a big value, the printed outcome looks that odd.
Finally I've found out what the difference is between the Java float -> decimal -> string conversion and the C++ float -> string (decimal) conversion. I did not find the original source code, but I replicated the same logic in Java to make it clear. I think the code explains everything:
static String floatToDecimalString(float value) {
    // the context size might be calculated properly by getting the maximum
    // float number (including the exponent) - it's 40 + scale (17 for me)
    MathContext context = new MathContext(57, RoundingMode.HALF_UP);
    BigDecimal divisor = BigDecimal.valueOf(2);
    int tmp = Float.floatToRawIntBits(value);
    boolean sign = tmp < 0;
    tmp <<= 1;
    // there might be a NaN value; this code does not support it
    int exponent = (tmp >>> 24) - 127;
    tmp <<= 8;
    int mask = 1 << 23;
    int fraction = mask | (tmp >>> 9);
    // at this point we have all parts of the float: sign, exponent and fraction.
    // Let's build the mantissa
    BigDecimal mantissa = BigDecimal.ZERO;
    for (int i = 0; i < 24; i++) {
        if ((fraction & mask) == mask) {
            // not sure about speed; division at each iteration might be faster than pow
            mantissa = mantissa.add(divisor.pow(-i, context));
        }
        mask >>>= 1;
    }
    // this was the core line where I was losing accuracy, because of the context
    BigDecimal decimal = mantissa.multiply(divisor.pow(exponent, context), context);
    String str = decimal.setScale(17, RoundingMode.HALF_UP).toPlainString();
    // add the minus sign manually, because Java drops it if the value becomes 0
    // after setScale; the C++ version of the code doesn't
    if (sign) {
        str = "-" + str;
    }
    return str;
}
Maybe this topic is useless. Who really needs to have the same implementation C++ has? But at least this code keeps all the precision of the float, compared to the most popular way of converting a float to a decimal string:
return BigDecimal.valueOf(1.23e30f).setScale(17, RoundingMode.HALF_UP).toPlainString();
The C++ implementation you are using uses the IEEE-754 binary32 format for float. In this format, the closest representable value to 0.123•10^30 is 123,000,004,117,574,256,822,262,431,744, which is represented in binary32 as +13,023,132•2^73. So 0.123e30f in the source code yields the number 123,000,004,117,574,256,822,262,431,744. (Because the number is represented as +13,023,132•2^73, we know its value is exactly that, which is 123,000,004,117,574,256,822,262,431,744, even though the digits "123000004117574256822262431744" are not stored directly.)
Then, when you format it with %.17f, your C++ implementation prints the exact value faithfully, yielding “123000004117574256822262431744.00000000000000000”. This accuracy is not required by the C++ standard, and some C++ implementations will not do the conversion exactly.
The Java specification also does not require formatting of floating-point values to be exact, at least in some formatting operations. (I am going from memory and some supposition here; I do not have a citation at hand.) It allows, perhaps even requires, that only a certain number of correct digits be produced, after which zeros are used if needed for positioning relative to the decimal point or for the requested format.
The number has much higher precision than a float can hold.
For any value represented in the float format, that value has infinite precision. The number +13,023,132•2^73 is exactly +13,023,132•2^73, which is exactly 123,000,004,117,574,256,822,262,431,744, to infinite precision. The precision the format has for representing numbers affects only which numbers it can represent, not how precisely it represents the numbers that it does represent.
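A minimal sketch reproducing the behavior discussed (how many of the printed digits are exact depends on your C library's conversion accuracy, as noted above):

#include <cstdio>

int main() {
    float f = 0.123e30f;  // rounds to +13,023,132 * 2^73 in binary32
    // With an exact conversion this prints
    // 123000004117574256822262431744.00000000000000000
    std::printf("%.17f\n", f);
}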

machine epsilon - long double in c++

I wanted to calculate the machine epsilon, the smallest possible number e that gives 1 + e > 1, using different C++ data types: float, double and long double.
Here's my code:
#include <cstdio>

template<typename T>
T machineeps() {
    T epsilon = 1;
    T expression;
    do {
        epsilon = epsilon / 2;
        expression = 1 + epsilon;
    } while (expression > 1);
    return epsilon;
}

int main() {
    auto epsf = machineeps<float>();
    auto epsd = machineeps<double>();
    auto epsld = machineeps<long double>();
    std::printf("epsilon float: %22.17e\nepsilon double: %22.17e\nepsilon long double: %Le\n",
                epsf, epsd, epsld);
    return 0;
}
But I get this strange output:
epsilon float: 5.96046447753906250e-008
epsilon double: 1.11022302462515650e-016
epsilon long double: -0.000000e+000
The values for float and double are what I was expecting, but, I cannot explain the long double behavior.
Can somebody tell me what went wrong?
I cannot reproduce your results. I get:
epsilon long double: 5.421011e-20
Anyway, logically, the code should be something like:
template<typename T>
T machineeps() {
    T epsilon = 1, prev;
    T expression;
    do {
        prev = epsilon;
        epsilon = epsilon / 2;
        expression = 1 + epsilon;
    } while (expression > 1);
    return prev; // <-- `1 + prev` yields a result different from one
}
On my platform it produces values similar to std::numeric_limits<T>::epsilon():
epsilon float: 1.19209289550781250e-07
epsilon double: 2.22044604925031308e-16
epsilon long double: 1.084202e-19
(note the different order of magnitude)
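For reference, the exact values can be queried directly instead of computed (a minimal sketch):

#include <cstdio>
#include <limits>

int main() {
    std::printf("epsilon float:       %.17e\n", (double)std::numeric_limits<float>::epsilon());
    std::printf("epsilon double:      %.17e\n", std::numeric_limits<double>::epsilon());
    std::printf("epsilon long double: %Le\n",   std::numeric_limits<long double>::epsilon());
}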
There are several things going on here.
First, floating-point math is often done at the maximum available precision, regardless of the declared type of the floating-point variable. For example, arithmetic on floats is usually done with 80 bits of precision on Intel hardware. (Java originally banned this, requiring all floating-point math to be done at the exact precision of the type; this killed floating-point performance, and the rule was quickly abandoned.) Storing the result of a floating-point calculation is supposed to truncate the value to the appropriate type, but by default most compilers ignore this. You can tell your compiler not to allow that; the switch depends on the compiler. As is, you can't rely on the result being calculated here.
Second, the loop in the code terminates when the value of 1 + epsilon is not greater than 1, so the returned value will be less than the true value of epsilon.
Third, coupled with the second point, some floating-point implementations don't do subnormal values: once the exponent becomes smaller than the smallest that can be represented, the value is 0. That may be what you're seeing here with the long double value. IEEE floating-point handles underflow less abruptly: once you hit that minimum exponent, smaller values gradually lose precision. There are quite a few values between the smallest normalized value and 0.
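The gap between the smallest normalized value and 0 can be inspected directly (a sketch; denorm_min() returns the smallest positive subnormal on implementations that support subnormals):

#include <cstdio>
#include <limits>

int main() {
    std::printf("smallest normalized long double: %Le\n",
                std::numeric_limits<long double>::min());
    std::printf("smallest subnormal long double:  %Le\n",
                std::numeric_limits<long double>::denorm_min());
}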

Check if a passed double argument is "close enough" to be considered integral

I am working on code where I need to check whether a certain variable that can take a double value has actually taken on an integer value. I consider a double variable to have taken on an integer value if it is within a tolerance of an integer; this tolerance is 1e-5.
The following is my code:
#define SMALL 1e-5
// A double that attains this is considered non-zero; strictly less than this is 0.

int check_if_integer(double arg) {
    // returns 1 if arg is close enough to an integer
    // returns 0 otherwise
    if (arg - (int)arg >= SMALL) {
        if (arg + SMALL > (int)(arg + 1.0)) {
            return 1;
            // Code should have reached this point, since
            // arg + SMALL is 16.00001
            // while (int)(arg + 1.0) should be 16.
            // But the code seems to evaluate (int)(arg + 1.0) to 17.
        }
    } else {
        return 1;
    }
    return 0;
}

int main(void) {
    int a = check_if_integer(15.999999999999998);
}
Unfortunately, on passing the argument 15.999999999999998, the function returns 0. That is, it deems the argument to be fractional, while it should have returned 1, indicating that the argument is "close enough" to 16.
I am using VS2010 professional.
Any pointers will be greatly appreciated!
Further to hvd's answer regarding types, it is also inadvisable to add/subtract small doubles to/from large doubles, due to the way they are represented internally.
A simple workaround which avoids both issues would be (std::fabs and std::round from <cmath>; plain abs could silently pick an integer overload):
if (std::fabs(arg - std::round(arg)) <= SMALL) {
    return 1;
} else {
    return 0;
}
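Put together as a complete version of the original function (a sketch of this workaround):

#include <cmath>
#define SMALL 1e-5

int check_if_integer(double arg) {
    // distance to the nearest integer, not the truncated one
    return std::fabs(arg - std::round(arg)) <= SMALL ? 1 : 0;
}

int main(void) {
    int a = check_if_integer(15.999999999999998);  // a == 1
}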
Yes, floating point is hard. Just because 15.999999999999998 < 16.0, that doesn't mean 15.999999999999998 + 1.0 < 17.0. Suppose you have a decimal floating-point type with three digits of precision. What result do you get for 9.99 + 1.0 in that type's precision? The mathematical result would be 10.99, and rounded to that type's precision gives 11.0. Binary floating-point has the same issue.
You can, in this particular case, change (int)(arg+1.0) to (int)arg+1. (int)arg is accurate, and so is integer addition.
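To see the rounding in action (a minimal sketch):

#include <cstdio>

int main() {
    double arg = 15.999999999999998;
    // arg + 1.0 rounds to exactly 17.0 in double precision,
    // so truncating it yields 17 rather than the expected 16.
    std::printf("%d\n", (int)(arg + 1.0));  // 17
    std::printf("%d\n", (int)arg + 1);      // 16
}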

splitting a 64-bit value to fit in the argument type of double

I have a function whose signature I cannot change; say it is some library function that I am calling:
void schedule(double _val);

void caller() {
    uint64_t value = 0xFFFFFFFFFFFFFFF;
    schedule(value);
}
As the function schedule accepts double as the argument type, when the value of the argument is wider than 52 bits (a double stores the mantissa as a 52-bit value) I lose precision.
What I intend to do is: if the value is greater than the max value a double can hold exactly, loop over the remaining value, so that in the end the calls sum up to the correct value.
void caller() {
    uint64_t value = 0xFFFFFFFFFFFFFFF;
    for (count = 0; count < X; count++) {
        schedule(Y);
    }
}
I need to extract X and Y from the variable value.
How can this be achieved?
My objective is not to lose precision because of the type conversion.
If your problem is only losing precision in caller and not in schedule, then no loop is needed:
void caller() {
    uint64_t value = 0xFFFFFFFFFFFFFFF;
    uint64_t modulus = (uint64_t) 1 << 53;
    schedule(value - value % modulus);
    schedule(value % modulus);
}
In value - value % modulus, only the high 11 bits are significant, because the low 53 have been cleared. So, when it is converted to double, there is no error, and the exact value is passed to schedule. Similarly, value % modulus has only 53 bits and is converted to double exactly.
(The encoding of the significand of an IEEE-754 64-bit binary floating-point object has 52 bits, but the actual significand has 53 bits, due to the implicit leading bit.)
Note: The above may result in schedule being called with an argument of zero, which we have not established is permitted. If that is a problem, such a call should be skipped.
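A quick sanity check of the exactness claim (a sketch):

#include <cstdint>
#include <cassert>

int main() {
    uint64_t value = 0xFFFFFFFFFFFFFFF;
    uint64_t modulus = (uint64_t) 1 << 53;
    uint64_t hi = value - value % modulus;
    uint64_t lo = value % modulus;
    // each half survives the round trip through double unchanged
    assert((uint64_t)(double)hi == hi);
    assert((uint64_t)(double)lo == lo);
    assert(hi + lo == value);
}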
If N is the max integral value your double can represent precisely, then obviously you can use
Y = N
and
X = amount / Y
(assuming integral division). Once you finish iterating over X you still have to schedule the remainder
R = amount % Y
Just keep in mind that all integral calculations have to be performed within the domain of uint64_t type, i.e. you have to add proper suffix to the constants (UL or ULL), or use type casts to uint64_t or use intermediate variables of type uint64_t.
Of course, if your program doesn't really care how many times schedule is called as long as the total is correct, then you can use virtually any value for N, as long as it can be represented precisely. For example, you can simply set N = 10000.
On the other hand, if you want to minimize the number of schedule calls, then it is worth noting that, due to the "implicit 1" rule, the max integer that can be represented precisely by the 52-bit mantissa is (1ULL << 53) - 1 (note the ULL suffix; shifting a plain int by 53 is undefined).
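Putting the pieces together (a sketch; X and R follow the naming in this answer, and the zero-remainder call is skipped as suggested earlier):

#include <cstdint>

void schedule(double _val);  // the library function, declared as in the question

void caller() {
    uint64_t value = 0xFFFFFFFFFFFFFFF;
    const uint64_t N = (1ULL << 53) - 1;  // max integer a double holds exactly
    uint64_t X = value / N;               // number of full chunks
    uint64_t R = value % N;               // remainder
    for (uint64_t count = 0; count < X; count++) {
        schedule((double)N);
    }
    if (R != 0) {
        schedule((double)R);
    }
}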

Heuristic to identify whether a series of 4-byte chunks of data are integers or floats

What's the best heuristic I can use to identify whether a chunk of X 4-byte values holds integers or floats? A human can do this easily, but I wanted to do it programmatically.
I realize that since every combination of bits will result in a valid integer and (almost?) all of them will also result in a valid float, there is no way to know for sure. But I still would like to identify the most likely candidate (which will virtually always be correct; or at least, a human can do it).
For example, let's take a series of 4-bytes raw data and print them as integers first and then as floats:
1 1.4013e-45
10 1.4013e-44
44 6.16571e-44
5000 7.00649e-42
1024 1.43493e-42
0 0
0 0
-5 -nan
11 1.54143e-44
Obviously they will be integers.
Now, another example:
1065353216 1
1084227584 5
1085276160 5.5
1068149391 1.33333
1083179008 4.5
1120403456 100
0 0
-1110651699 -0.1
1195593728 50000
These will obviously be floats.
PS: I'm using C++ but you can answer in any language, pseudo code or just in english.
The "common sense" heuristic from your example seems to basically amount to a range check. If one interpretation is very large (or a tiny fraction, close to zero), that is probably wrong. Check the exponent of the float interpretation and compare it to the exponent that results from a proper static cast of the integer interpretation to a float.
Looks like a Kolmogorov complexity issue. Basically, from what you show as examples, the shorter number (when printed as a string to be read by a human), be it integer or float, is the right answer for your heuristic.
Also, obviously, if the value is an invalid float, it is an integer :-)
Seems direct enough to implement.
You can probably "detect" it by looking at the high bits: with floats they'd generally be non-zero; with integers they'd be zero, unless you're dealing with a very large number. So you could test whether number & (1 << 30) is 0 or not.
If both numbers are positive, your floats are reasonably large (greater than 10^-42), and your ints are reasonably small (less than 8*10^6), then the check is pretty simple. Treat the data as a float and compare to the least normalized float.
#include <cstdint>
#include <limits>

union float_or_int {
    float f;
    int32_t i;
};

bool is_positive_normalized_float(float_or_int &u) {
    return u.f >= std::numeric_limits<float>::min();
}

This assumes IEEE float and the same endianness between the CPU and the FPU.
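For illustration, a minimal usage sketch (type punning through a union is technically undefined behavior in C++, but this answer already assumes it works, as it commonly does):

#include <cstdio>

int main() {
    float_or_int u;
    u.i = 1065353216;  // the bit pattern of 1.0f
    std::printf("%d\n", is_positive_normalized_float(u));  // 1
    u.i = 44;          // a tiny subnormal when viewed as a float
    std::printf("%d\n", is_positive_normalized_float(u));  // 0
}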
A human can do this easily
A human can't do it at all. Ergo neither can a computer. There are 2^32 valid int values. A large number of them are also valid float values. There is no way of distinguishing the intent of the data other than by tagging it or by not getting into such a mess in the first place.
Don't attempt this.
You are going to be looking at the upper 8 or 9 bits. That's where the sign and exponent of a floating-point value are. Values of 0x00, 0x80 and 0xFF here are pretty uncommon for valid float data.
In particular, if the upper 9 bits are all 0, then this is likely to be a valid floating-point value only if all 32 bits are 0. Another way to say this is that if the exponent is 0, the mantissa should also be zero. If the upper bit is 1 and the next 8 bits are 0, this is legal, but also not likely to be valid: it represents -0.0, which is a legal floating-point value, but a meaningless one.
To put this into numerical terms: if the upper byte is 0x00 (or 0x80), then the value has a magnitude of at most 2.35e-38. Planck's constant is 6.62e-34 m^2*kg/s; that's 4 orders of magnitude larger. The estimated diameter of a proton is much, much larger than that (about 1.6e-15 meters). The smallest non-zero value for audio data is about 2.3e-10. You aren't likely to see floating-point values that are legitimate measurements of anything real that are smaller than 2.35e-38 but not zero.
Going the other direction, if the upper byte is 0xFF, then this value is either infinite, a NaN, or larger in magnitude than 3.4e+38. The age of the universe is estimated to be 1.3e+10 years (about 4e+32 femtoseconds). The observable universe has roughly 1e+23 stars; Avogadro's number is 6.02e+23. Once again, float values larger than 1e+38 rarely show up in legitimate measurements.
This is not to say that the FPU can't load or produce such values, and you will certainly see them in intermediate values of calculations if you are working with modern FPUs. A modern FPU will load a floating-point value that has an exponent of 0 even when the other bits are not 0. These are called denormalized values. This is why you are seeing small positive integers show up as float values in the range of 1e-42, even though the normal range of a float only goes down to about 1e-38.
An exponent of all 1s with a zero mantissa represents Infinity. You probably won't find infinities in your data, but you would know better than I. -Infinity is 0xFF800000, +Infinity is 0x7F800000; any value other than 0 in the mantissa field of an infinity is malformed, and such malformed infinities are used as NaNs.
Loading a NaN into a float register can cause it to throw an exception, so you want to use integer math to do your guessing about whether your data is float or int until you are fairly certain it is int.
If you know that your floats are all going to be actual values (no NaNs, INFs, denormals or other aberrant values) then you can use this a criterion. In general an array of ints will have a high probability of containing "bad" float values.
I assume the following:
that you mean IEEE 754 single precision floating point numbers.
that the sign bit of the float is saved in the MSB of an int.
So here we go:
#include <cstdint>
#include <cassert>

static bool probablyFloat(uint32_t bits) {
    bool sign = (bits & 0x80000000U) != 0;
    int exp = ((bits & 0x7f800000U) >> 23) - 127;
    uint32_t mant = bits & 0x007fffff;
    (void)sign;  // the sign does not affect this heuristic

    // +-0.0
    if (exp == -127 && mant == 0)
        return true;

    // +- 1 billionth to 1 billion
    if (-30 <= exp && exp <= 30)
        return true;

    // some value with only a few binary digits
    if ((mant & 0x0000ffff) == 0)
        return true;

    return false;
}

int main() {
    assert(probablyFloat(1065353216));
    assert(probablyFloat(1084227584));
    assert(probablyFloat(1085276160));
    assert(probablyFloat(1068149391));
    assert(probablyFloat(1083179008));
    assert(probablyFloat(1120403456));
    assert(probablyFloat(0));
    assert(probablyFloat(-1110651699));
    assert(probablyFloat(1195593728));
    return 0;
}
Simplifying what Alan said, I'd ONLY look at the integer form, and say: if the number is bigger than 99999999, then it's almost definitely a float.
This has the advantage that it's fast, easy, and avoids NaN issues.
It has the disadvantage that it's pretty much full of crap... I didn't actually look at what floats these would represent or anything, but it looks reasonable from your examples.
In any case, this is a heuristic, so it's gonna be full of crap and not always work anyway...
Measure with a micrometer, mark with chalk, cut with an axe.
Here is a heuristic I came up with, based on #kriss' idea. After a brief look at some of my data, it seems to work fairly well.
I am using it in a disassembler to detect if a 32-bit value was likely originally an integer or float literal.
import java.text.DecimalFormat;

public class FloatUtil {
    private static final int canonicalFloatNaN = Float.floatToRawIntBits(Float.NaN);
    private static final int maxFloat = Float.floatToRawIntBits(Float.MAX_VALUE);
    private static final int piFloat = Float.floatToRawIntBits((float)Math.PI);
    private static final int eFloat = Float.floatToRawIntBits((float)Math.E);
    private static final DecimalFormat format = new DecimalFormat("0.####################E0");

    public static boolean isLikelyFloat(int value) {
        // Check for some common named float values
        if (value == canonicalFloatNaN ||
            value == maxFloat ||
            value == piFloat ||
            value == eFloat) {
            return true;
        }

        // Check for some named integer values
        if (value == Integer.MAX_VALUE || value == Integer.MIN_VALUE) {
            return false;
        }

        // a non-canonical NaN is more likely to be an integer
        float floatValue = Float.intBitsToFloat(value);
        if (Float.isNaN(floatValue)) {
            return false;
        }

        // Otherwise, whichever has the shorter scientific notation
        // representation is more likely. Integer wins the tie.
        String asInt = format.format(value);
        String asFloat = format.format(floatValue);

        // try to strip off any small imprecision near the end of the mantissa
        int decimalPoint = asFloat.indexOf('.');
        int exponent = asFloat.indexOf("E");
        int zeros = asFloat.indexOf("000");
        if (zeros > decimalPoint && zeros < exponent) {
            asFloat = asFloat.substring(0, zeros) + asFloat.substring(exponent);
        } else {
            int nines = asFloat.indexOf("999");
            if (nines > decimalPoint && nines < exponent) {
                asFloat = asFloat.substring(0, nines) + asFloat.substring(exponent);
            }
        }

        return asFloat.length() < asInt.length();
    }
}
And here are some of the values it works for (and a couple it doesn't):
@Test
public void isLikelyFloatTest() {
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1.23f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1.0f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(Float.NaN)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(Float.NEGATIVE_INFINITY)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(Float.POSITIVE_INFINITY)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1e-30f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1000f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(-1f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(-5f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1.3333f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(4.5f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(.1f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(50000f)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(Float.MAX_VALUE)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits((float)Math.PI)));
    Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits((float)Math.E)));

    // Float.MIN_VALUE is equivalent to integer value 1; this should be detected as an integer
    // Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(Float.MIN_VALUE)));

    // This one doesn't quite work. It has a series of 2 0's, but we only strip 3 0's or more
    // Assert.assertTrue(FloatUtil.isLikelyFloat(Float.floatToRawIntBits(1.33333f)));

    Assert.assertFalse(FloatUtil.isLikelyFloat(0));
    Assert.assertFalse(FloatUtil.isLikelyFloat(1));
    Assert.assertFalse(FloatUtil.isLikelyFloat(10));
    Assert.assertFalse(FloatUtil.isLikelyFloat(100));
    Assert.assertFalse(FloatUtil.isLikelyFloat(1000));
    Assert.assertFalse(FloatUtil.isLikelyFloat(1024));
    Assert.assertFalse(FloatUtil.isLikelyFloat(1234));
    Assert.assertFalse(FloatUtil.isLikelyFloat(-5));
    Assert.assertFalse(FloatUtil.isLikelyFloat(-13));
    Assert.assertFalse(FloatUtil.isLikelyFloat(-123));
    Assert.assertFalse(FloatUtil.isLikelyFloat(20000000));
    Assert.assertFalse(FloatUtil.isLikelyFloat(2000000000));
    Assert.assertFalse(FloatUtil.isLikelyFloat(-2000000000));
    Assert.assertFalse(FloatUtil.isLikelyFloat(Integer.MAX_VALUE));
    Assert.assertFalse(FloatUtil.isLikelyFloat(Integer.MIN_VALUE));
    Assert.assertFalse(FloatUtil.isLikelyFloat(Short.MIN_VALUE));
    Assert.assertFalse(FloatUtil.isLikelyFloat(Short.MAX_VALUE));
}