DB2 casting to DECIMAL

I'm trying to cast a number to DECIMAL(10,2).
If I understand correctly, a number like "1234567890" should be a valid
DECIMAL(10,2).
But when I try to cast it, it returns a -413 error:
"OVERFLOW OR UNDERFLOW OCCURRED DURING NUMERIC DATA TYPE CONVERSION"
This is the SQL:
select CAST( 1234567890 AS DECIMAL(10,2)) from SYSIBM.SYSDUMMY1;
As I see it, the problem is that it is adding two zero decimal positions,
so the total length is 12, which is not a valid length for a (10,2).
In fact, if I try to CAST it to (12,2) it works (showing "1234567890.00" ).
I wonder if this is an error or if I'm doing something wrong.
Casting to (12,2) does not seem to be a valid option, because it will
accept numbers like "1234567890.12" that should not be valid for a (10,2).
Thanks in advance.

The DECIMAL data type is specified by providing two numbers:
The first integer is the precision of the number; that is, the total number of digits; it may range from 1 to 31.
The second integer is the scale of the number; that is, the number of digits to the right of the decimal point; it may range from 0 to the precision of the number.
In your case, DECIMAL(10,2) means a total of 10 digits, 2 of which are to the right of the decimal point, leaving only 8 for the integer part. The 10-digit number 1234567890 therefore does not fit; 12345678.90 would.
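As a quick illustration of the rule (a minimal C++ sketch, not DB2 code; fits_decimal is a hypothetical helper): a value fits DECIMAL(p,s) only if its integer part needs at most p - s digits.

#include <cmath>
#include <cstdio>

// Sketch: does `value` fit in DECIMAL(p, s)?
// DECIMAL(p, s) leaves only p - s digits for the integer part.
bool fits_decimal(double value, int p, int s) {
    double limit = std::pow(10.0, p - s);   // e.g. 10^8 for DECIMAL(10,2)
    return std::fabs(value) < limit;
}

int main() {
    printf("%d\n", fits_decimal(1234567890.0, 10, 2));  // 0: ten integer digits don't fit
    printf("%d\n", fits_decimal(12345678.90, 10, 2));   // 1: eight integer digits fit
    printf("%d\n", fits_decimal(1234567890.0, 12, 2));  // 1: which is why (12,2) works
}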

Related

C++ set precision of a double (not for output)

Alright, so I am trying to truncate actual values from a double with a given number of digits of precision (total digits before and after the decimal, or without one), not just output them that way, and not just round them. The only built-in functions I found either truncate all decimals or round to a given decimal precision.
Other solutions I have found online only work when you know the number of digits before the decimal, or the entire number.
This solution should be dynamic enough to handle any number. I whipped up some code that does the trick below; however, I can't shake the feeling there is a better way to do it. Does anyone know of something more elegant? Maybe a built-in function that I don't know about?
I should mention the reason for this. There are 3 different sources of observed values. All 3 of these sources agree to some level of precision; in the example below, they all agree to 10 digits.
4659.96751751236
4659.96751721355
4659.96751764253
However I need to only pull from 1 of the sources. So the best approach is to only use up to the precision all 3 sources agree on. It's not like I am manipulating numbers and then need to truncate precision; they are observed values. The desired result is
4659.967517
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <sstream>
#include <string>

double truncate(double num, int digits)
{
    // check valid digits
    if (digits < 0)
        return num;
    // create string stream for full precision (string conversion rounds at 10)
    std::ostringstream numO;
    // read the number into the stream; at 17+ digits of precision things get wonky
    numO << std::setprecision(16) << num;
    // convert to string, for character manipulation
    std::string numS = numO.str();
    // check if we have a decimal point
    std::string::size_type decimalIndex = numS.find('.');
    // if we have a decimal point, erase it for now, remembering its position
    if (decimalIndex != std::string::npos)
        numS.erase(decimalIndex, 1);
    // make sure our target precision is not higher than the current precision
    digits = std::min((int)numS.size(), digits);
    // replace the unwanted precision with zeroes
    numS.replace(digits, numS.size() - digits, numS.size() - digits, '0');
    // if we had a decimal point, add it back
    if (decimalIndex != std::string::npos)
        numS.insert(numS.begin() + decimalIndex, '.');
    return std::atof(numS.c_str());
}
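For example, applied to the observed values above (a usage sketch, assuming the truncate function above is in scope):

#include <cstdio>

int main() {
    // keep only the 10 digits all three sources agree on
    printf("%.6f\n", truncate(4659.96751751236, 10));   // 4659.967517
    printf("%.6f\n", truncate(4659.96751721355, 10));   // 4659.967517
    printf("%.6f\n", truncate(4659.96751764253, 10));   // 4659.967517
}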
This will never work since a double is not a decimal type. Truncating what you think are a certain number of decimal digits will merely introduce a new set of joke digits at the end. It could even be pernicious: e.g. 0.125 is an exact double, but neither 0.12 nor 0.13 are.
If you want to work in decimals, then use a decimal type, or a large integral type with a convention that part of it holds a decimal portion.
I disagree with "So the best approach is to only use up to the precision all 3 sources agree on."
If these are different measurements of a physical quantity, or represent rounding error due to different ways of calculating from measurements, you will get a better estimate of the true value by taking their mean than by forcing the digits they disagree about to any arbitrary value, including zero.
The ultimate justification for taking the mean is the Central Limit Theorem, which suggests treating your measurements as a sample from a normal distribution. If so, the sample mean is the best available estimate of the population mean. Your truncation process will tend to underestimate the actual value.
It is generally better to keep every scrap of information you have through the calculations, and then remember you have limited precision when outputting results.
As well as giving a better estimate, taking the mean of three numbers is an extremely simple calculation.
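A minimal sketch of that suggestion (the variable names are mine):

#include <cstdio>

int main() {
    // three observations of the same quantity
    double obs[] = { 4659.96751751236, 4659.96751721355, 4659.96751764253 };
    // keep full precision internally; limit digits only when printing
    double mean = (obs[0] + obs[1] + obs[2]) / 3.0;
    printf("%.11f\n", mean);
}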

C++ Type of variables

I am a beginner, but I think there are some important things I should learn as soon as possible.
So I have a code:
float fl=8.28888888888888888888883E-5;
cout<<"The value = "<<fl<<endl;
But when I run the .exe it shows:
8.2888887845911086e-005
I expected the number to be cut off at the limit of the type, with the rest being zeros, but instead I see digits that look random. Maybe it prints digits from the memory after the variable?
Could you explain to me how this works?
I expected the number to be cut off at the limit of the type, with the rest being zeros
Yes, this is exactly what happens, but it happens in binary. This program will show it by using the hexadecimal printing format %a:
#include <stdio.h>

int main(void) {
    float fl = 8.28888888888888888888883E-5;
    printf("%a\n%a\n", 8.28888888888888888888883E-5, fl);
}
It shows:
0x1.5ba94449649e2p-14
0x1.5ba944p-14
In these results, 0x1.5ba94449649e2p-14 is the hexadecimal representation of the double closest to 8.28888888888888888888883 * 10^-5, and 0x1.5ba944p-14 is the representation of the conversion to float of that number. As you can see, the conversion simply truncated the last digits (in this case; the conversion is done according to the rounding mode, and when the rounding goes up instead of down, it changes one or more of the last digits).
When you observe what happens in decimal, the fact that float and double are binary floating-point formats on your computer means that there are extra digits in the representation of the value.
I expected the number to be cut off at the limit of the type, with the rest being zeros
That is what happens internally. Excess bits beyond what the type can store are lost.
But that's in the binary representation. When you convert it to decimal, you can get trailing non-zero digits.
Example:
0b0.00100 is 0.125 in decimal
What you're seeing is a result of the fact that most decimal values cannot be represented exactly in binary floating point. Because of this, a float is stored as the nearest value that can be represented. A float usually has 24 bits for the significand, which translates to about 6 significant decimal digits (this is implementation-defined, though, so you shouldn't rely on it). When you print more than 6 significant digits, you'll see that the value stored in memory is not exactly the value you intended, and the extra digits will look random.
So to recap: the problem you encountered is caused by the fact that most base-10 fractions cannot be represented exactly in binary; the closest representable number is stored instead, and that stored number is what gets printed.
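A small demonstration of this (a C++ sketch; std::numeric_limits<float>::max_digits10 is the standard way to request enough digits to show the stored value exactly):

#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    float fl = 8.28888888888888888888883E-5;
    // max_digits10 (9 for IEEE float) prints enough digits to
    // distinguish the stored value from its neighbours
    std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
              << fl << '\n';   // the nearest representable float, not the literal
}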
Each data type has a limited precision; digits beyond that limit are meaningless, so you have to know these limits and deal with them when you write code.
You can look these limits up with std::numeric_limits or in your compiler's documentation.

VBA debugger precision

I have a Single (I believe the C++ equivalent is float) in VBA in an Excel workbook module. Anyway, the value I originally assigned (876.34497) is rounded off to 876.345 in the Immediate Window, in a Watch, and in the hover tooltip when I set a breakpoint in the VBA. However, if I pass this Single to a C++ DLL, C++ reports it as the original value, 876.34497.
So, is it actually stored in memory as the original value? Is this some limitation of the debugger? I'm unsure what is going on here. It makes it difficult to test whether what I'm passing is what I'm getting on the C++ side.
I tried:
?CStr(test)
876.345
?CDbl(test)
876.344970703125
?CSng(test)
876.345
VBA isn't being very straightforward here; at some level it must be storing more precision than 876.345, otherwise I don't think CDbl could produce the result it does.
VBA variables of type "single" are stored as "32-bit hardware implementation of IEEE 754[-]1985 [sic]." [see: https://msdn.microsoft.com/en-us/library/ee177324.aspx].
What this means in English is that "single" precision numbers are converted to binary and then rounded to fit in a 4-byte (32-bit) sequence. The exact process is very well described in Wikipedia under http://en.wikipedia.org/wiki/Single-precision_floating-point_format . The upshot is that all single precision numbers are expressed as
(1) a 23-bit "fraction" between 0 and 1 (with an implicit leading 1, so the significand is between 1 and 2), *times*
(2) an 8-bit exponent which represents a multiplier between 2^(-126) and 2^127, *times*
(3) one more bit for positive or negative.
The process of converting numbers to binary and back causes two types of rounding errors:
(1) Significant Digits -- as you have noticed, there is a limit on significant digits. The 23-bit fraction can only take 8,388,608 distinct values. Stated another way, no number can be expressed with better than about +/- 0.000012% precision. Reaching back to high school science, you may recall that this is another way of saying you cannot count on more than six significant digits (well, decimal digits, at least; you do have 24 significant binary digits, counting the implicit leading bit). So any representation of a number with more than six significant decimal digits will get rounded off. However, it won't get rounded off to the nearest decimal digit; it will get rounded off to the nearest binary digit. This often causes some unexpected results (like yours).
(2) Binary conversion -- The other type of error is even more pernicious. There are some numbers with significantly fewer than six (decimal) digits that will get rounded off. For example, 1/5 in decimal is 0.2000000. It never gets "rounded off." But the same number in binary is 0.00110011001100110011... repeating forever. (That sequence is equivalent to 1/8 + 1/16 + 1/16*(1/8+1/16) + 1/256*(1/8+1/16) + ...) If you used any finite number of binary digits to represent 0.20 and then converted it back to decimal, you would NEVER get exactly 0.20. For example, if you used eight bits, you would have 0.00110011 in binary, which is:
0.12500000
0.06250000
0.00781250
+ 0.00390625
------------
0.19921875
No matter how many binary digits you use, you will never get exactly 0.20, because 0.20 cannot be expressed as the sum of powers of two.
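A one-line check of that sum (a quick C++ sketch; 0b00110011 is 51, so the eight-bit value is 51/256):

#include <cstdio>

int main() {
    // 0.00110011 in binary is 51/256: close to 0.20, but never equal to it
    printf("%.8f\n", 51.0 / 256.0);   // prints 0.19921875
}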
That in a nutshell explains what's going on. When you assign 876.34497 to "test," it gets converted internally to:
0 10001000 10110110001011000010100
(sign = +, exponent = 136, fraction = 5,969,428)
Which is (+1) * 2^(136-127) * (1 + 5,969,428/2^23) = 876.344970703125
Excel is automatically truncating the display of your single-precision number to show only six significant digits, because it knows that the seventh digit might be wrong. (The exact stored value is the 876.344970703125 that your CDbl test shows.) But you get the point.
When you coerce the value into double precision, it keeps the entire binary string and pads the fraction with zero bits on the end. It now allows you to display twice as many significant figures because it is double precision, but as you can see, the conversion from 8 decimal digits to 23 binary digits, followed by that long string of appended zeros, has introduced some artifacts. Not really errors, if you understand what it's doing; after all, it's doing exactly what you told it to do... you just didn't know what you were telling it to do!
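The same experiment can be reproduced in C++ (a sketch; the float-to-double conversion itself is exact, it just exposes what the float actually stores):

#include <cstdio>

int main() {
    float f = 876.34497f;    // stored as the nearest representable float
    double d = f;            // exact widening: the fraction is padded with zero bits
    printf("%.8g\n", f);     // 876.34497 -- what the C++ DLL reported
    printf("%.15g\n", d);    // 876.344970703125 -- what CDbl shows in VBA
}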

How to determine the last nonzero decimal digit in a float?

I am writing a float printing and formatting library and want to avoid printing trailing zero digits.
For this I would like to accurately determine the last nonzero digit within the first N decimal places after the decimal point. I wonder whether there is a particularly efficient way to do this.
This (non-trivial) problem has been fully solved. The idea is to print exactly enough digits so that if you converted the printed digits back to a floating-point number, you would get exactly the number you started with.
The relevant paper is "Printing Floating-Point Numbers Quickly and Accurately" by Robert G. Burger and R. Kent Dybvig.
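If your standard library supports C++17 std::to_chars for floating point, it already provides this shortest-round-trip guarantee (a sketch):

#include <charconv>
#include <cstdio>

int main() {
    char buf[64];
    // with no precision argument, to_chars produces the shortest string
    // that parses back to exactly the same double
    auto res = std::to_chars(buf, buf + sizeof buf, 0.1);
    *res.ptr = '\0';
    printf("%s\n", buf);   // prints 0.1
}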
You will have to convert the float to a string and then trim the trailing zeros.
I don't think this is very efficient, but I feel there is probably no simpler algorithm.
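A minimal sketch of that approach (format_trimmed is a hypothetical helper name):

#include <cstdio>
#include <string>

// format with n digits after the point, then strip the trailing zeros
std::string format_trimmed(double v, int n) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.*f", n, v);
    std::string s(buf);
    if (s.find('.') != std::string::npos) {     // only trim fractional zeros
        s.erase(s.find_last_not_of('0') + 1);   // drop trailing zeros
        if (s.back() == '.')
            s.pop_back();                       // drop a bare trailing point
    }
    return s;
}

int main() {
    printf("%s\n", format_trimmed(5.55000, 5).c_str());   // prints 5.55
}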
std::cout.precision(n);
// where n is the number of significant digits to display (with the default format).
With the default (non-fixed) format, trailing zeros after the decimal point are dropped automatically.
e.g. std::cout.precision(5);
then even if my value is 5.55000,
only 5.55 will be printed.
The obvious solution is to put all digits in a char[N] and check the last digit before printing. But I bet you have thought of that yourself.
The only other observation I can offer is that the decimal expansion of 2^(-n) has exactly n digits after the decimal point, and the n-th one is nonzero.
So if the last set bit of the binary fraction is 2^(-n), there will be exactly n decimal digits after the point, the last of them nonzero. Looking at the binary representation therefore tells you something about the decimal representation.
However, this is only a partial solution, as rounding during printing could still introduce trailing zeros.
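A quick check of that property (a sketch):

#include <cstdio>

int main() {
    // 2^-n has exactly n decimal digits, and the n-th one is nonzero (it is 5)
    for (int n = 1; n <= 6; ++n)
        printf("2^-%d = %.*f\n", n, n, 1.0 / (1 << n));   // 0.5, 0.25, 0.125, ...
}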

Precision using VariantCopyInd

I am using VariantCopyInd. The source contains 1111.199999999; however, after VariantCopyInd the value appears in the destination rounded off to 1111.200000. I would like to retain the original value. How can this be achieved?
This has nothing to do with VariantCopyInd; it is merely the fact that the literal, as it exists in the code, has no exact representation in the floating-point format used internally by COM VARIANTs.
Therefore, there is no way to achieve what you want except to use the CURRENCY type of variant. It has limited precision; see MSDN:
http://msdn.microsoft.com/en-us/library/e305240e-9e11-4006-98cc-26f4932d2118(VS.85)
CURRENCY uses a decimal representation internally, just like the literal in your code. You will still have to initialize it indirectly (from a string, not from a float/double literal) to prevent any unwanted representation effects.
MSDN on CURRENCY:
A currency number stored as an 8-byte, two's complement integer, scaled by 10,000 to give a fixed-point number with 15 digits to the left of the decimal point and 4 digits to the right. This representation provides a range of -922,337,203,685,477.5808 to 922,337,203,685,477.5807.
The CURRENCY data type is useful for calculations involving money, or for any fixed-point calculation where accuracy is particularly important.
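A portable sketch of the CURRENCY idea (the struct and helper here are mine, not the COM API): a 64-bit integer scaled by 10,000 holds exactly 4 decimal places with no binary rounding.

#include <cstdint>
#include <cstdio>

// fixed-point value scaled by 10,000, like CURRENCY's internal form
struct Currency {
    int64_t scaled;   // value * 10000
};

// build from integer parts so no binary float is ever involved
Currency make_currency(int64_t whole, int frac4) {
    return { whole * 10000 + frac4 };
}

int main() {
    Currency c = make_currency(1111, 2000);   // exactly 1111.2000
    printf("%lld.%04lld\n",                   // positive values only in this sketch
           (long long)(c.scaled / 10000),
           (long long)(c.scaled % 10000));
}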
I found a very good explanation of this on MSDN.
It clearly indicates that any number with more than 15 significant digits will produce incorrect results.
Take 2 cases:
1) 101126.199999999 will store a correct value, since it has 15 significant digits: no conversion or precision loss.
2) 111.12345678912345 will store an incorrect value, since it has 17 significant digits: it will be converted and precision will be lost.
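A quick illustration of those two cases (a sketch; a double reliably holds 15 significant decimal digits, but not 17):

#include <cstdio>

int main() {
    // 15 significant digits: survives the round trip through a double
    printf("%.15g\n", 101126.199999999);     // prints 101126.199999999
    // 17 significant digits: the stored double may differ in the last digits
    printf("%.17g\n", 111.12345678912345);   // trailing digits not guaranteed
}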