I tried initializing an int64 variable in the following way:
let k:int64 = 4000000000;;
However, I got the following error message:
Error: Integer literal exceeds the range of representable integers of type int
How do I initialise k to a value of 4 billion? Thanks.
You should use the L suffix to indicate an int64 literal:
let k = 4000000000L;;
Alternatively, since the number exceeds the range of the native int type, you can convert it from a float:
let k = Int64.of_float 4000000000.;;
Related
How do I set a constant for the upper limit (1 ≤ n ≤ 10^16)?
For example:
const int MAX_N = 1e16;
I am getting this error:
overflow in conversion from 'double' to 'int' changes value from '1.0e+16' to '2147483647'
If you want to set a variable to a data type's max or min value, then you can use std::numeric_limits. Please read here about that.
And to get the maximum value of an int, you would write something like:
int v = std::numeric_limits<int>::max();
I have a variable FACTOR with an observation of 'FAC_11'. I am trying the following:
select distinct factor from sp500_2003
where CAST(CAST(substr(factor,5,1) AS CHAR) AS INT) = 1;
I get the error message:
Invalid character string format for type INTEGER.
although the following works fine:
select distinct factor from sp500_2003 where CAST(substr(factor,5,1) AS CHAR) = '1';
The documentation says we should be able to cast a CHAR to an INT.
bv2 stores the value as 00110001001100100000101000000000
# bv2 is initialized as
bv2 = BitVector( intVal = 0, size = 32 )
# then some bit operation is done
bv2 = bv1 ^ bv2
hex(int(bv2, 2))
This is giving me an error. However, if I directly use hex(int('00110001001100100000101000000000', 2)) it gives me a hexadecimal result.
What is wrong here?
The base argument of int() is used only for strings or bytes. BitVector has a proper __int__() method, so convert it directly:
hex(int(bv2))
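To see why the base argument matters, here is a minimal sketch using a stand-in class (hypothetical, mimicking only the __int__() behavior of BitVector, so it runs without the library):

```python
# Hypothetical stand-in for a BitVector-like object: it only
# implements __int__(), which is all int(obj) needs.
class Bits:
    def __init__(self, bitstring):
        self.bits = bitstring

    def __int__(self):
        return int(self.bits, 2)

bv = Bits('00110001001100100000101000000000')

# int(obj, 2) raises TypeError: an explicit base is only allowed
# when the first argument is a str, bytes, or bytearray.
try:
    int(bv, 2)
except TypeError as e:
    print('TypeError:', e)

# int(obj) calls __int__() and works fine.
print(hex(int(bv)))  # 0x31320a00
```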
@FRob's answer to my recent question (to_float() and dividing errors) led me to analyze float_pkg_c.vhdl, particularly the to_float method.
When trying the following operation:
variable denum : integer;
variable num : integer;
variable dividend : float (4 downto -27);
begin
dividend := to_float(num, 4, 27) / to_float(denum, 4, 27);
...
I keep getting this error: "Error (10454): VHDL syntax error at float_pkg_c.vhdl(3840): right bound of range must be a constant"
Now, at the mentioned line:
for I in fract'high downto maximum (fract'high - shift + 1, 0) loop
The variable fract is calculated based on the parameter fraction_width, which is 27 in my case, therefore a constant.
However, the shift variable is calculated based on the arg parameter (basically, a log2 of the absolute value of arg), which is the num variable in my case, therefore not a constant.
So, the error is clear, but my question is: how can I cast an integer variable to float?
This is the definition of to_float:
function to_float (
arg : INTEGER;
constant exponent_width : NATURAL := float_exponent_width; -- length of FP output exponent
constant fraction_width : NATURAL := float_fraction_width; -- length of FP output fraction
constant round_style : round_type := float_round_style) -- rounding option
What is even more confusing to me is that arg in the above definition is not required to be a constant.
After spending a few hours reading up on synthesizing loops and trying to translate the to_float with integer arg I had a thought:
library ieee;
library ieee_proposed;
use ieee_proposed.float_pkg.all;
use ieee.numeric_std.all;
entity SM is
end entity;
architecture foo of SM is
-- From float_pkg_c.vhdl line 391/3927 (package float_pkg):
-- -- to_float (signed)
-- function to_float (
-- arg : SIGNED;
-- constant exponent_width : NATURAL := float_exponent_width; -- length of FP output exponent
-- constant fraction_width : NATURAL := float_fraction_width; -- length of FP output fraction
-- constant round_style : round_type := float_round_style) -- rounding option
-- return UNRESOLVED_float is
begin
UNLABELLED:
process
variable denum : integer;
variable num : integer;
variable dividend : float (4 downto -27);
begin
denum := 42;
num := 21;
dividend := to_float(TO_SIGNED(num,32), 4, 27) / to_float(TO_SIGNED(denum,32), 4, 27);
assert dividend /= 0.5
report "dividend = " & to_string(dividend)
severity NOTE;
wait;
end process;
end architecture;
I don't think you really want to synthesize the integer version of to_float. Unfolding the loop gives you a bunch of ripple adds for decrementing shiftr and adjusting arg_int. Trying to get rid of those operations leads you to a bit array style representation of an integer.
Note there is no loop in the to_float whose arg type is signed. It's likely the TO_SIGNED calls are simply seen as defining the number of bits representing the size of integers instead of implying additional hardware. You end up with something converting bit fields and normalizing, clamping to infinity, etc.
You cast to float using the to_float function overload you are already using.
Your variables num and denum are uninitialized and default to integer'left, which is -2**31. The to_float function tries to convert negative numbers to positive to stay within the natural range of arg_int, but integer'high is limited to 2**31 - 1 and can't represent -integer'low. Set them to an initial value other than the default and see what happens.
From float_pkg_c.vhdl:
if arg < 0 then
result (exponent_width) := '1';
arg_int := -arg; -- Make it positive.
I am working with cocos2d 2.0 and iOS7.0. And while trying to get the integer value or float value of a string with larger length(usually > 10), I'm always getting some unknown outputs as below:
when string length <= 10:
NSString *amount = @"1234567890";
NSLog(@"AmountStr=|%@|", amount);
NSLog(@"Amount =|%d|", [amount integerValue]);
Output(getting correct integer value):
AmountStr=|1234567890|
Amount =|1234567890|
But, when string length >10, that is :
NSString *amount = @"12345678901"; -- added a '1' after the string, so length = 11
NSLog(@"AmountStr=|%@|", amount);
NSLog(@"Amount =|%d|", [amount integerValue]);
then I am getting the output as :
AmountStr=|12345678901| -- This is correct
Amount =|2147483647| -- But what about this..!!! :O
I have tried integerValue, intValue, and floatValue. Every time, the same error occurs. So how do I find out the int value of a string with length greater than 10? Please help me.
NSLog(@"Amount =|%lli|", [amount longLongValue]);
You're trying to print a number as an int which is larger than the largest value an int can hold (2147483647). It's not even about the number of digits: trying this with 3000000000 would reproduce the "error".
There's also the doubleValue method on NSString, which will give you more significant digits than floatValue.
Moreover, I'm a little surprised that using %d with the call to integerValue even works. intValue returns an int, but integerValue returns an NSInteger. Normally, when using format specifiers with NSInteger, you need to use %ld and cast the NSInteger to a long...
And for up to 38-digits, you can always use NSDecimalNumber.
NSDecimalNumber *myNum = [NSDecimalNumber decimalNumberWithString:amount];
NSLog(@"%@", [myNum descriptionWithLocale:[NSLocale systemLocale]]);