Unable to cast char to integer - casting

I have a variable FACTOR with an observation of 'FAC_11'. I am trying the following:
select distinct factor from sp500_2003
where CAST(CAST(substr(factor,5,1) AS CHAR) AS INT) = 1;
I get the error message:
Invalid character string format for type INTEGER.
although the following runs without error:
select distinct factor from sp500_2003 where CAST(substr(factor,5,1) AS CHAR) = '1';
The documentation says we should be able to cast a CHAR to an INT.

Related

cast(Field as INT) throwing Conversion failed error due to the decimal in the data

I have a tax_percent column with data like .75%, .5%
I need to do Sum on it.
I tried the below
Sum (cast ( left(d.pe_amt_pct, len(d.pe_amt_pct)-1) as Int) )
Error:
Conversion failed when converting the varchar value .75 to Int
Neither cast(... as int) nor convert(int, Field) works, due to the decimal in the data. How do I do the sum on this field?
I searched the forum, but didn't find anything about a decimal causing this issue.
I am using SQL Server 2012.
I am not sure whether you want the values summed as "75 + 5 + ..." or ".75 + .5 + ...".
As your data is percentages stored as strings, I am assuming the latter case: ".75%" being a fraction of one percent.
I would use the following code:
SUM(CAST(REPLACE(d.pe_amt_pct, '%', '') AS FLOAT))
I am using CAST(... AS FLOAT) so it will accept the decimal in the data.
This gives a floating-point summed value that you could then convert to an INT if required.
Hope this helps.

How to convert single char to double

I am trying to create a program that can evaluate a simple math expression like "4+4". The expression is given by the user.
The program saves it in a char* and then searches for binary operation (+,-,*,:) and does the operation.
The problem is that I can't figure out how to convert the single char into a double value.
I know there is the atof function but I want to convert single char.
Is there a way to do that without creating a char*?
A char usually represents a character. However, a single char is simply an integer in range of at least [-127,+127] (signed version) or at least [0,255] (unsigned version).
If you obtained a character that looks like a digit, the value stored in it is the character code representing it. In ASCII, digits start at code 48 (for zero) and run consecutively up to code 57 (for nine). Thus, if you take the code and subtract 48, you get the integer value. From there, converting it to double is a matter of casting.
Thus:
char digit = ...
double value = double(digit - 48);
or even better, for convenience:
char digit = ...
double value = double(digit - '0'); //'0' has a built-in value 48
Is there a way to do that without creating a char*?
Sure. You can extract the digit number from a single char as follows:
char c = '4';
double d = c - '0';
// ^^^^^^^ this expression results in a numeric value that can be converted
// to double
This uses the circumstance that certain character tables like ASCII or EBCDIC encode the digits in a continuous set of values starting at '0'.

Unknown integer conversion for string length >10 in cocos2d/Xcode applications (iOS 7.0)

I am working with cocos2d 2.0 and iOS 7.0. While trying to get the integer or float value of a string with larger length (usually > 10), I always get unexpected output, as below:
when string length <= 10:
NSString *amount = @"1234567890";
NSLog(@"AmountStr=|%@|", amount);
NSLog(@"Amount =|%d|", [amount integerValue]);
Output(getting correct integer value):
AmountStr=|1234567890|
Amount =|1234567890|
But, when string length >10, that is :
NSString *amount = @"12345678901"; -- added a '1' after the string, so length = 11
NSLog(@"AmountStr=|%@|", amount);
NSLog(@"Amount =|%d|", [amount integerValue]);
then I am getting the output as :
AmountStr=|12345678901| -- This is correct
Amount =|2147483647| -- But what about this..!!! :O
I have tried integerValue, intValue, and floatValue. Every time, the same error occurs. So how do I find out the int value of a string with length greater than 10? Please help me.
NSLog(@"Amount =|%lli|", [amount longLongValue]);
You're trying to print a number which is larger than the largest value an integer can hold. It's not even about the number of digits; trying this with 3000000000 would replicate the "error".
There's also a doubleValue method on NSString, which will give you more significant digits than floatValue.
Moreover, I'm a little surprised that using %d with the call to integerValue even works. intValue returns an int. But integerValue returns an NSInteger. Normally, when using format specifiers with NSInteger, you need to use %ld and cast the NSInteger to a long...
And for up to 38-digits, you can always use NSDecimalNumber.
NSDecimalNumber *myNum = [NSDecimalNumber decimalNumberWithString:amount];
NSLog(@"%@", [myNum descriptionWithLocale:[NSLocale systemLocale]]);

Initialising an int64 variable

I tried initializing an int64 variable in the following way :
let k:int64 = 4000000000;;
However I got the following error message :
Error: Integer literal exceeds the range of representable integers of type int
How do I initialise k to a value of 4 billion? Thanks.
You should use the L suffix to indicate an int64 literal:
let k = 4000000000L;;
Alternatively, since the number exceeds the range of int32, you can convert it from float:
let k = Int64.of_float 4000000000.;;

32bit int * 32bit int = 64 bit int?

In other words does this work as expected?
int32 i = INT_MAX-1;
int64 j = i * i;
or do I need to cast the i to 64 bit first?
You need to cast at least one of the operands to the multiply. At the point the multiply is being done, the system doesn't know you're planning to assign to an int64.
(Unless int64 is actually the native int type for your particular system, which seems unlikely)
It depends on what int32 and int64 are.
In brief, all integers are promoted to at least 'int' size (which may be 64 bits) before any arithmetic operations, and to the size of the larger operand for binary operators if this is of greater rank than an int.
How the result of an expression is used (whether or not it is stored to a wider type) has no bearing on the promotions of the constituent parts of the expression.
The basic answer is: no, it will not do what you want, but it does do what the language specifies.
Two things to note about mathematical operations:
Both operands will be the same type.
The resulting type will be the same as the operands.
If the compiler notes a mismatch between the operands, it will convert one of them so that both match (see Which variables should I typecast when doing math operations in C/C++?). Note: this is done in isolation from what happens to the result.
Given two numbers a and b, using len_a and len_b bits respectively, your output data type needs at least len_a + len_b bits.
In your above code, you have two 31-bit numbers (because INT_MAX - 1 = 0x7FFFFFFE uses 31 bits), and you will need to typecast one of them to int64_t, because otherwise the compiler will do a 32-bit multiply and overflow before converting to int64_t.
The number of bits needed for fixed point multiplication:
len_output = howManyBits( a * b )
= len_a + len_b
A quick example to show the above rule in action:
a = 7
len_a = 3
b = 7
len_b = 3
c = a * b
= 49 ( decimal )
= 0x31 ( hex )
len_c = howManyBits( 0x31 )
= 6
You can write a function to count bits. Or, for a quick sanity check, use something like Windows Calc to convert the number into binary form and count the bits used.
See: 14. Mixed use of simple integer types and memsize types.
http://www.viva64.com/art-1-2-599168895.html