How to convert single char to double - c++

I'm trying to create a program that can evaluate a simple math expression like "4+4". The expression is given by the user.
The program saves it in a char* and then searches for a binary operator (+, -, *, :) and performs the operation.
The problem is that I can't figure out how to convert a single char into a double value.
I know there is the atof function, but I want to convert a single char.
Is there a way to do that without creating a char*?

A char usually represents a character. However, a single char is simply an integer with a range of at least [-127, +127] (signed version) or at least [0, 255] (unsigned version).
If you obtained a character that looks like a digit, the value stored in it is the character code representing that digit. In ASCII, digits start at code 48 (for zero) and go up incrementally to code 57 (for nine). Thus, if you take the code and subtract 48, you get the integer value. From there, converting it to double is a matter of casting.
Thus:
char digit = ...
double value = double(digit - 48);
or even better, for convenience:
char digit = ...
double value = double(digit - '0'); // '0' has the value 48 in ASCII

Is there a way to do that without creating a char*?
Sure. You can extract the digit number from a single char as follows:
char c = '4';
double d = c - '0';
// ^^^^^^^ this expression results in a numeric value that can be converted
// to double
This relies on the fact that character encodings like ASCII and EBCDIC place the digits in a contiguous run of values starting at '0' (a property the C and C++ standards actually guarantee for the decimal digits).

Related

How to convert an array of ASCII codes to int C++

First of all, I would like to read from plain text. I have read hundreds of webpages about it and I just can't make it work. I want to read every byte of the file, and every two bytes form a number I want to store.
I want to read: 10 20.
I get: the ASCII code of 1, the ASCII code of 0, the ASCII code of a space, etc.
I tried several things, like stream.get or stream.read, and tried to convert with atoi, but then I can't concatenate the two digits; I tried sprintf but all of them failed.
Array of ASCII codes:
char ASCII[] = "10 20";
Convert to integer variables:
std::istringstream iss(ASCII); // requires #include <sstream>
int x,y;
iss >> x >> y;
Done.
Here's the working sample: http://ideone.com/y8ZRGs
If you want to do this with your own code, there are only two things you need to be able to do.
First, you need to convert from the ASCII code of a digit to the number it represents. This is as simple as subtracting '0'.
Second, you need to convert from the numerical values of the digits of a two-digit number to the number they represent together. This is simple: if T is the tens digit and U is the units digit, it's 10T + U.
So, for example:
int twoDigitNumber(char tens, char units)
{
    return 10 * (tens - '0') + (units - '0');
}

Why was the C++ string converted to int?

In the following code, I cannot understand why the string is converted to int in this way.
Why is it using a sum with '0'?
string mystring;
vector<int> myint;
mystring[i+1]=myint[i]+'0';
This code converts an int (presumably a digit) to the character that represents it.
Since the digit characters are sequential, and chars can be treated as integers, the character representing a certain digit can, in fact, be described by its distance from '0'. This way, 0 turns into the character '0', 5 turns into the character that is greater than '0' by five, and so on.
This is an efficient, old-school and dangerous method to get a char representation of a single digit. '0' will be converted to an int containing its ASCII code (0x30 for '0'), and then that is added to myint[i]. If myint[i] is 9 or lower, casting the result to a char gives you the resulting digit as text.
Things will not go as expected if you add more than 9 to '0'.
You can also get a number from its char representation:
char text = '5';
int digit = text - '0';
The '0' expression isn't of string type; it's of char type, which stores a character code and, in arithmetic operations, behaves like an integer type (an unsigned char can represent numbers from 0 to 255).
In C, strings are represented as arrays of char: static (char str[N]) or dynamic (char *str = new char[n]). String literals are written in double quotes ("string").
So '0' is a char and "0" is a char[2] (the digit plus the terminating '\0').

Casting an int to a char. Not storing the correct value

I'm trying to store a number as a character in a char vector named code
code->at(i) = static_cast<char>(distribution(generator));
However, it is not storing the value the way I think it should.
Shouldn't '\x4' be the ASCII value for 4? If not, how do I achieve that result?
Here's another vector whose values were entered correctly.
The cast changes the type, but it does not map the number to the character that represents it. You need:
code->at(i) = distribution(generator) + '0';
No. \xN does not give you the ASCII code for the character N.
\xN is the ASCII character† whose code is N (in hexadecimal form).
So, when you write '\x4', you get the [unprintable] character with the ASCII code 4. Upon conversion to an integer, this value is still 4.
If you wanted the ASCII character that looks like 4, you'd write '\x34', because hexadecimal 34 is the ASCII code of the character '4'. You could also get there using some magic, based on the digits in ASCII being contiguous and starting from '0':
code->at(i) = '0' + distribution(generator);
† Ish.

Convert char[] of hexadecimal numbers to char[] of the letters corresponding to the hexadecimal numbers in the ASCII table, and reverse it

I have a char a[] of hexadecimal characters like this:
"315c4eeaa8b5f8aaf9174145bf43e1784b8fa00dc71d885a804e5ee9fa40b16349c146fb778cdf2d3aff021dfff5b403b510d0d0455468aeb98622b137dae857553ccd8883a7bc37520e06e515d22c954eba5025b8cc57ee59418ce7dc6bc41556bdb36bbca3e8774301fbcaa3b83b220809560987815f65286764703de0f3d524400a19b159610b11ef3e"
I want to convert it to letters corresponding to each hexadecimal number like this:
68656c6c6f = hello
and store it in char b[] and then do the reverse
I don't want a block of code, please; I want an explanation of which libraries are used and how to use them.
Thanks
Assuming you are talking about ASCII codes: the first step is to find the size of b. Assuming every character is encoded by exactly 2 hexadecimal digits (for example, a tab would be 09), the size of b is simply strlen(a) / 2 + 1.
That done, you need to go through the letters of a, two by two, convert them to their integer value and store them as a string. Written as a formula, you have:
b[i] = (to_digit(a[2*i]) << 4) + to_digit(a[2*i+1])
where to_digit(x) converts '0'-'9' to 0-9 and 'a'-'f' or 'A'-'F' to 10-15.
Note that if characters below 0x10 are written with only one hexadecimal digit (the only one I can think of is tab), then instead of using 2*i as the index into a, you should keep a next_index in your loop, which is advanced by 2 if a[next_index] < '8', or by 1 otherwise. In the latter case, b[i] = to_digit(a[next_index]).
The reverse of this operation is very similar. Each character b[i] is written as:
a[2*i] = to_char(b[i] >> 4)
a[2*i+1] = to_char(b[i] & 0xf)
where to_char is the opposite of to_digit.
Converting the hexadecimal string to a character string can be done by using std::substr to get the next two characters of the hex string, then using std::stoi to convert the substring to an integer. This can be cast to a character that is added to a std::string. The std::stoi function is C++11 only; if you don't have it, you can use e.g. std::strtol.
To do the opposite, you loop over each character in the input string, cast it to an integer, and put it into a std::ostringstream preceded by manipulators so it is presented as a two-digit, zero-filled hexadecimal number. Append to the output string.
Use std::string::c_str to get an old-style C char pointer if needed.
No external library, only using the C++ standard library.
Forward:
Read two hex chars from input.
Convert to int (0..255). (hint: sscanf is one way)
Append int to output char array
Repeat 1-3 until out of chars.
Null terminate the array
Reverse:
Read single char from array
Convert to 2 hexadecimal chars (hint: sprintf is one way).
Concat buffer from (2) to final output string buffer.
Repeat 1-3 until out of chars.
Almost forgot to mention: only stdio.h and the regular C runtime are required, assuming you're using sscanf and sprintf. You could alternatively create a pair of conversion tables that would radically speed up the conversions.
Here's a simple piece of code to do the trick:
int hex_digit_value(char c)
{
    if ('0' <= c && c <= '9') { return c - '0'; }
    if ('a' <= c && c <= 'f') { return c + 10 - 'a'; }
    if ('A' <= c && c <= 'F') { return c + 10 - 'A'; }
    return -1;  // not a valid hex digit
}

std::string dehexify(std::string const & s)
{
    std::string result(s.size() / 2, '\0');
    for (std::size_t i = 0; i != s.size() / 2; ++i)
    {
        result[i] = hex_digit_value(s[2 * i]) * 16
                  + hex_digit_value(s[2 * i + 1]);
    }
    return result;
}
Usage:
char const a[] = "12AB";
std::string s = dehexify(a);
Notes:
A proper implementation would add checks that the input string length is even and that each digit is in fact a valid hex numeral.
Dehexifying has nothing to do with ASCII. It just turns any hexified sequence of nibbles into a sequence of bytes. I just use std::string as a convenient "container of bytes", which is exactly what it is.
There are dozens of answers on SO showing you how to go the other way; just search for "hexify".
Each hexadecimal digit corresponds to 4 bits, because 4 bits has 16 possible bit patterns (and there are 16 possible hex digits, each standing for a unique 4-bit pattern).
So, two hexadecimal digits correspond to 8 bits.
And on most computers nowadays (some Texas Instruments digital signal processors are an exception) a C++ char is 8 bits.
This means that each C++ char is represented by 2 hex digits.
So, simply read two hex digits at a time, convert to int using e.g. an istringstream, convert that to char, and append each char value to a std::string.
The other direction is just the opposite, but with a twist.
Because char is signed on most systems, you need to convert to unsigned char before converting that value again to hex digits.
Conversion to and from hexadecimal can be done using hex, like e.g.
cout << hex << x;
cin >> hex >> x;
for a suitable definition of x, e.g. int x
This should work for string streams as well.

Unexpected results when looking at ASCII codes in C++

The bit of code below extracts ASCII codes from characters.
When I convert characters in the normal ASCII region, I get the value I expect.
When I convert £ and € from the extended region, I get a load of 1s padding the int that I'm storing the character in.
E.g. the output of the code below is:
45 (ASCII E as expected)
FFFFFF80 (extended ASCII € as expected, but padded with ones)
It's not causing me an issue, but I'm just wondering why this happens.
Here's the code...
unsigned int asciichar[3];
string cTextToEncode = "E€";
for (unsigned int i = 0; i < cTextToEncode.length(); i++)
{
    asciichar[i] = (unsigned int)cTextToEncode[i];
    cout << hex << asciichar[i] << "\n";
}
Can anyone explain why this is?
Thanks
Depending on the implementation, a char can be either signed or unsigned. In your case chars appear to be signed, so 0x80 is interpreted as -128 instead of 128; hence, when cast to an integer, it becomes 0xFFFFFF80.
By the way, this has nothing at all to do with ASCII.
First, there's no € in ASCII (extended or otherwise) because the euro didn't exist when ASCII was created. However, several ASCII-friendly 8-bit encodings do support the € character, but the conversion is done by your source code editor (the compiler merely sees a byte which happens to represent € in your editor, but might be something else entirely on, say, a computer in Israel).
Second, (unsigned int) casts do not extract the ASCII encoding of a character. They merely convert the value of the underlying numeric char type to an unsigned integer. This causes strange things to happen when the converted value is negative: on your compiler, char happens to be signed char, and thus characters with a code above 127 end up as negative char values.
You should convert to an unsigned char first, and then to an unsigned int.
You should be careful when promoting signed values.
When promoting a signed char to a signed int, the first bit (the sign bit) is taken into account. The algorithm roughly looks like this:
1) If you have 1X-XX-XX-XX (char in binary, X being any binary digit), then the int will be 1...1-1X-XX-XX-XX (binary, starting with 24 ones) -> 0xFFFFFFYY (hex).
2) If you have 0X-XX-XX-XX (binary), then you'll have 0...0-0X-XX-XX-XX (binary, starting with 24 zeroes) -> 0x000000YY (hex).
In your case you want to force rule #2 all the time. In order to do this, you need to tell the compiler to ignore the first bit (the sign bit). For this you need to use unsigned char.