Conversion from int to char failed and console prints weird symbol - c++

I am having a weird issue and I don't know how to explain it. When I run this code it prints this symbol -> .
This is my code:
#include <iostream>
#include <cstdlib>  // for system()

int main() {
    int num = 1;
    char number = num;
    std::cout << number << std::endl;
    system("PAUSE");
    return 0;
}
I don't understand why. Normally it should convert the integer to char. I am using Dev C++ and my language standard is ISO C++11. I have been programming for 4 years now and this is the first time I've run into something like this. I hope I've explained my issue; if someone can help me, I will be grateful.

Conversion from int to char failed
Actually, int was successfully converted to char.
Normally it should convert the integer to char.
That's what it did. The result of the conversion is a char with the value 1.
Computers use a "character encoding". Each symbol that you see on the screen is encoded as a number. For example (assuming ASCII or a compatible encoding), the value of the character 'a' is 97.
A char with the value 1 is not the same as a char with the value that encodes the character '1'. As such, when you print a character with value 1, you don't see the number 1, but the character that the value 1 encodes. In ASCII and compatible encodings, 1 encodes the non-printable control character "start of heading".
I wanted to print 1 as a char.
You can do it like this:
std::cout << '1' << '\n';
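If the digit is stored in an int, you can also compute the corresponding character by offsetting from '0'. A minimal sketch (works for single digits 0-9 in ASCII-compatible encodings):
#include <iostream>

int main()
{
    int num = 1;
    char number = static_cast<char>('0' + num); // '0' + 1 == '1'
    std::cout << number << '\n';                // prints 1
    std::cout << static_cast<int>(number);      // prints 49, the underlying value
}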

It also seems that for 4 years you have misunderstood what char is. It is not directly a character, but a number; it's the encoding that turns that number into a readable character.
Essentially, char i = 1 is not the same as char i = '1' (see an ASCII table).

Related

Slicing string character correctly in C++

I'd like to count the 1s in my input. For example, 111 (1+1+1) must return 3 and 101 must return 2 (1+1).
To achieve this, I developed the following sample code:
#include <iostream>
#include <string>  // for std::string
using namespace std;

int main(){
    string S;
    cout << "input number";
    cin >> S;
    cout << "S[0]:" << S[0] << endl;
    cout << "S[1]:" << S[1] << endl;
    cout << "S[2]:" << S[2] << endl;
    int T = (int) (S[0] + S[1] + S[2]);
    cout << "T:" << T << endl;
    return 0;
}
But when I execute this code and input 111, my expected result is 3, yet it returns 147.
[ec2-user@ip-10-0-1-187 atcoder]$ ./a.out
input number
111
S[0]:1
S[1]:1
S[2]:1
T:147
What is wrong here? I am a total novice, so if someone has an opinion, please let me know. Thanks.
It's because S[0] is a char. You are adding the character values of these digits rather than their numerical values. In ASCII, the numerical digits start at value 48. In other words, each of your 3 values is exactly 48 too big.
So instead of doing 1+1+1, you're doing 49+49+49.
The simplest way to convert from a character value to a digit is to subtract 48, which is the value of '0'.
e.g., S[0] - '0'.
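Applied to the code above, the fix would be (a sketch):
int T = (S[0] - '0') + (S[1] - '0') + (S[2] - '0'); // 1 + 1 + 1 == 3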
Since your goal is to count the occurrences of a character, it makes no sense to sum the characters together. I recommend this:
std::cout << std::ranges::count(S, '1');
To explain the output that you get, characters are integers whose values represent various symbols (and non-printable control characters). The value that represents the symbol '1' is not 1. '1'+'1'+'1' is not '3'.
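For reference, a complete version of that suggestion might look like this (a sketch; std::ranges::count requires C++20 and <algorithm>):
#include <algorithm>  // std::ranges::count
#include <iostream>
#include <string>

int main()
{
    std::string S;
    std::cout << "input number";
    std::cin >> S;
    // Count occurrences of the character '1' instead of summing char values.
    std::cout << std::ranges::count(S, '1') << '\n';
}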

How to subtract integers from characters in C?

Brian Kernighan in his book The C Programming Language says
By definition, chars are just small integers, so char variables and
constants are identical to ints in arithmetic expressions.
Does this mean we can subtract a char variable from an int?
I wrote a small piece of code:
#include <stdio.h>

int main(void)
{
    int a;
    int c;
    a = 1;
    c = 1 - '0';
    printf("%d", c);
    return 0;
}
However, it gives me the output -47...
What am I doing wrong? Do the variables I assigned have the right type?
The output is to be expected. '0' is a char value that, since your compiler presumably uses the ASCII encoding, has the value 48. This is converted to int and subtracted from 1, which gives the value -47.
So the program does what it is expected to do, just not what you might hope it would do. As for what you are doing wrong, it is hard to say. I'm not sure what you expect the program to do, or what problem you are trying to solve.
The characters '0'-'9' have values 48-57 when converted to integers ('0' = 48, '1' = 49, etc.). Read more about ASCII values. When used in a numerical calculation, they are first converted to int, so 1 - '0' = 1 - 48 = -47.
You're mixing up the actual operation with the form of representation here. printf outputs the data according to the specified format: an integer, in your case. If you want to print it as a character, replace %d with %c.
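For example, the same value prints differently depending on the conversion specifier (a minimal sketch):
#include <cstdio>

int main()
{
    char c = '1';            // value 49 in ASCII
    std::printf("%d\n", c);  // prints 49, the underlying value
    std::printf("%c\n", c);  // prints 1, the character it encodes
}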
What you are doing is working with the ASCII codes of the chars; each char has an ASCII value assigned to it.
Now, playing a little with the ASCII value of each char, you can do things like:
int a;
a = 'a' - 'A';
printf("%d", a);
And get 32 as output, since the ASCII value of 'a' is 97 and that of 'A' is 65, so you have 97 - 65 = 32.
I think this gives you a clearer understanding...
#include <stdio.h>

int main(void)
{
    int a;
    a = 1;
    printf("%c", (char)(a + 48));       /* prints the character '1' */
    /* or: printf("%c", (char)(a + '0')); */
    return 0;
}

Converting an integer to ASCII values in C++

I am attempting to turn a number into letters using ASCII; at the moment I can do it one letter at a time:
EDIT: The output of an RSA encryption that I've been working on is currently in the form of an integer, I'm trying to work out how to convert it to the word/sentence which was the original input. I've nearly finished but I'm completely stuck at the last "hurdle". I'm adding context due to a comment asking why I would want to do this (or words to that effect).
EDIT: If during the encryption process I used the ASCII value - 87, all letters would be 2 digits long, eliminating the problem of some ASCII values being 3 digits and some being 2. Does this make the problem more approachable? (It limits me to only letters, but that's fine for its purpose.)
#include <string>
#include <iostream>

char returnChar(int x)
{
    return (char) x;
}

int main()
{
    std::cout << returnChar(119);
}
This converts 119 --> w.
How could I adapt this function to allow me to change "3232" --> "ww", or any other integer to ASCII characters, e.g. "32242713" --> "word"?
EDIT: I think using some kind of mod function to split the number into chunks of two digits, which could then be converted to characters, might work?
How do I overcome the problem of some ASCII values having 2 digits and some having 3? I think this problem has been solved, as described in the second edit.
If you can see that I've approached this in entirely the wrong way, could you suggest a viable alternative approach for me to try please?
Thanks for any feedback.
What you're asking for is not possible. You have a few alternatives:
Change the int to a string and put whitespace or other separator characters inside the string:
std::string test = "119 119";
Convert the total value to binary, and parse byte by byte:
unsigned int test = 30583; // 119*256+119
char a = (test>>8)&0xff;
char b = test&0xff;
Pass the data in a vector and convert one element at a time:
#include <vector>

std::vector<char> returnChar(const std::vector<int> &data){
    std::vector<char> output;
    for(unsigned int i = 0; i < data.size(); i++)
        output.push_back(char(data[i]));
    return output;
}
I would probably stick with the second method, since (a wild guess here) it shouldn't change much inside the place where you actually generate the numbers.
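For what it's worth, under the fixed two-digit scheme from the asker's second edit (ASCII value minus 87), the mod-based splitting the asker suggests does become workable. A minimal sketch (decodeChunks is a hypothetical name; the input must follow the two-digits-per-letter assumption):
#include <algorithm>
#include <iostream>
#include <string>

std::string decodeChunks(long long n)
{
    std::string out;
    while (n > 0) {
        out.push_back(static_cast<char>(n % 100 + 87)); // low two digits -> letter
        n /= 100;
    }
    std::reverse(out.begin(), out.end()); // chunks come out back to front
    return out;
}

int main()
{
    std::cout << decodeChunks(32242713) << '\n'; // prints "word"
}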

Regarding conversion of text to hex via ASCII in C++

So, I've looked up how to do conversion from text to hexadecimal according to ASCII, and I have a working solution (proposed on here). My problem is that I don't understand why it works. Here's my code:
#include <string>
#include <iostream>

int main()
{
    std::string str1 = "0123456789ABCDEF";
    std::string output[2];
    std::string input;
    std::getline(std::cin, input);
    output[0] = str1[input[0] & 15];
    output[1] = str1[input[0] >> 4];
    std::cout << output[1] << output[0] << std::endl;
}
Which is all well and good: it returns the hexadecimal value for single characters. However, what I don't understand is this:
input[0] & 15
input[0] >> 4
How can you perform bitwise operations on a character from a string? And why does it oh-so-nicely return the exact values we're after?
Thanks for any help! :)
In C++ a char is 8 bits long on virtually all platforms (the standard guarantees at least 8).
If you '&' it with 15 (binary 1111), only the least significant 4 bits remain; these form the first (low) hex digit.
When you apply a right shift by 4, it is equivalent to dividing the character's value by 16. This gives you the most significant 4 bits, the second (high) digit.
Once the two digit values are calculated, the required character is picked from the constant string str1, which holds all the hex digit characters in their respective positions.
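As a concrete walk-through (a sketch; 'E' is 0x45, i.e. 69, in ASCII):
#include <iostream>
#include <string>

int main()
{
    const std::string str1 = "0123456789ABCDEF";
    char c = 'E';                         // 0x45, i.e. 69 decimal
    char low  = str1[c & 15];             // 69 & 15 == 5 -> '5'
    char high = str1[(c >> 4) & 15];      // 69 >> 4 == 4 -> '4'
    std::cout << high << low << '\n';     // prints 45
}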
"Characters in a string" are not characters (individual strings of one character only). In some programming languages they are. In Javascript, for example,
var string = "testing 1,2,3";
var character = string[0];
returns "t".
In C and C++, however, 'strings' are arrays of 8-bit characters; each element of the array is an 8-bit number from 0..255.
Characters are just integers. In ASCII the character '0' is the integer 48. C++ makes this conversion implicitly in many contexts, including the one in your code.

Unexpected results when looking at ASCII codes in C++

The bit of code below is extracting ASCII codes from characters.
When I convert characters in the normal ASCII region, I get the values I expect.
When I convert £ and € from the extended region, I get a load of 1s padding the int that I'm storing the character in.
e.g. the output of the code below is:
45 (ASCII 'E', as expected)
FFFFFF80 (extended ASCII '€' as expected, but padded with ones)
It's not causing me an issue but I'm just wondering why this happens.
Here's the code...
#include <iostream>
#include <string>
using namespace std;

int main() {
    unsigned int asciichar[3];
    string cTextToEncode = "E€";
    for (unsigned int i = 0; i < cTextToEncode.length(); i++) {
        asciichar[i] = (unsigned int)cTextToEncode[i];
        cout << hex << asciichar[i] << "\n";
    }
}
Can anyone explain why this is?
Thanks
Depending on the implementation, a char can be either signed or unsigned. In your case it appears to be signed, so 0x80 is interpreted as -128 instead of 128; hence, when cast to an integer, it becomes 0xFFFFFF80.
By the way, this has nothing at all to do with ASCII.
First, there's no € in ASCII (extended or otherwise) because the euro didn't exist when ASCII was created. However, several ASCII-friendly 8-bit encodings do support the € character, but the conversion is done by your source code editor (the compiler merely sees a byte which happens to represent € in your editor, but might be something else entirely on, say, a computer in Israel).
Second, (unsigned int) casts do not extract the ASCII encoding of a character. They merely convert the value of the underlying numeric char type to an unsigned integer. This causes strange things to happen when the converted value is negative - on your compiler, char happens to be signed char and thus characters with an ASCII value larger than 127 end up being negative char values.
You should convert to an unsigned char first, and then to an unsigned int.
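In the loop above, that double conversion would look like this (a sketch):
asciichar[i] = (unsigned int)(unsigned char)cTextToEncode[i]; // 0x80 stays 0x80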
You should be careful when promoting signed values.
When promoting a signed char to a signed int, the first bit (the sign bit) is taken into account. The algorithm roughly looks like this:
1) If you have 1XXXXXXX (the char in binary, X being any binary digit), then the int will start with 24 ones: 1...11XXXXXXX (binary) -> 0xFFFFFFYY (hex).
2) If you have 0XXXXXXX (binary), then you'll get 24 leading zeroes: 0...00XXXXXXX (binary) -> 0x000000YY (hex).
In your case you want to force rule #2 all the time. To do this, you need to tell the compiler to ignore the first bit (the sign bit). For that, you need to use unsigned char.
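Putting the suggested fix into the original loop might look like this (a sketch, assuming a single-byte encoding such as Windows-1252, where € is the byte 0x80):
#include <iostream>
#include <string>

int main()
{
    std::string s = "E\x80";   // "\x80" stands in for € in Windows-1252
    for (unsigned char c : s)  // go through unsigned char to avoid sign extension
        std::cout << std::hex << static_cast<unsigned int>(c) << '\n';
    // prints 45 then 80, with no FFFFFF padding
}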