I was trying the Caesar cipher problem and got stuck on a very beginner-looking bug, but I don't know why my code is behaving that way. I added an integer to a char and expected it to increase in value, but I get a negative number instead. Here is my code. I found a way around it, but why does this code behave this way?
#include <iostream>
using std::cout; using std::endl;

int main()
{
    char ch = 'w';
    int temp;

    temp = int(ch) + 9;
    ch = temp;

    cout << temp << endl;
    cout << (int)ch;
    return 0;
}
Output:
128
-128
A signed char type can typically hold values from -128 to 127.
The value 128 doesn't fit, so it wraps around to -128.
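One way around it is to keep the shifted value in a wider type, or to use unsigned char if it must stay in a single byte. A minimal sketch (my own, not from the original post):

#include <iostream>

int main()
{
    char ch = 'w';

    // 'w' is 119, so 119 + 9 = 128, which fits comfortably in an int.
    int shifted = ch + 9;
    std::cout << shifted << std::endl;                // 128

    // If the result must stay in one byte, use unsigned char (range 0..255 on
    // typical platforms) so 128 doesn't wrap around to -128.
    unsigned char uch = static_cast<unsigned char>(shifted);
    std::cout << static_cast<int>(uch) << std::endl;  // 128
    return 0;
}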
So I am trying to write a program that works as a cipher. It will take in the word to encode as input and output (print) the coded word. The problematic snippet of my code is the for loop.
#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void)
{
    char input[] = "hello";
    printf("hello\n");
    printf("ciphertext: ");
    for (int i = 0; i < 5; i++)
    {
        if (isalpha(input[i]))
        {
            int current = input[i];
            int cypher = ((current + 1) % 26) + current;
            char out = (char)cypher;
            printf("%c", out);
        }
        else
        {
            printf("%c", input[i]);
        }
    }
    printf("\n");
}
The problem I run into when debugging is that the value that ends up being stored in "out" seems correct, but when it comes to printing it, it shows something else entirely. I did look up quite a few things that I found on here, such as writing the code like this:
char out = (char)cypher;
char out = cypher + '0';
and so on, but to no avail. The output should be ifmmp, but instead I get j~rrx.
Anything would help! thanks :)
You're getting the correct answer. 105 is the ASCII value of 'i'; there is no difference. More precisely, char is an integer type. On virtually all compilers it is 8 bits in size, so an unsigned char can have a value between 0 and 255 and a signed char a value between -128 and +127.
So when your out variable has the value 105, it has the value 'i'.
The output of your printf will be:
i
But if you look at the out variable in a debugger, you might see 105, depending on the debugger.
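As a minimal illustration (my own sketch, not the asker's code): the same 8-bit value prints as a letter or as a number depending only on the format you ask for.

#include <cstdio>

int main()
{
    char out = 105;            // the value the debugger reports

    std::printf("%c\n", out);  // prints: i   (the bits interpreted as a character)
    std::printf("%d\n", out);  // prints: 105 (the same bits printed as a number)
    return 0;
}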
I have a similar problem to the one mentioned in this question:
C++: Getting random negative values when converting char to int
but I just want to know how to solve the problem without making the code slower. I welcome all suggestions.
I tried making the char an unsigned char, but that didn't work. Then I tried to use this code:
const char char_max = (char)(((unsigned char) char(-1)) / 2);
c = (num & char_max);
but the output was different and I don't know exactly what that code does.
I am still a student.
cout << "\nEnter any string : ";
cin >> s1;
for (char& c : s1)
{
num = c;
num= (num+rand())%256;
// const char char_max = (char)(((unsigned char) char(-1)) / 2);
//c = (num & char_max)
c = num;
}
cout <<"\n"<< s1;
I expect c to hold normal ASCII values so that I can use it later to retrieve the original int value.
Thanks!!
The language doesn't tell us whether char is signed or unsigned; this depends on your platform. Apparently, on yours (like many others), it is signed.
That means, assuming 8-bit bytes, its range is [-128, 127], not [0, 255].
Use an unsigned char throughout to deal with numbers in the range [0, 255].
(I can't suggest a specific change to your program, because the snippet doesn't show the whole thing.)
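Still, here is a minimal sketch of that suggestion applied to the loop above, under the assumption that the per-character offsets can be regenerated later (for example by reseeding rand() with the same seed) so the transformation can be reversed:

#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string s1;
    std::cout << "\nEnter any string : ";
    std::cin >> s1;

    std::srand(42);  // fixed seed (an assumption) so the same offsets can be reproduced when decoding

    for (char& c : s1)
    {
        // Read the byte as unsigned char so it is always in [0, 255],
        // even on platforms where plain char is signed.
        unsigned char num = static_cast<unsigned char>(c);
        num = static_cast<unsigned char>((num + std::rand()) % 256);
        c = static_cast<char>(num);  // store the byte back; read it as unsigned char again when decoding
    }

    std::cout << "\n" << s1 << "\n";
    return 0;
}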
The following are different programs/scenarios using unsigned int, with their respective outputs. I don't know why some of them don't work as intended.
Expected output: 2
Program 1:
#include <iostream>

int main()
{
    int value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 2:
#include <iostream>

int main()
{
    int value;
    value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 3:
#include <iostream>

int main()
{
    int value;
    std::cin >> value; // 2
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 2
Can someone explain why Program 1 and Program 2 don't work? Sorry, I'm new at coding.
You are expecting the cast from int to unsigned int to simply change the sign of a negative value while maintaining its magnitude. But that isn't how it works in C or C++. Unsigned integers follow modular arithmetic, meaning that assigning or initializing from negative values such as -1 or -2 wraps around to the largest and second-largest unsigned values, and so on. So, for example, these two are equivalent:
unsigned int n = -1;
unsigned int m = -2;
and
unsigned int n = std::numeric_limits<unsigned int>::max();
unsigned int m = std::numeric_limits<unsigned int>::max() - 1;
Also note that there is no substantial difference between programs 1 and 2. It is all down to the sign of the value used to initialize or assign to the unsigned integer.
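A small self-contained check of that equivalence (my own sketch, not from the original answer):

#include <iostream>
#include <limits>

int main()
{
    unsigned int n = -1;  // wraps around to the largest unsigned int value
    unsigned int m = -2;  // wraps around to the second-largest

    std::cout << (n == std::numeric_limits<unsigned int>::max()) << "\n";      // 1
    std::cout << (m == std::numeric_limits<unsigned int>::max() - 1) << "\n";  // 1
    std::cout << n << " " << m << "\n";  // e.g. 4294967295 4294967294 with 32-bit unsigned int
    return 0;
}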
Casting a value from signed to unsigned changes how the individual bits of the value are interpreted. Let's have a look at a simple example with an 8-bit value, like char and unsigned char.
The values of a signed char range from -128 to 127. Including 0, that is 256 (2^8) values. Usually the first bit indicates whether the value is negative or positive, so only the remaining 7 bits describe the magnitude.
An unsigned char can't take any negative values because there is no bit to determine whether the value should be negative or positive. Therefore its value ranges from 0 to 255.
When all bits are set (1111 1111), the unsigned char has the value 255. A plain signed char, however, treats the first bit as the indicator of a negative value; sticking to two's complement, this value is -1.
This is the reason the cast from int to unsigned int does not do what you expected it to do, but it does exactly what it's supposed to do.
EDIT
If you just want to switch from negative to positive values, write yourself a simple function like this:
#include <cstdint>

uint32_t makeUnsigned(int32_t toCast)
{
    // Flip negative values to positive before the cast.
    // (Edge case: negating INT32_MIN itself is not representable in int32_t.)
    if (toCast < 0)
        toCast *= -1;
    return static_cast<uint32_t>(toCast);
}
This way you will convert your incoming int into an unsigned int holding its magnitude (a uint32_t itself can represent values up to 2^32 - 1).
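A quick usage sketch of the function above, using the -2 from the question (my own example):

#include <cstdint>
#include <iostream>

uint32_t makeUnsigned(int32_t toCast)
{
    if (toCast < 0)
        toCast *= -1;
    return static_cast<uint32_t>(toCast);
}

int main()
{
    std::cout << makeUnsigned(-2) << "\n";  // 2, instead of the 4294967294 a plain cast gives
    std::cout << makeUnsigned(7) << "\n";   // 7, positive values pass through unchanged
    return 0;
}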
I was looking through C++ Integer Overflow and Promotion, tried to replicate it, and finally ended up with this:
#include <iostream>
#include <stdio.h>
using namespace std;

int main() {
    int i = -15;
    unsigned int j = 10;
    cout << i + j << endl;   // 4294967291
    printf("%d\n", i + j);   // -5 (!)
    printf("%u\n", i + j);   // 4294967291
    return 0;
}
The cout does what I expected after reading the post mentioned above, as does the second printf: both print 4294967291. The first printf, however, prints -5. Now, my guess is that this is printf simply interpreting the unsigned value of 4294967291 as a signed value, ending up with -5 (which would fit seeing that the 2's complement of 4294967291 is 11...11011), but I'm not 100% convinced that I did not overlook anything. So, am I right or is something else happening here?
Yes, you got it right. That's why printf() is not type-safe: it interprets its arguments strictly according to the format string, ignoring their actual types.
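A minimal sketch of what is going on (my own, not from the answer): the addition is done in unsigned arithmetic, and "%d" merely reinterprets the resulting bit pattern as signed.

#include <cstdio>

int main()
{
    int i = -15;
    unsigned int j = 10;

    // The usual arithmetic conversions turn i into unsigned int before the
    // addition, so on a typical 32-bit-int platform the sum wraps modulo 2^32
    // and becomes 4294967291.
    unsigned int sum = i + j;
    std::printf("%u\n", sum);                       // 4294967291

    // Reinterpreting those same 32 bits as a signed value gives -5, which is
    // what "%d" effectively showed. (This conversion is implementation-defined
    // before C++20 and modular from C++20 on.)
    std::printf("%d\n", static_cast<int>(sum));     // -5
    return 0;
}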
Though the two snippets below differ slightly in how the find variable is manipulated, the output is still the same. Why?
First Snippet
#include <iostream>
using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= -1;
    cout << find;
    return 0;
}
Second Snippet
#include <iostream>
using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= 1;
    cout << find;
    return 0;
}
Output for both snippets:
-2147483648
(as observed on Ideone)
In both your samples, assuming 32-bit ints, you're invoking undefined behavior, as pointed out in Why does left shift operation invoke Undefined Behaviour when the left side operand has negative value?
Why? Because number is a signed int with 32 bits of storage, and (3 << 31) is not representable in that type.
Once you're in undefined behavior territory, the compiler can do as it pleases.
(You can't rely on any of the following because it is UB - this is just an observation of what your compiler appears to be doing).
In this case it looks like the compiler is doing the left shift anyway, resulting in 0x80000000 as the binary representation. This happens to be the two's complement representation of INT_MIN. So the second snippet is not surprising.
Why does the first one output the same thing? In two's complement, INT_MIN would be -2^31, but the maximum value is 2^31 - 1. INT_MIN * -1 would be 2^31 (if it were representable). And guess what representation that would have? 0x80000000. Back to where you started!
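For comparison, a minimal sketch (my own, not from the answer) that does the shift in unsigned arithmetic, where the wraparound is well-defined, and only converts at the end:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t number = 3;

    // Shifting an unsigned 32-bit value is well-defined: the result is reduced
    // modulo 2^32, so 3 << 31 becomes 0x80000000.
    std::uint32_t shifted = number << 31;

    std::cout << std::hex << shifted << "\n";  // 80000000
    std::cout << std::dec << shifted << "\n";  // 2147483648

    // Converting that bit pattern back to a 32-bit signed int typically yields
    // INT_MIN (implementation-defined before C++20, modular from C++20 on).
    std::cout << static_cast<std::int32_t>(shifted) << "\n";  // -2147483648 in practice
    return 0;
}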