I was looking through C++ Integer Overflow and Promotion, tried to replicate it, and finally ended up with this:
#include <iostream>
#include <stdio.h>

using namespace std;

int main() {
    int i = -15;
    unsigned int j = 10;

    cout << i+j << endl;  // 4294967291
    printf("%d\n", i+j);  // -5 (!)
    printf("%u\n", i+j);  // 4294967291

    return 0;
}
The cout does what I expected after reading the post mentioned above, as does the second printf: both print 4294967291. The first printf, however, prints -5. My guess is that printf is simply reinterpreting the unsigned value 4294967291 as a signed value, ending up with -5 (which fits, since the bit pattern of 4294967291, 11...11011, is the two's complement representation of -5), but I'm not 100% sure I didn't overlook anything. So, am I right, or is something else happening here?
Yes, you got it right. That's why printf() is generally unsafe: it interprets its arguments strictly according to the format string, ignoring their actual types. (Strictly speaking, passing a value that doesn't fit the conversion specifier is undefined behavior; in practice, on common platforms, printf just reinterprets the same bits.)
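You can see the same bit-pattern reuse without going through printf's varargs. A minimal sketch, assuming 32-bit int (the cast to unsigned is well-defined modular arithmetic; since C++20 the cast back is too):

#include <iostream>

int main() {
    int i = -5;
    unsigned int u = static_cast<unsigned int>(i); // well-defined: 2^32 - 5
    std::cout << u << "\n";                        // 4294967291
    std::cout << static_cast<int>(u) << "\n";      // -5 again
    return 0;
}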
So I am trying to write code that works as a cypher. It takes the word to encode as input and outputs (prints) the coded word. The problematic snippet of my code is the for loop.
#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void)
{
    char input[] = "hello";
    printf("hello\n");
    printf("ciphertext: ");
    for (int i = 0; i < 5; i++)
    {
        if (isalpha(input[i]))
        {
            int current = input[i];
            int cypher = ((current + 1) % 26) + current;
            char out = (char)cypher;
            printf("%c", out);
        }
        else
        {
            printf("%c", input[i]);
        }
    }
    printf("\n");
}
The problem that I run into when debugging is that the value that ends up being stored in "out" seems correct; however, when it comes to printing it, it shows something else entirely. I did look up quite a few things that I found on here, such as writing the code like this:
char out = (char)cypher;
char out = cypher + '0';
and so on, but to no avail. The output should be ifmmp, but instead I get j~rrx.
Anything would help! Thanks :)
You're getting the correct answer. 105 is the ASCII value of 'i'; there is no difference. More precisely, char is an integer type. On virtually all compilers it is 8 bits in size, so an unsigned char can hold values between 0 and 255, and a signed char between -128 and +127.
So when your out variable has the value 105, it has the value 'i'.
The output of your printf will be:
i
But if you look at the out variable in a debugger, you might see 105, depending on the debugger.
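To see both views of the same value side by side, here is a minimal sketch, printing one char first as a glyph and then as a number:

#include <stdio.h>

int main(void)
{
    char out = 105;       // same bits as 'i'
    printf("%c\n", out);  // prints the glyph: i
    printf("%d\n", out);  // prints the numeric value: 105
    return 0;
}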
Please take a look at this simple program:
#include <iostream>
#include <vector>

using namespace std;

int main() {
    vector<int> a;
    std::cout << "vector size " << a.size() << std::endl;
    int b = -1;
    if (b < a.size())
        std::cout << "Less";
    else
        std::cout << "Greater";
    return 0;
}
I'm confused by the fact that it outputs "Greater", even though it's obvious that -1 is less than 0. I understand that the size method returns an unsigned value, but the comparison is still between -1 and 0. So what's going on? Can anyone explain this?
Because the size of a vector is an unsigned integral type. You are comparing an unsigned type with a signed one, so the negative signed integer is converted to unsigned, and its two's complement bit pattern corresponds to a large unsigned value.
This code sample shows the same behaviour that you are seeing:
#include <iostream>

int main()
{
    std::cout << std::boolalpha;
    unsigned int a = 0;
    int b = -1;
    std::cout << (b < a) << "\n";
}
output:
false
The signature for vector::size() is:
size_type size() const noexcept;
size_type is an unsigned integral type. When a signed and an unsigned integer are compared, the signed one is converted to unsigned. Here, -1 is negative, so it wraps around, effectively yielding the maximum value representable by size_type. Hence it compares as greater than zero.
-1 converted to unsigned is a larger value than zero: in a signed integer the high bit indicates a negative value, but unsigned arithmetic uses that bit to extend the range of representable numbers, so it is no longer a sign bit. The comparison is therefore done as (unsigned int)-1 < 0, which is false.
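If you actually need the mathematical comparison, one option (assuming a C++20 compiler) is std::cmp_less from <utility>, which compares signed and unsigned integers by value:

#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::vector<int> a;
    int b = -1;
    // std::cmp_less compares the mathematical values, so -1 < 0 holds
    std::cout << std::boolalpha << std::cmp_less(b, a.size()) << "\n"; // true
    return 0;
}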
I was trying the Caesar cipher problem and got stuck on what looks like a very beginner-level bug, but I don't know why my code behaves this way. I added an integer to a char and expected its value to increase, but I got a negative number instead. Here is my code. I found a way around it, but why does this code behave this way?
#include <iostream>

using std::cout; using std::endl;

int main()
{
    char ch = 'w';
    int temp;
    temp = int(ch) + 9;
    ch = temp;
    cout << temp << endl;
    cout << (int)ch;
    return 0;
}
Output:
128
-128
A signed char can typically hold values from -128 to 127. The value 128 does not fit; on typical two's complement implementations the conversion wraps around to -128.
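One way to avoid the wraparound, as a sketch assuming 8-bit chars: do the arithmetic in int, and store the result in an unsigned char, which holds 0 to 255:

#include <iostream>

int main()
{
    unsigned char ch = 'w';
    int temp = ch + 9;                     // 119 + 9 = 128, comfortably fits in int
    ch = static_cast<unsigned char>(temp); // 128 also fits in unsigned char (0..255)
    std::cout << temp << std::endl;        // 128
    std::cout << (int)ch;                  // 128, not -128
    return 0;
}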
I have been learning C++ for about a month, and I made a binary converter. But when I input a large number, such as 200000000, it returns a wrong answer. So I changed all the ints into doubles and adjusted some of the code, but it was still wrong. Then I searched for a solution on Google, but found nothing. Please point out my mistake! Thanks! Below is the code.
(Sorry for my poor English; it's my first time asking a question on this website. If I did anything wrong, please point it out and forgive me.)
#include <iostream>
#include <cmath>

using namespace std;

int main() {
    double m, n, t, x, i = 0;
    cin >> n;
    m = n;
    do {
        if (fmod(n,2) == 0) n /= 2;
        else n = (n-1)/2;
        i++;
    } while (n >= 1);
    for (t = 0; t < i; t++) {
        x += fmod(m,2) * pow(10,t);
        if (fmod(m,2) == 0) m /= 2;
        else m = (m-1)/2;
    }
    printf("%.f\n", x);
    return 0;
}
You should change double to unsigned long long, but that alone won't help, because of the following code:
x += fmod(m,2) * pow(10,t);
if (fmod(m,2) == 0) m /= 2;
else m = (m-1)/2;
This code builds the binary representation of m as an ordinary number, one decimal digit per bit, so that it can be printed. But since it shifts the bits into powers of 10, the result quickly goes beyond the range of even unsigned long long: a 64-bit input would need up to 64 decimal digits, while unsigned long long holds only about 20.
It can be fixed by declaring x as a string and changing fmod to %. Then change

x += fmod(m,2) * pow(10,t);

to

x = ((m % 2) ? "1" : "0") + x;

and print with cout << x << endl; at the end. Good luck!
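Putting those pieces together, a minimal sketch of the fixed converter, assuming the input fits in unsigned long long:

#include <iostream>
#include <string>

using namespace std;

int main() {
    unsigned long long m;
    cin >> m;
    string x;
    do {
        x = ((m % 2) ? "1" : "0") + x; // prepend the next binary digit
        m /= 2;                        // integer division drops that digit
    } while (m > 0);                   // do-while so an input of 0 prints "0"
    cout << x << endl;
    return 0;
}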
Although the two snippets below differ slightly in how they manipulate the find variable, the output is the same. Why?
First Snippet
#include <iostream>

using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= -1;
    cout << find;
    return 0;
}
Second Snippet
#include <iostream>

using namespace std;

int main()
{
    int number = 3, find;
    find = number << 31;
    find *= 1;
    cout << find;
    return 0;
}
Output for both snippets:
-2147483648
(according to Ideone: 1, 2)
In both your samples, assuming 32-bit ints, you're invoking undefined behavior, as pointed out in Why does left shift operation invoke Undefined Behaviour when the left side operand has negative value?

Why? Because number is a signed int with 32 bits of storage, and (3 << 31) is not representable in that type.
Once you're in undefined behavior territory, the compiler can do as it pleases.
(You can't rely on any of the following because it is UB - this is just an observation of what your compiler appears to be doing).
In this case it looks like the compiler is doing the left shift anyway, resulting in 0x80000000 as the binary representation. This happens to be the two's complement representation of INT_MIN. So the second snippet is not surprising.
Why does the first one output the same thing? In two's complement, INT_MIN is -2^31, but the maximum value is 2^31 - 1, so INT_MIN * -1 would be 2^31 (if it were representable). And guess what representation that would have? 0x80000000. Back to where you started!
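You can watch the same wraparound without invoking undefined behavior by doing both the shift and the negation in unsigned arithmetic, where they are well-defined. A sketch, assuming 32-bit int:

#include <iostream>

int main() {
    unsigned int u = 3u << 31;          // well-defined on unsigned: 0x80000000
    std::cout << u << "\n";             // 2147483648
    std::cout << (0u - u == u) << "\n"; // 1: negating this pattern gives it back
    return 0;
}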