So I am still pretty new to programming and trying to learn C++, so I'm slowly figuring things out.
Right now I am attempting the credit card number problem and trying to isolate each digit in the number, like so:
#include <iostream>
using namespace std;

int main()
{
    int creditcardnumber;
    cout << "Enter a 16-digit credit card number: "; // asks for credit card number
    cin >> creditcardnumber;
    cin.ignore();
    int d16 = creditcardnumber % 10; // last digit
    cout << d16;
}
At lower numbers like 123456 it returns 6, which is what I want,
but at a higher number like 12387128374 it returns 7.
I started noticing that it keeps returning 7 every time for higher numbers. Can anyone explain this and how to resolve it?
That's because the biggest value of an int (assuming int is 4 bytes) is 2147483647; your test value exceeds it by far.
Try using a bigger type, like long or long long.
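For example, a minimal sketch of that change (long long is guaranteed to be at least 64 bits, so a 16-digit value fits):

#include <iostream>
using namespace std;

int main()
{
    long long creditcardnumber = 0;
    cout << "Enter a 16-digit credit card number: ";
    cin >> creditcardnumber;
    int d16 = creditcardnumber % 10; // last digit
    cout << d16;
}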
Here is a simple test program to illustrate your problem:
#include <iostream>

int main() {
    int x;
    std::cin >> x;
    std::cout << x << std::endl;
    return 0;
}
If the input stream is
123456789123
(for example), the output is
2147483647
What this shows is that if you try to enter a number that is larger than
the largest possible value of an int, you will end up with just
the largest int instead, which is 2147483647.
And every larger number likewise will give the same result.
And of course 2147483647%10 evaluates to 7.
But in the end, I think the most relevant point was already made in
a comment: there is almost surely no good reason for you to store
the credit card "number" in a numeric C++ type;
std::string would be more appropriate.
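For illustration, here is a minimal sketch of that string-based approach (no validation beyond a non-empty check; the variable names are just examples):

#include <iostream>
#include <string>

int main()
{
    std::cout << "Enter a 16-digit credit card number: ";
    std::string card;
    std::cin >> card;                  // kept as text, so no overflow is possible

    if (!card.empty()) {
        char lastDigit = card.back(); // the 16th digit, e.g. '6'
        std::cout << lastDigit << '\n';
    }
}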
A 16-digit number requires about 54 bits to represent. int very probably isn't big enough. long may or may not be, depending on your implementation.
If you have a sufficiently modern C++ implementation, long long is guaranteed to be at least 64 bits wide, so you can probably use that.
As for why you're always getting 7, apparently an overflow in
cin >> creditcardnumber;
causes the maximum representable int value to be stored in creditcardnumber. On a typical modern system with 32-bit int, that value is 2^31 - 1, or 2147483647. (On a system with 16-bit int, it happens that the maximum value is 32767, and you'd also get 7 as the last digit on overflow.)
On overflow, cin >> n, where n is some type of integer, is defined to set n to the minimum or maximum value of its type. It also sets failbit, so you can detect the error. Reference: http://en.cppreference.com/w/cpp/io/basic_istream/operator_gtgt
Note that even if you use long long, which can hold a 16-digit decimal number, you can still get an overflow if the user enters an invalid number that exceeds 2^63 - 1. (If that behaves the same way, the last digit also happens to be 7; there seems to be some interesting mathematics at work here, but it's not relevant to your problem.)
You're experiencing integer overflow. You can check whether input succeeded this way:
if (cin >> creditcardnumber) {
    // do your things
} else {
    cout << "Please enter a valid integer" << endl;
    cin.clear();                                          // reset the stream's failed state
    cin.ignore(numeric_limits<streamsize>::max(), '\n');  // discard the rest of the bad line (needs <limits>)
}
I found a strange behavior of std::cin (using VS 17 under Win 10): when a large number is entered, the number that std::cin reads is close to it but different.
Is there any kind of approximation done with large numbers?
How do I get back exactly the same large number that was entered?
double n(0);
cout << "Enter a number > 0 (0 to exit): ";
cin.clear();                           // does not help!
cin >> n;                              // enter 2361235441021745907775
printf("Selected number %.0f \n", n);  // prints 2361235441021746151424; why?
Output
Enter a number > 0 (0 to exit):
2361235441021745907775
Selected number 2361235441021746151424
You need to learn about the number of significant digits. A double can hold very large values, but it will only keep track of so many digits. Do a search for C++ double significant digits and read any of the 400 web pages that talk about it.
If you need more digits than that, you need to use something other than double. If you know it's an integer, not floating point, you can use long long int, which is at least 8 bytes and can hold up to 2^63 - 1. If you know it's a positive number, you can make it unsigned long long int and you get the range 0 to at least 18,446,744,073,709,551,615 (2^64 - 1).
If you need even more digits, you need to find a library that supports arbitrarily long integers. A google for c++ arbitrary precision integer will give you some guidance.
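If it helps, here is a small sketch that prints the relevant limits on your own platform (the exact output varies by implementation):

#include <iostream>
#include <limits>

int main()
{
    // digits10 is the number of decimal digits a double is guaranteed to round-trip
    std::cout << "double significant digits: "
              << std::numeric_limits<double>::digits10 << '\n';          // typically 15
    std::cout << "unsigned long long max:    "
              << std::numeric_limits<unsigned long long>::max() << '\n'; // at least 18446744073709551615
}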
I have recently started learning C++ and have made this small program
#include <iostream> // for std::cout << / std::cin >> / std::endl; / '\n'
#include <cstdlib> // for EXIT_SUCCESS and EXIT_FAILURE
int input()
{
    int imp_num{ 0 };
    std::cin >> imp_num;
    return imp_num;
}

void output(int result)
{
    std::cout << "The dubble of that number is : " << result << '\n';
}

int main()
{
    std::cout << "Enter a number: ";
    int inp_num{ input() };    // asks the user to enter a number and saves it
    int result{ inp_num * 2 }; // calculates the result
    output(result);            // outputs the result
    system("pause");           // prevents the console from terminating
    return 0;
}
The problem occurs when the number received is 10 digits or more. At that point the program just outputs a number (usually -2) that always remains the same no matter what I enter, and it only changes if I recompile the source code.
Enter a number: 23213231231231212312
The dubble of that number is : -2
Press any key to continue . . .
Enter a number: 12311111111111111111111111
The dubble of that number is : -2
Press any key to continue . . .
I recompile the source code
Enter a number: 1231212123133333333333321
The dubble of that number is : 3259
Press any key to continue . . .
Changing all the ints to int64_t doesn't solve the problem, but weirdly the same output of -2 shows up here too.
Enter a number: 1231212123133333333333321
The dubble of that number is : -2
Press any key to continue . . .
I don't understand why, out of all the numbers, -2 appears if an integer overflow is happening. I thought the numbers should wrap around.
The value 1231212123133333333333321 you gave is drastically larger than even a uint64_t can hold (it overflows). In my case, the maximum range of a uint64_t (an 8-byte type) is:
0 to +18446744073709551615
To know the limits on your platform in practice, use the C++ library header limits:
#include <limits>
std::cout << std::numeric_limits<uint64_t>::max() << std::endl;
Notice that it may vary on different computer architectures.
int, which is a signed (usually 32-bit) number, can only hold values from -2147483648 to 2147483647. When you try to input such a big number, std::cin fails to do so, and instead sets the maximum possible value.
Now, you're trying to multiply the biggest possible value by 2; that causes an overflow and UB, so technically anything can happen in your code, but what seems to be happening with the compiler you're using is the following.
int is two's complement, which means that the biggest value has the following bit representation:
01111111111111111111111111111111, or 0x7fffffff in hex
When you multiply by 2, you're effectively doing a left shift by 1 bit, and you get
11111111111111111111111111111110, or 0xfffffffe, which represents -2.
Remember, though, that even if this happens with the compiler and flags/optimizations you're using right now, a different compiler or different flags can produce different results, because you're invoking undefined behavior.
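As a practical aside, here is a small sketch (not the original program, just an illustration) that detects the failed extraction instead of silently using the clamped value:

#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    std::cout << "Enter a number: ";
    std::int64_t n = 0;
    if (std::cin >> n) {
        // Note: doubling can still overflow near the top of the int64_t range.
        std::cout << "The double of that number is: " << n * 2 << '\n';
    } else {
        // Extraction failed (the value did not fit); n holds the clamped max/min,
        // so report an error rather than computing with it.
        std::cout << "That number does not fit in a 64-bit integer.\n";
        std::cin.clear();
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    }
}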
I need to write a program that converts binary numbers to decimal.
I'm very new to C++, so I've tried looking at other people's examples, but they are way too advanced for me. I thought I had a clever idea on how to do it, but I'm not sure if my idea was way off or if I'm just missing something.
#include <iostream>
#include <string>
#include <cmath>   // for pow
using namespace std;

int main(void)
{
    //variables
    string binary;
    int pow2 = binary.length() - 1;
    int length = binary.length();
    int index = 0;
    int decimal = 0;
    cout << "Enter a binary number ";
    cin >> binary; cout << endl;
    while (index < length)
    {
        if (binary.substr(index, 1) == "1")
        {
            decimal = decimal + pow(2, pow2);
        }
        index++;
        pow2--;
    }
    cout << binary << " converted to decimal is " << decimal;
}
Your computer is a logical beast. Your computer executes your program, one line at a time. From start to finish. So, let's take a trip, together, with your computer, and see what it ends up doing, starting at the very beginning of your main:
string binary;
Your computer begins by creating a new std::string object. Which is, of course, empty. There's nothing in it.
int pow2 = binary.length() - 1;
This is the very second thing that your computer does.
And because we've just discovered that binary is empty, binary.length() is obviously 0, so this sets pow2 to -1. When you take 0, and subtract 1 from it, that's what you get.
int length = binary.length();
Since binary is still empty, its length() is still 0, and this simply creates a new variable called length, whose value is 0.
int index = 0;
int decimal = 0;
This creates a bunch more variables, and sets them to 0. That's the next thing your computer does.
cout << "Enter a binary number ";
cin >> binary; cout << endl;
Here you print some stuff, and read some stuff.
while(index < length)
Now, we get into the thick of things. So, let's see what your computer did, before you got to this point. It set index to 0, and length to 0 also. So, both of these variables are 0, and, therefore, this condition is false, 0 is not less than 0. So nothing inside the while loop ever executes. We can skip to the end of your program, after the while loop.
cout << binary << " converted to decimal is " << decimal;
And that's how your computer always gives you the wrong result. Your actual question was:
I'm not sure if my idea was way off or if I'm just missing something.
Well, there are also other problems with your idea, too. It was slightly off. For starters, nothing here really requires the use of the pow function. Using pow is like trying to kill a fly with a hammer. What pow does is: it converts the integer values to floating point, computes the natural logarithm of the first number, multiplies it by the second number, and then raises e to that power, and then your code (which never runs) finally converts the result from floating point to integer, rounding it off. Nothing of this sort is ever needed in order to simply convert binary to decimal. This never requires employing the services of natural logarithms. This is not what pow is for.
This task can be easily accomplished with just multiplication and addition. For example, if you already have the number 3, and your next digit is 7, you end up with 37 by multiplying 3 by 10 and then adding 7. You do the same exact thing with binary, base 2, with the only difference being that you multiply your number by 2, instead of 10.
But what you're really missing the most, is the Golden Rule Of Computer Programming:
A computer always does exactly what you tell it to do, instead of what you want it to do.
You need to tell your computer exactly what it needs to do. One step at a time. And in the right order. Telling your computer to compute the length of the string before it's even been read from std::cin does not accomplish anything useful. The string does not automatically recompute its length after it's actually read. Therefore, if you need the length of an entered string, you compute it after it's been read in, not before. And so on.
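Here is a minimal sketch of the multiply-and-add idea described above, reading the string first and then walking it left to right:

#include <iostream>
#include <string>

int main()
{
    std::cout << "Enter a binary number ";
    std::string binary;
    std::cin >> binary;                    // read first, then loop over it

    int decimal = 0;
    for (char c : binary) {
        // shift the running value one binary place left, then bring in the new digit
        decimal = decimal * 2 + (c - '0');
    }
    std::cout << binary << " converted to decimal is " << decimal << '\n';
}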
I have written a function that converts a decimal number to a binary number. I enter my decimal number as a long long int. It works fine with small numbers, but my task is to determine how the computer handles overflow, so when I enter (2^63) - 1 the function outputs that the decimal value is 9223372036854775808 and that in binary it is equal to -954437177. When I input 2^63, which is a value a 64-bit machine can't hold, I get warnings that the integer constant is so large that it is unsigned, and that the decimal constant is unsigned only in ISO C90; the output of the decimal value is negative 2^63 and the binary number is 0. I'm using gcc as a compiler. Is that outcome correct?
The code is provided below:
#include <iostream>
#include <sstream>
using namespace std;

int main()
{
    long long int answer;
    long long dec;
    string binNum;
    stringstream ss;
    cout << "Enter the decimal to be converted:" << endl;
    cin >> dec;
    cout << "The dec number is: " << dec << endl;
    while (dec > 0)
    {
        answer = dec % 2;
        dec = dec / 2;
        ss << answer;
        binNum = ss.str();
    }
    cout << "The binary of the given number is: ";
    for (int i = sizeof(binNum); i >= 0; i--) {
        cout << binNum[i];
    }
    return 0;
}
First, “on a 64-bit computer” is meaningless: long long is guaranteed to be at least 64 bits regardless of the computer. If you could press a modern C++ compiler onto a Commodore 64 or a Sinclair ZX80, or for that matter a KIM-1, a long long would still be at least 64 bits. This is a machine-independent guarantee given by the C++ standard.
Secondly, specifying a too large value is not the same as “overflow”.
The only thing that makes this question a little bit interesting is that there is a difference, and that the standard treats these two cases differently. For the case of initialization of a signed integer with an integer value, a conversion is performed if necessary, with an implementation-defined effect if the value cannot be represented, …
C++11 §4.7/3:
“If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined”
while for the case of e.g. a multiplication that produces a value that cannot be represented by the argument type, the effect is undefined (e.g., might even crash) …
C++11 §5/4:
“If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.”
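To make the distinction concrete, here is a small illustrative sketch (the value printed for narrowed is implementation-defined, so it will vary):

#include <iostream>

int main()
{
    long long big = 9223372036854775807LL;   // 2^63 - 1

    // Conversion: the value cannot be represented in int, so the result is
    // implementation-defined (C++11 §4.7/3); unspecified, but not a crash.
    int narrowed = static_cast<int>(big);

    // Arithmetic overflow in an expression is different: this would be
    // undefined behavior per C++11 §5/4, so it is left commented out.
    // long long blownUp = big * 2;

    std::cout << narrowed << '\n';
}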
Regarding the code: I only discovered it after writing the above, but it does look like it will necessarily produce overflow (i.e. undefined behavior) for sufficiently large numbers. Put your digits in a vector or string. Note that you can also just use a bitset to display the binary digits.
Oh, the KIM-1: not many are familiar with it. It was, reportedly, very nice, in spite of the somewhat restricted keyboard.
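As a footnote to the bitset suggestion above, a minimal sketch (it prints all 64 bits, leading zeros included):

#include <bitset>
#include <iostream>

int main()
{
    long long dec = 12345678901234567LL;
    // Copy the value's bit pattern into a 64-bit bitset and print it.
    std::bitset<64> bits(static_cast<unsigned long long>(dec));
    std::cout << bits << '\n';
}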
This adaptation of your code produces the answer you need. Your code is apt to produce the answer with the bits in the wrong order. Testing with the decimal values 123, 1234567890, and 12345678901234567 shows it working OK (G++ 4.7.1 on Mac OS X 10.7.4).
#include <iostream>
#include <sstream>
using namespace std;

int main()
{
    long long int answer;
    long long dec;
    string binNum;
    cout << "Enter the decimal to be converted:" << endl;
    cin >> dec;
    cout << "The dec number is: " << dec << endl;
    while (dec > 0)
    {
        stringstream ss;
        answer = dec % 2;
        dec = dec / 2;
        ss << answer;
        binNum.insert(0, ss.str());
        // cout << "ss<<" << ss.str() << ">> bn<<" << binNum.c_str() << ">>" << endl;
    }
    cout << "The binary of the given number is: " << binNum.c_str() << endl;
    return 0;
}
Test runs:
$ ./bd
Enter the decimal to be converted:
123
The dec number is: 123
The binary of the given number is: 1111011
$ ./bd
Enter the decimal to be converted:
1234567890
The dec number is: 1234567890
The binary of the given number is: 1001001100101100000001011010010
$ ./bd
Enter the decimal to be converted:
12345678901234567
The dec number is: 12345678901234567
The binary of the given number is: 101011110111000101010001011101011010110100101110000111
$ bc
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
obase=2
123
1111011
1234567890
1001001100101100000001011010010
12345678901234567
101011110111000101010001011101011010110100101110000111
$
When I compile this with the largest value possible for a 64 bit machine, nothing shows up for my binary value.
$ bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
2^63-1
9223372036854775807
quit
$ ./bd
Enter the decimal to be converted:
9223372036854775807
The dec number is: 9223372036854775807
The binary of the given number is: 111111111111111111111111111111111111111111111111111111111111111
$
If you choose a value larger than the largest that can be represented, all bets are off; you may get 0 back from cin >> dec; and the code does not handle 0 properly.
Prelude
The original code in the question was:
#include <iostream>
using namespace std;

int main()
{
    int rem, i = 1, sum = 0;
    long long int dec = 9223372036854775808; // = 2^63 (9223372036854775807 = 2^63 - 1)
    cout << "The dec number is" << dec << endl;
    while (dec > 0)
    {
        rem = dec % 2;
        sum = sum + (i * rem);
        dec = dec / 2;
        i = i * 10;
    }
    cout << "The binary of the given number is:" << sum << endl;
    return 0;
}
I gave this analysis of the earlier code:
You are multiplying the plain int variable i by 10 for every bit position in the 64-bit number. Given that i is probably a 32-bit quantity, you are running into signed integer overflow, which is undefined behaviour. Even if i were a 128-bit quantity, it would not be big enough to handle all possible 64-bit numbers (such as 2^63 - 1) accurately.
I am writing a lexer as part of a compiler project and I need to detect if an integer is larger than what can fit in an int so I can print an error. Is there a C++ standard library for big integers that could fit this purpose?
The Standard C library functions for converting number strings to integers (strtol and friends) are supposed to detect numbers which are out of range, and they set errno to ERANGE to indicate the problem.
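For example, a minimal sketch using strtol, which is one of the functions that reports ERANGE (atoi is not required to set errno); the token string here is just an illustration:

#include <cerrno>
#include <climits>
#include <cstdio>
#include <cstdlib>

int main()
{
    const char *token = "99999999999999999999";  // a too-large integer literal from the lexer
    errno = 0;
    char *end = nullptr;
    long value = std::strtol(token, &end, 10);
    if (errno == ERANGE || value > INT_MAX || value < INT_MIN) {
        std::printf("integer constant out of range\n");
    } else {
        std::printf("value = %d\n", static_cast<int>(value));
    }
}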
You could probably use libgmp. However, I think for your purpose, it's just unnecessary.
If you, for example, parse your numbers into a 32-bit unsigned int, you:
- parse at most the first 9 decimal digits (that's floor(32*log(2)/log(10))); if there are no more digits, the number is OK;
- take the next digit; if the value you now have, divided by 10, is not equal to the value from the previous step, the number is bad;
- if there are still more digits after that (i.e. more than 9+1 in total), the number is bad;
- otherwise the number is good.
Be sure to skip any leading zeros, etc. A sketch of this check appears below.
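Here is that sketch for a 32-bit unsigned result; fits_u32 is an illustrative name, and the string is assumed to contain only decimal digits:

#include <cstdint>
#include <cstring>

bool fits_u32(const char *s, std::uint32_t &out)
{
    while (*s == '0' && s[1] != '\0') ++s;              // skip leading zeros
    std::size_t len = std::strlen(s);
    if (len > 10) return false;                          // more than 9+1 digits: too big
    std::uint32_t value = 0;
    for (std::size_t i = 0; i < len && i < 9; ++i)
        value = value * 10 + (s[i] - '0');               // the first 9 digits always fit
    if (len == 10) {
        std::uint32_t next = value * 10 + (s[9] - '0');  // unsigned wrap-around is well defined
        if (next / 10 != value) return false;            // it wrapped, so it was out of range
        value = next;
    }
    out = value;
    return true;
}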
libgmp is a general solution, though maybe a bit heavyweight.
For a lighter-weight lexical analyzer, you could treat the number as a string: trim leading zeros, then if it's longer than 10 digits, it's too long; if it's shorter, it's OK; if it's exactly 10 digits, string-compare it to the max values 2147483648 (2^31) or 4294967296 (2^32). Keep in mind that -2^31 is a legal value but 2^31 isn't. Also keep in mind the syntax for octal and hexadecimal constants.
To everyone suggesting atoi:
- My atoi() implementation does not set errno.
- My atoi() implementation does not return INT_MIN or INT_MAX on overflow.
- We cannot rely on sign reversal. Consider 0x4000...0: multiplied by 2, the negative bit is set; multiplied by 4, the value is zero. With base-10 numbers, our next digit would multiply this by 10.
This is all nuts. Unless your lexer is parsing gigs of numerical data, stop the premature optimization already. It only leads to grief.
This approach may be inefficient, but it's adequate for your needs:
const char * p = "1234567890123";
int i = atoi( p );       // may overflow silently
ostringstream o;
o << i;                  // format the parsed value back into text
return o.str() == p;     // the round trip matches only if the value fit in an int
Or, leveraging the stack:
const char * p = "1234567890123";
int i = atoi( p );
char buffer [ 12 ];            // enough for "-2147483648" plus the terminator
snprintf( buffer, 12, "%d", i );
return strcmp(buffer,p) == 0;  // the round trip matches only if the value fit in an int
How about this: use atol, and check for overflow and underflow.
#include <iostream>
#include <string>
#include <cstdlib>   // atol
#include <climits>   // INT_MIN, INT_MAX
using namespace std;

int main()
{
    string str;
    cin >> str;
    int i = atol(str.c_str());
    if (i == INT_MIN && str != "-2147483648") {
        cout << "Underflow" << endl;
    } else if (i == INT_MAX && str != "2147483647") {
        cout << "Overflow" << endl;
    } else {
        cout << "In range" << endl;
    }
}
You might want to check out GMP if you want to be able to deal with such numbers.
In your lexer as you parse the integer string you must multiply by 10 before you add each new digit (assuming you're parsing from left to right). If that running total suddenly becomes negative, you've exceeded the precision of the integer.
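Here is a sketch of a well-defined variant of that running-total check; it accumulates in a wider type instead of relying on the value going negative (which is undefined behavior for signed overflow), and parse_int_checked is just an illustrative name:

#include <cctype>
#include <climits>

bool parse_int_checked(const char *s, int &out)
{
    long long value = 0;
    for (; *s != '\0'; ++s) {
        if (!std::isdigit(static_cast<unsigned char>(*s))) return false;
        value = value * 10 + (*s - '0');   // multiply by 10, then add the new digit
        if (value > INT_MAX) return false; // exceeded what an int can hold
    }
    out = static_cast<int>(value);
    return true;
}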
If your language (like C) supports compile-time evaluation of expressions, then you might need to think about that, too.
Stuff like this:
#define N 2147483643 // This is 2^31-5, i.e. close to the limit.
int toobig = N + N;
GCC will catch this, saying "warning: integer overflow in expression", but of course no individual literal is overflowing. This might be more than you require; I just thought I'd point it out as the sort of thing real compilers do in this department.
You can check to see if the number is higher or lower than INT_MAX or INT_MIN respectively. You would need to #include <limits.h>
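For instance, a small sketch of that check (it reads into a wider type first, and of course still fails if the token does not fit even in long long):

#include <climits>   // the C++ spelling of <limits.h>
#include <iostream>

int main()
{
    long long n = 0;
    std::cin >> n;
    if (n > INT_MAX || n < INT_MIN) {
        std::cout << "out of range for int\n";
    } else {
        std::cout << "fits in int: " << static_cast<int>(n) << '\n';
    }
}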