What happens if you initialize a variable but start with 0 [duplicate] - c++

When an integer is initialized as int a = 010, a is actually set to 8, but for int a = 10, a is set to 10.
Can anyone tell me why a is not set to 10 for int a = 010?

Because the compiler interprets 010 as a number in octal format. And in a base-8 system, the number 10 is equal to the number 8 in base-10 (our standard counting system).
More generally, in the world of C++, prefixing an integer literal with 0 specifies an octal literal, so the compiler is behaving exactly as expected.
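A quick way to see this for yourself (a minimal sketch):
#include <iostream>

int main()
{
    int a = 010;  // leading 0 makes this an octal literal: 1*8 + 0 = 8
    int b = 10;   // decimal literal
    std::cout << a << ' ' << b << '\n';  // prints: 8 10
}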

0 before the number means it's in octal notation. So since octal uses a base of 8, 010 would equal 8.
In the same way 0x is used for hexadecimal notation which uses the base of 16. So 0x10 would equal 16 in decimal.

In C, C++, Objective-C, and related languages, a 0 prefix signifies an octal literal constant, so 010 = 8 in decimal.

Leading 0 in 010 means that this number is in octal form. So 010 means 8 in decimal.


What does "0b" and "0x" stand for when assigning binary and hex?

When assigning a binary value and a hexadecimal value directly you can do it as follows (respectively):
uint8_t val1 = 0b10101;
uint8_t val2 = 0xFF;
What does the 0b and 0x mean? Specifically the 0 at the front. Can you have other values instead of 0?
Also as another curious question, what other characters can go in the place of b and x? Is there one for octal as an example?
Any and all integer literals you can create are summarized in the C++ standard by the grammar production at [lex.icon], where "(opt)" marks an optional element and ' is the optional digit separator:
integer-literal:
    binary-literal integer-suffix(opt)
    octal-literal integer-suffix(opt)
    decimal-literal integer-suffix(opt)
    hexadecimal-literal integer-suffix(opt)
binary-literal:
    0b binary-digit
    0B binary-digit
    binary-literal '(opt) binary-digit
octal-literal:
    0
    octal-literal '(opt) octal-digit
decimal-literal:
    nonzero-digit
    decimal-literal '(opt) digit
hexadecimal-literal:
    hexadecimal-prefix hexadecimal-digit-sequence
binary-digit:
    0
    1
octal-digit: one of
    0 1 2 3 4 5 6 7
nonzero-digit: one of
    1 2 3 4 5 6 7 8 9
hexadecimal-prefix: one of
    0x 0X
hexadecimal-digit-sequence:
    hexadecimal-digit
    hexadecimal-digit-sequence '(opt) hexadecimal-digit
hexadecimal-digit: one of
    0 1 2 3 4 5 6 7 8 9
    a b c d e f
    A B C D E F
As we can deduce from the grammar, there are four types of integer literals:
Plain decimal, which must begin with a non-zero digit.
Octal, any number with a leading 0 (including a plain 0).
Binary, requiring the prefix 0b or 0B.
Hexadecimal, requiring the prefix 0x or 0X.
The leading 0 for octal numbers can be thought of as the "O" in "Octal". The other prefixes use a leading zero to mark the beginning of a number that should not be interpreted as decimal. "B" is intuitively for "binary", while "X" is for "hexadecimal".
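A short sketch showing all four forms side by side (binary literals need C++14 or later):
#include <iostream>

int main()
{
    std::cout << 0b110 << '\n';  // binary: 6
    std::cout << 0110 << '\n';   // octal: 72
    std::cout << 110 << '\n';    // decimal: 110
    std::cout << 0x110 << '\n';  // hexadecimal: 272
}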
0b (or 0B) denotes a binary literal. C++ has allowed it since C++14. (C didn't standardize binary literals until C23, although many compilers allowed them as an extension before that.) 0x (or 0X) is for hexadecimal.
0 can be used to denote an octal literal. (Interestingly, 0 itself is an octal literal.) Furthermore, the escape sequence \ followed by octal digits is read in octal: this applies only inside string literals written with "" and character or multicharacter literals written with ''. The '\0' notation that you often see to denote NUL when working with strings exploits that.
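For instance, assuming an ASCII execution character set, the octal escape \101 is character code 65, which is 'A'; a minimal sketch:
#include <iostream>

int main()
{
    char c = '\101';           // octal escape: 101 (octal) == 65 == 'A' in ASCII
    const char s[] = "\110i";  // \110 (octal) == 72 == 'H', so s holds "Hi"
    std::cout << c << ' ' << s << '\n';  // prints: A Hi
}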
In the absence of a user-defined literal suffix, any numeric literal starting with a non-zero digit is decimal.
There are rumblings in the C++ world about using 0o for an octal literal and perhaps even dropping support for the leading-zero version, although that would be a hideous breaking change.
What does the 0b and 0x mean?
They mean that the numeric literal is in binary and hexadecimal base, respectively.
Can you have other values instead of 0?
A numeric literal starting with a non-zero digit will be a decimal literal.
Also as another curious question, what other characters can go in the place of "b" and "x"?
Besides b and x, any octal digit can go there, in which case it is the most significant digit of an octal literal.

How many bits should I put in the stoi() function?

the following code will make the program crash:
string test="b1";
unsigned __int8 t1 = stoi(test, 0, 8);
but 'b1' = 177, which should be OK for 8 bits, right? If I use
string test="b1";
unsigned __int8 t1 = stoi(test, 0, 16);
everything looks OK. Why do I need to use 16 bits for 'b1'?
A more complicated situation: 16 bits makes it right, but 32 bits makes it wrong!
string test="0800";
unsigned __int16 t1 = stoi(test, 0, 16);
std::stoi's third parameter has nothing to do with any number of bits. It's the base that the number is represented in.
2 means binary, 8 means octal, 10 means decimal, 16 means hexadecimal, etc. all the way up to base-36. 0 means to determine the base from the prefix: strings starting with "0x" or "0X" are interpreted as hexadecimal, strings starting with "0" are interpreted as octal, and all other strings are interpreted as decimal.
When you call std::stoi("b1", 0, 8), std::stoi will throw a std::invalid_argument exception since b is not a valid digit in base-8, and your program will crash if that exception goes uncaught.
std::stoi("0800", 0, 16) and std::stio("0800", 0, 32) are both totally valid, but of course 80016 and 80032 represent different numbers, so the two calls will return different results.
Base 8 has exactly 8 different digits. Valid digits are the following: 0 1 2 3 4 5 6 7
Notice that b is not a valid digit in base 8. Only bases greater than or equal to 12 have the digit b.
if I use
unsigned __int8 t1 = stoi(test, 0, 16);
everything looks ok
16 is greater than or equal to 12, so b is a valid digit in base 16.
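A sketch of both calls, with the base-8 failure caught instead of crashing:
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
    std::cout << std::stoi("b1", nullptr, 16) << '\n';  // 0xb1 == 177, fine in base 16
    try {
        std::stoi("b1", nullptr, 8);                    // 'b' is not an octal digit
    } catch (const std::invalid_argument&) {
        std::cout << "std::invalid_argument thrown for base 8\n";
    }
}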

Difference between converting int to char by (char) and by ASCII

I have an example:
int var = 5;
char ch = (char)var;
char ch2 = var+48;
cout << ch << endl;
cout << ch2 << endl;
I had some other code where (char) returned the wrong answer but +48 didn't. When I changed ONLY (char) to +48, my code became correct.
What is the difference between converting int to char by using (char) and +48 (ASCII) in C++?
char ch = (char)var; has the same effect as char ch = var; and assigns the numeric value 5 to ch. You're using ASCII (supported by all modern systems), and ASCII character code 5 represents Enquiry ('ENQ'), an old terminal control code. Perhaps some old-timer has a clue what it did!
char ch2 = var + 48; assigns the numeric value 53 to ch2, which happens to represent the ASCII character for the digit '5'. ASCII 48 is zero ('0'), and the digits appear in the ASCII table in order after that, so 48 + 5 lands on 53 (which represents the character '5').
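A minimal sketch showing the two conversions side by side:
#include <iostream>

int main()
{
    int var = 5;
    char ch = (char)var;   // value 5: the unprintable control code ENQ in ASCII
    char ch2 = var + '0';  // value 53: the printable character '5'
    std::cout << (int)ch << ' ' << (int)ch2 << '\n';  // prints: 5 53
    std::cout << ch2 << '\n';                         // prints: 5
}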
In C++, char is an integer type. The value is interpreted as representing an ASCII character, but it should be thought of as holding a number.
Its numeric range is either [-128,127] or [0,255]. That's because C++ requires sizeof(char) == 1 and all modern platforms have 8-bit bytes.
NB: C++ doesn't actually mandate ASCII, but again that will be the case on all modern platforms.
PS: I think it's an unfortunate artifact of C (inherited by C++) that sizeof(char) == 1 and there isn't a separate fundamental type called byte.
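You can inspect these properties directly with <climits>; a minimal sketch:
#include <climits>
#include <iostream>

int main()
{
    static_assert(sizeof(char) == 1, "guaranteed by the standard");
    std::cout << CHAR_BIT << '\n';                     // bits per byte: 8 on modern platforms
    std::cout << CHAR_MIN << ' ' << CHAR_MAX << '\n';  // [-128,127] or [0,255]
}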
A char is simply the base integral denomination in C++. Output statements like cout and printf map char integers to the corresponding character mapping; on Windows computers this is typically ASCII.
Note that ASCII code 5 maps to the Enquiry character, which has no printable representation, while code 53 maps to the printable character '5'.
A generally accepted hack to store a number 0-9 in a char is to do const char ch = var + '0'. It's important to note the shortcomings here:
If your code is running on some non-ASCII character mapping, the digits might conceivably not be laid out in order. (In fact, both C and C++ guarantee that '0' through '9' are contiguous in the execution character set, so this particular worry doesn't arise; letters have no such guarantee.)
If var is outside the 0-9 range, var + '0' will map to something other than a numeric character.
A guaranteed way to get the most significant digit of a number, independent of either shortcoming, is to use:
const auto ch = std::to_string(var).front();
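For example (std::to_string lives in <string>):
#include <iostream>
#include <string>

int main()
{
    int var = 537;
    const auto ch = std::to_string(var).front();  // most significant digit, as a char
    std::cout << ch << '\n';                      // prints: 5
}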
Generally, char represents a number just as int does. Casting an int value to char doesn't produce its ASCII representation.
The ASCII codes for the digits range from 48 (== '0') to 57 (== '9'), so to get the printable digit you have to add '0' (or 48).
The difference is that casting to char with (char) only changes the type, leaving the value 5, while adding 48 changes the value to the code of the character '5'.
It's important to note that an int is typically 32 bits and a char is typically 8 bits. This means the numbers you can store in a char range from -128 to +127 (or 0 to 255, i.e. 2^8 - 1, if you use unsigned char), and in an int from -2,147,483,648 (-2^31) to 2,147,483,647 (2^31 - 1) (or 0 to 2^32 - 1 for unsigned).
Adding 48 to a value is not changing the type to char.
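A short sketch of the mapping in both directions:
#include <iostream>

int main()
{
    int d = 7;
    char c = '0' + d;    // int 7 -> character '7' (code 55 in ASCII)
    int back = c - '0';  // character '7' -> int 7
    std::cout << c << ' ' << back << '\n';  // prints: 7 7
}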

C++ int with preceding 0 changes entire value

I have this very strange problem where if I declare an int like so
int time = 0110;
and then display it to the console, the value shown is 72. However, when I remove the 0 at the front so that int time = 110;, the console then displays 110 as expected.
Two things I'd like to know: first, why does it do this with a preceding 0 at the start of the int, and is there a way to stop it so that 0110 at least equals 110? Secondly, is there any way to keep it so that 0110 displays as 0110?
As you might guess from the variable name, I'm trying to do operations with 24hr time, but at this point any time before 1000 is causing problems because of this.
Thanks in advance!
An integer literal that starts with 0 is an octal integer literal. In C++ there are four categories of integer literals:
integer-literal:
    decimal-literal integer-suffix(opt)
    octal-literal integer-suffix(opt)
    hexadecimal-literal integer-suffix(opt)
    binary-literal integer-suffix(opt)
And an octal integer literal is defined the following way (where "(opt)" marks an optional element and ' is the digit separator):
octal-literal:
    0
    octal-literal '(opt) octal-digit
That is, it starts with 0.
Thus the octal integer literal
0110
corresponds to the decimal number
1*8^2 + 1*8^1 + 0*8^0
which is equal to 72.
You can confirm that 72 in octal representation is 110 by running the following simple program:
#include <iostream>
#include <iomanip>
int main()
{
    std::cout << std::oct << 72 << std::endl;
    return 0;
}
The output is
110
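To address the second part of the question: store the value as plain decimal 110 and add the leading zero only when printing, e.g. with stream manipulators. A minimal sketch:
#include <iomanip>
#include <iostream>

int main()
{
    int time = 110;  // i.e. 01:10 on a 24-hour clock, stored as plain decimal
    std::cout << std::setfill('0') << std::setw(4) << time << '\n';  // prints: 0110
}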
It is because of integer literals. Placing a 0 before a number means it's an octal number. For binary it is 0b, for hexadecimal it is 0x or 0X. You don't need to write anything for decimal. See the code below.
#include <stdio.h>
int main()
{
    int binary = 0b10;  /* 2; 0b is standard C only since C23, earlier a common extension */
    int octal = 010;    /* 8 */
    int decimal = 10;   /* 10 */
    int hexa = 0x10;    /* 16 */
    printf("%d %d %d %d\n", octal, decimal, hexa, binary);  /* prints: 8 10 16 2 */
}
For more information visit tutorialspoint.
The compiler is interpreting the number with the leading zero as octal. The octal value 110 is 72 in decimal. There's no need for the leading zero if you're just storing an int value.
You're trying to store "time" as it appears on a clock. That's actually more complicated than a simple int. You could store the number of minutes since midnight instead.
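A minimal sketch of that idea, assuming the time arrives as an HHMM string:
#include <iostream>
#include <string>

int main()
{
    std::string hhmm = "0110";  // 01:10 as typed on a 24-hour clock
    int minutes = std::stoi(hhmm.substr(0, 2)) * 60 + std::stoi(hhmm.substr(2, 2));
    std::cout << minutes << '\n';  // prints: 70 (minutes since midnight)
}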
A zero at the start means the number is in octal; without it, the number is decimal.
