Octal to binary conversion confusion - c++

I have code in C++ that converts a 2-digit octal number to a binary number. To test the validity of the code I used several online conversion sites like
this and
this.
When I enter 58 or 59 as an octal value, the sites say it is an invalid octal value, but when I enter 58 in my code it gives the binary number 101000. For further testing I entered 101000 as a binary number in the above sites' calculators, and they gave me 50 as the octal value.
I need some explanation of why this is so.
Here is the C++ code -
#include <iostream>
using namespace std;

void octobin(int);

int main()
{
    int a;
    cout << "Enter a 2-digit octal number : ";
    cin >> a;
    octobin(a);
    return 0;
}
void octobin(int oct)
{
    long bnum = 0;
    int A[6];  // Each octal digit is converted into 3 bits, 2 octal digits = 6 bits.
    int a1, a2, quo, rem;
    a2 = oct / 10;       // ten's octal digit
    a1 = oct - a2 * 10;  // one's octal digit
    for (int x = 0; x < 6; x++)
    {
        A[x] = 0;
    }
    // Storing the remainders of the one's octal digit in the array.
    for (int x = 0; x < 3; x++)
    {
        quo = a1 / 2;
        rem = a1 % 2;
        A[x] = rem;
        a1 = quo;
    }
    // Storing the remainders of the ten's octal digit in the array.
    for (int x = 3; x < 6; x++)
    {
        quo = a2 / 2;
        rem = a2 % 2;
        A[x] = rem;
        a2 = quo;
    }
    // Obtaining the binary number from the remainders.
    for (int x = 5; x >= 0; x--)
    {
        bnum *= 10;
        bnum += A[x];
    }
    cout << "The binary number for the octal number " << oct << " is " << bnum << "." << endl;
}

Octal numbers have digits that are all in the range [0,7]. Thus, 58 and 59 are not octal numbers, and your method should be expected to give erroneous results.
The reason that 58 evaluates to 101000 is that the first digit of the octal number expands to the first three bits of the binary number: 5 = 101_2. The same goes for the second digit, but 8 = 1000_2 needs four bits, and your code only keeps three, so you get just the 000 part.
An alternate explanation is that 8 = 0 (mod 8) (I am using the = sign for congruency here), so both 8 and 0 evaluate to 000 in binary using your code.
The best solution would be to do some input validation. For example, while converting you could check that every digit is in the range [0,7].

You cannot use 58 or 59 as an input value. It's octal, for Christ's sake.
Valid digits are from 0 to 7 inclusive.

If you're encoding a number in base 8, none of the digits can be 8 or greater. If you're going to process the number digit by digit, there needs to be a test for whether a digit is 8 or 9 so you can report an error. Right now your code isn't checking this, so 8 and 9 silently produce wrong bit patterns.

58 and 59 aren't valid octal values indeed ... the maximum digit you can use is your base - 1:
decimal => base = 10 => digits from 0 to 9
hexadecimal => base = 16 => digits from 0 to 15 (well, 0 to F)
octal => base = 8 => digits from 0 to 7

Reverse number -first digit 0 [duplicate]

When an integer is initialized as int a = 010, a is actually set to 8, but for int a = 10, a is set to 10.
Can anyone tell me why a is not set to 10 for int a = 010?
Because it's interpreting 010 as a number in octal format. And in a base-8 system, the number 10 is equal to the number 8 in base-10 (our standard counting system).
More generally, in the world of C++, prefixing an integer literal with 0 specifies an octal literal, so the compiler is behaving exactly as expected.
0 before the number means it's in octal notation. So since octal uses a base of 8, 010 would equal 8.
In the same way 0x is used for hexadecimal notation which uses the base of 16. So 0x10 would equal 16 in decimal.
In C, C++, Objective C and related languages a 0 prefix signifies an octal literal constant, so 010 = 8 in decimal.
Leading 0 in 010 means that this number is in octal form. So 010 means 8 in decimal.

How many decimal number possibilities are there for a 26-bit Wiegand number?

I have successfully emulated a 26 bit Wiegand signal using an ESP32. Basically, the program transforms a manually inputted decimal number into a proper 26 bit Wiegand binary number and then sends it on 2 wires following the protocol:
bool* wiegandArray = new bool[26];

void getWiegand(unsigned int dec) {
    // transform dec number into a binary number using single-bit shift operations
    // and store it in wiegandArray[]
    for (int i = 24; i > 0; --i) {
        wiegandArray[i] = dec & 1;
        dec >>= 1;
    }
    // check parity of the first 12 data bits
    bool even = 0;
    for (int i = 1; i < 13; i++) {
        even ^= wiegandArray[i];
    }
    // add 0 or 1 as the first bit (leading parity bit - even) based on the number of ones in the first 12 bits
    wiegandArray[0] = even;
    // check parity of the last 12 data bits
    bool odd = 1;
    for (int i = 13; i < 25; i++) {
        odd ^= wiegandArray[i];
    }
    // add 0 or 1 as the last bit (trailing parity bit - odd) based on the number of ones in the last 12 bits
    wiegandArray[25] = odd;
}
Using this online calculator I can generate appropriate decimal numbers for a 26 bit Wiegand number.
Now, the problem that I am facing is that the end-user will actually input a CARD ID. A Card ID is a decimal number that should always result in a 24 bit binary number: 8 bits of facility code and 16 bits of ID code. And upon this 24 bit number I apply the parity bits to get a 26 bit code.
For example:
CARD ID= 16336141 / 101000111000110100101101
Facility Code: 163 / 10100011
Card Number: 36141 / 1000110100101101
Resulting 26 Wiegand: 10718509 / 11010001110001101001011010
The issue is that I don't know how to tackle this.
How can I generate a 26 bit Wiegand number from 0? That would be 0 00000000 0000000000000000 1.
The largest 24 bit number is 16777215, but 8 bits for site codes (0-255) and 16 bits for card numbers (0-65535) mean 255*65535 = 16711425.
What is the actual range? Should I start generating 26 bit Wiegand binary numbers from 0?

c++, binary number calculations

I have a question that asks how values such as c are computed in terms of binary numbers. I'm researching it now, but I figured I'd ask here in case anyone has somewhere they can send me, or can explain how this works.
int main()
{
    int a = 10, b = 12, c, d;
    c = a << 2; // c is 40
}
Well, I'm not answering with C++ code, as the question is not really related to the language.
The integer ten is written 10 in base 10 as it's equal to 1 * 10^1 + 0 * 10^0.
Binary is base 2, so let's try to write ten as a sum of powers of 2.
10 = 8 + 2
That is 2^3 + 2^1.
Let's switch to binary (using only two digits : 0 and 1).
2^3 is written 1000
2^1 is written 10
Their sum is 1010 in binary.
"<<" is the operation that shift left binary digits by a certain amount (beware of overflow).
So 1010 << 2 is 101000
That is in decimal 2^5 + 2^3 = 32 + 8 = 40
You can also think of "<< N" as being a multiplication by 2^N of an integer.

C++ Encoding Numbers

I am currently working on sending data to a receiving party based on a mod-96 encoding scheme. The following is the request structure to be sent from my side:
Field               Size   Type
1. Message Type     2      "TT"
2. Firm             2      Mod-96
3. Identifier Id    1      Alpha String
4. Start Sequence   3      Mod-96
5. End Sequence     3      Mod-96
My doubt is that the sequence number can be greater than 3 bytes. Suppose I have to send the numbers 123 and 123456 as start and end sequence numbers; how do I encode them in mod-96 format? I have sent the query to the receiving party, but they are yet to answer it. Can somebody please throw some light on how to go about encoding numbers in mod-96 format?
Provided there's a lot of missing detail on what you really need, here's how Mod-96 encoding works:
You just use printable characters as if they were digits of a number:
when you encode in base 10 you know that 123 is 10^2*1 + 10^1*2 + 10^0*3
(oh and note that you arbitrarily choose that '1' stands for the value one: value('1') = 1)
when you encode in base 96, 123 is
96^2*value('1') + 96^1*value('2') + 96^0*value('3')
since '1' is the ASCII character #49, value('1') = 49 - 32 = 17
Encoding 3 printable characters into a number:
unsigned int encode(char a, char b, char c) {
    return (a - 32) * 96 * 96 + (b - 32) * 96 + (c - 32);
}
Encoding 2 printable characters into a number:
unsigned int encode(char a, char b) {
    return (a - 32) * 96 + (b - 32);
}
Decoding a number into 2 printable characters:
void decode(char* a, char* b, unsigned int k) {
    *b = k % 96 + 32;
    *a = k / 96 + 32;
}
Decoding a number into 3 printable characters:
void decode(char* a, char* b, char* c, unsigned int k) {
    *c = k % 96 + 32;
    k /= 96;
    *b = k % 96 + 32;
    *a = k / 96 + 32;
}
You also of course need to check that the characters are printable (between 32 and 127 included) and that the numbers you are going to decode are less than 9216 (for 2 encoded characters) and 884736 (for 3 encoded characters).
You know the final size:
Size 2 => max of 9215 => needs 14 bits of storage (values up to 16383 unused)
Size 3 => max of 884735 => needs 20 bits of storage (values up to 1048575 unused)
Your packet needs 14+20+20 bits of memory (which is 54 bits => rounded up to 7 bytes) of storage just for the Mod-96 stuff.
Observation:
Instead of 3 fields of sizes (2+3+3) we could have used one field of size (8) => we would use 53 bits (which still rounds up to 7 bytes).
If you instead store each encoded number in a whole number of bytes you use the same amount of memory as storing the chars directly (14 bits fits into 2 bytes, 20 bits fits into 3 bytes).

Why are integers converted to octal numbers here? [duplicate]

This question already has answers here:
What does it mean when a numeric constant in C/C++ is prefixed with a 0?
(7 answers)
Closed 7 years ago.
I am unable to understand the output of the below program -
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int i, a[8] = {000, 001, 010, 011, 100, 101, 110, 111};
    for (i = 0; i < 8; i++)
    {
        printf("%d\t", a[i]);
    }
    system("pause");
    return 0;
}
OUTPUT -
0 1 8 9 100 101 110 111
Why are the first four values getting converted here?
Any integer literal that starts with a 0 followed by other digits is octal, just like any integer literal starting with 0x or 0X, followed by digits, is hexadecimal. C++14 will add 0b or 0B as a prefix for binary integer literals.
See more on integer literals in C++ here.
If you start a number with a 0, it gets interpreted as an octal number.
0xNumber is hex.