Bit representation of characters C++ - c++

Here's my program for printing the bit representation of characters, but I don't know whether it shows the right representation or not. There are some suspicious 1s (highlighted in red).
Can you explain what these are (if the output is right), or what's wrong with my code if they shouldn't be there? Thanks.
#include "stdafx.h"
#include "iostream"
using namespace std;
struct byte {
unsigned int a:1;
unsigned int b:1;
unsigned int c:1;
unsigned int d:1;
unsigned int e:1;
unsigned int f:1;
unsigned int g:1;
unsigned int h:1;
};
union SYMBOL {
char letter;
struct byte bitfields;
};
int main() {
union SYMBOL ch;
cout << "Enter your char: ";
while(true) {
ch.letter = getchar();
if(ch.letter == '\n') break;
cout << "You typed: " << ch.letter << endl;
cout << "Bite form = ";
cout << ch.bitfields.h;
cout << ch.bitfields.g;
cout << ch.bitfields.f;
cout << ch.bitfields.e;
cout << ch.bitfields.d;
cout << ch.bitfields.c;
cout << ch.bitfields.b;
cout << ch.bitfields.a;
cout << endl << endl;
}
}

See the ASCII table to understand the output you're getting:
a has the decimal value of 97, and 97 is 01100001 in binary
b has the decimal value of 98, and 98 is 01100010 in binary
and so on.
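If you just want to check a character's bit pattern against the ASCII table, std::bitset will print it directly (a small sketch of my own, not part of the original code):
#include <bitset>
#include <iostream>

int main() {
    std::cout << std::bitset<8>('a') << '\n';  // prints 01100001 (decimal 97)
    std::cout << std::bitset<8>('b') << '\n';  // prints 01100010 (decimal 98)
}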

Bit fields are not portable. The biggest problem is that you don't know in which order the bits will be assigned to the individual bit fields, and you don't even know whether the struct will occupy 1, 2, or some other number of bytes.
I'd recommend using unsigned char (because you don't know whether plain char is signed or unsigned), and using code like (ch & 0x80) != 0, (ch & 0x40) != 0, etc.
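A minimal sketch of that suggestion (my own code): walking a mask from the high bit down means the output no longer depends on how the compiler lays out bit fields.
#include <iostream>

// print the 8 bits of ch, most significant bit first
void print_bits(unsigned char ch) {
    for (unsigned char mask = 0x80; mask != 0; mask >>= 1)
        std::cout << ((ch & mask) != 0);
    std::cout << '\n';
}

int main() {
    print_bits('a');  // 01100001
    print_bits('b');  // 01100010
}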

Code adds an unnecessary "3" between 0x and the number when multiplying and converting it to hex

I have written a tool for implementing cryptography. Its purpose is to take a string of numbers and a multiplicator, multiply the numbers and print them in hexadecimal representation with a leading 0x and a trailing ",":
As mentioned, the actual number has a leading 3 after the 0x. I suppose there is a bug somewhere; maybe it comes from converting ASCII characters?
I compile it with cl.exe hex_mult.cpp /EHsc /favor:INTEL64 /Ox if that helps.
I have already rewritten my code several times, but I don't really know what I could try here.
#include <iostream>
#include "hexconvert.hpp"
#include <string>
#include <cstdlib>   // for system
using namespace std;

int main(int argc, char** argv) {
    string str;
    long int mult = 1;

    cout << "Please enter a string of numbers:" << endl;
    getline(cin, str);
    cout << "Please enter the multiplicator:" << endl;
    cin >> mult;

    cout << "The numbers multiplied by " << mult << ", represented as hex values:" << endl;
    for (int C = 0; C < str.length(); C++) {
        unsigned long long int x = str[C] * mult;
        cout << int_hex(x) << ", ";
    }
    cout << endl;

    system("pause>nul");
    return 0;
}
hexconvert.hpp:
#include "sstream"
#include "iostream"
#include "string"
std::string int_hex(unsigned long long int dec) {
std::stringstream stream; //New string stream
stream << std::hex << dec; //Convert to hex before adding to stream
std::string hexstr(stream.str()); //Convert back to string
for(size_t counter = 0; counter <= hexstr.length(); counter++) {
//Convert all lowercase to uppercase
if(hexstr[counter]>=97 && hexstr[counter]<=122) { //Only
process if the value has the id of a character
hexstr[counter] = toupper(hexstr[counter]);
}
}
return hexstr;
}
I expect 123, 2 to output 0x1, 0x2, 0x3, but the actual output is 0x31, 0x32, 0x33.
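For reference, 0x31 is the ASCII code of the character '1', so the digits appear to be multiplied as character codes rather than as numeric values. A minimal sketch of converting each digit character to its value first (my own illustration; to_hex here is a stand-in for the int_hex helper above, and the input is assumed to contain only the digits 0-9):
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

// hypothetical stand-in for int_hex from hexconvert.hpp
string to_hex(unsigned long long v) {
    stringstream s;
    s << hex << uppercase << v;
    return s.str();
}

int main() {
    string str = "123";   // the digit string
    long int mult = 2;    // the multiplicator
    for (int C = 0; C < (int)str.length(); C++) {
        // str[C] - '0' turns the character '1' (code 0x31) into the value 1
        unsigned long long x = (str[C] - '0') * mult;
        cout << "0x" << to_hex(x) << ", ";
    }
    cout << endl;          // prints: 0x2, 0x4, 0x6,
}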

stringstream: dec int to hex to char conversion issue

I was trying some simple exercises with stringstream when I came across this problem. The following program takes an int, saves it in a stringstream in hex format, and then checks whether a decimal int and a char can be read back from the stream.
I ran it for different inputs and it is not working properly for some of them.
Please see the details below the code:
#include <iostream>
#include <fstream>
#include <sstream>
using namespace std;

int main() {
    int roll;
    stringstream str_stream;

    cout << "enter an integer\n";
    cin >> roll;
    str_stream << hex << roll;

    if (str_stream >> dec >> roll) {
        cout << "value of int is " << roll << "\n";
    }
    else
        cout << "int not found\n";

    char y;
    if (str_stream >> y) {
        cout << "value of char is " << y << endl;
    }
    else
        cout << "char not found\n";

    cout << str_stream.str() << "\n";
}
I ran it for 3 different inputs:
Case 1:
enter an integer
9
value of int is 9
char not found
9
Case 2:
enter an integer
31
value of int is 1
value of char is f
1f
Case 3:
enter an integer
12
int not found
char not found
c
In case 1 &2. program is working as expected but in case 3, it should found a char, I am not sure why it is not able to find char in stream.
Regards,
Navnish
If if(str_stream>>dec>>roll) fails to read anything, the stream's fail state is set. In case 3 the stream contains "c", which cannot be parsed as a decimal integer, so that read fails. After that, any further read operation on that stream will also fail (and return false) unless you reset the stream's state using clear().
So:
.....//other code
    if (str_stream >> dec >> roll) {
        cout << "value of int is " << roll << "\n";
    }
    else
    {
        cout << "int not found\n";
        str_stream.clear(); // ******* clears the stream state after the failed read *******
    }

    char y;
    if (str_stream >> y) {
        cout << "value of char is " << y << endl;
    }
    else
        cout << "char not found\n";
....//other code
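A self-contained sketch of the same idea (my own illustration): after the failed extraction the stream refuses all further reads until clear() resets its state.
#include <iostream>
#include <sstream>
using namespace std;

int main() {
    stringstream s;
    s << hex << 12;              // stream now contains "c"

    int n;
    if (s >> dec >> n)
        cout << "int: " << n << "\n";
    else {
        cout << "no int, clearing failbit\n";
        s.clear();               // reset the error state so later reads can succeed
    }

    char c;
    if (s >> c)
        cout << "char: " << c << "\n";   // now finds 'c'
}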

c++ - why is the result negative?

I have a problem with my simple program: if my input is more than 295600127, the result is negative (-).
Here it is:
#include <iostream>
#include <windows.h>
using namespace std;

int konarray(int b);
void konbil(int A[], int &n);
int kali(int x);

int main() {
    int b;
    char *awal, akhir, pil;
awal:
    system("COLOR 9F");
    cout << "enter decimal\t= "; cin >> b;
    // call the function that converts the number to an array of digits
    konarray(b);
akhir:
    cout << "\n\nDo You Want To Reply Now ? (Y/N) : ";
    cin >> pil;
    cout << endl;
    switch (pil) {
    case 'Y':
    case 'y':
        system("cls");
        goto awal;
        break;
    case 'N':
    case 'n':
        break;
    default:
        system("COLOR c0");
        system("cls");
        cout << "Please, make sure for entering the choice!!\n\n\n\n";
        goto akhir;
    }
}

// convert the number into an array of octal digits (least significant first)
int konarray(int b) {
    int A[30];
    int i, n, h, s;
    i = 0;
    do {
        h = b / 8;
        s = b % 8;
        b = h;
        A[i] = s;
        i++;
    } while (h != 0);
    n = i;
    for (i = 0; i < n; i++)
        cout << A[i] << " ";
    konbil(A, n);
}

// combine the digit array into a single number written in base 10
void konbil(int A[], int &n) {
    int c, i;
    c = A[0];
    for (i = 1; i < n; i++)
        c = c + A[i] * kali(i);
    system("COLOR f0");
    cout << endl << "the results of the conversion are\t= ";
    cout << c << endl;
}

// returns 10^x (for x >= 1)
int kali(int x) {
    if (x == 1)
        return (10);
    else
        return (10 * kali(x - 1));
}
I have tried changing all the ints into long, but the result was the same.
I want to know the reason why, and how to fix it.
295600128 in decimal is 2147500000 in octal. When you then try to put 2147500000 as a decimal number into an int, it overflows the 4-byte signed limit, which gives you the negative value.
One question - why do you want to store an octal number back in a variable as a decimal number? If you just want to display the number, you already have it in A.
If you just want to display a number as octal, std::ostream can already do this:
std::cout << std::oct << b << '\n';
If for some reason you really do need a decimal representation of an octal number in an integer variable, you will need to change c and kali to long long.
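A sketch of what that change might look like (my own adaptation of the two functions from the question, following the suggestion to make both c and kali 64-bit; the forward declaration of kali at the top of the program would need the same change):
// drop-in replacements: the arithmetic is done in long long so it no longer overflows int
long long kali(int x) {
    if (x == 1)
        return 10;
    return 10 * kali(x - 1);   // 10^x as a 64-bit value
}

void konbil(int A[], int &n) {
    long long c = A[0];
    for (int i = 1; i < n; i++)
        c = c + A[i] * kali(i);
    cout << endl << "the results of the conversion are\t= ";
    cout << c << endl;
}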
Have you tried long long int?
This program
#include <iostream>
using namespace std;

int main(int argc, char** argv)
{
    cout << sizeof(int) << endl;
    cout << sizeof(long int) << endl;
    cout << sizeof(long long int) << endl;
    return 0;
}
gives
4
4
8
showing that you need long long int to get 64 bits (on this platform)
Change like this:
void konbil(int A[], int &n)
{
    unsigned long long c, i; // unsigned long long
    c = A[0];
    for (i = 1; i < n; i++)
        c = c + A[i] * kali(i);
    system("COLOR f0");
    cout << endl << "the results of the conversion are\t= ";
    cout << c << endl;
}
The largest positive number you can store in an int is 2147483647 (2^31 - 1).
Adding just 1 to that number will result in the value -2147483648 (-2^31).
So the answer is that you have an overflow while using int.
Therefore you need long long int, or even better unsigned long long.
unsigned long long can't be negative and has the largest maximum value (2^64 - 1).
EDIT:
An extra question was added in the comments, hence this edit.
int
On most systems an int is 32 bits.
It can take values from -2^31 to 2^31-1, i.e. from -2147483648 to 2147483647
unsigned int
An unsigned int is also 32 bits. However, an unsigned int cannot be negative.
Instead it has the range 0 to 2^32-1, i.e. from 0 to 4294967295
long long int
When you use long long int you have 64 bits.
So the valid range is -2^63 to 2^63-1, i.e. -9223372036854775808 to 9223372036854775807
unsigned long long
An unsigned long long is also 64 bits but cannot be negative.
The range is 0 to 2^64-1, i.e. 0 to 18446744073709551615
Try this code:
#include <iostream>
using namespace std;

int main()
{
    cout << endl << "Testing int:" << endl;
    int x = 2147483647;                              // max positive value
    cout << x << endl;
    x = x + 1;                                       // OVERFLOW
    cout << x << endl;

    cout << endl << "Testing unsigned int:" << endl;
    unsigned int y = 4294967295;                     // max positive value
    cout << y << endl;
    y = y + 1;                                       // OVERFLOW (wraps to 0)
    cout << y << endl;

    cout << endl << "Testing long long int:" << endl;
    long long int xl = 9223372036854775807LL;        // max positive value
    cout << xl << endl;
    xl = xl + 1;                                     // OVERFLOW
    cout << xl << endl;

    cout << endl << "Testing unsigned long long:" << endl;
    unsigned long long yl = 18446744073709551615ULL; // max positive value
    cout << yl << endl;
    yl = yl + 1;                                     // OVERFLOW (wraps to 0)
    cout << yl << endl;

    return 0;
}
it will give you
Testing int:
2147483647
-2147483648
Testing unsigned int:
4294967295
0
Testing long long int:
9223372036854775807
-9223372036854775808
Testing unsigned long long:
18446744073709551615
0
showing how you overflow from max positive value by adding just 1.
Also see this link http://www.cplusplus.com/reference/climits/
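Those limits can also be printed by the program itself instead of being looked up; a small sketch of my own using std::numeric_limits from <limits>:
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    cout << "int:                " << numeric_limits<int>::min()
         << " to " << numeric_limits<int>::max() << endl;
    cout << "unsigned int:       0 to " << numeric_limits<unsigned int>::max() << endl;
    cout << "long long int:      " << numeric_limits<long long>::min()
         << " to " << numeric_limits<long long>::max() << endl;
    cout << "unsigned long long: 0 to " << numeric_limits<unsigned long long>::max() << endl;
    return 0;
}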

Why are hash values inconsistent between different program executions?

As part of a research project, I was testing some hash functions that I found on Eternally Confuzzled here. The project has to do with page caching algorithms and the hash behavior itself never seemed important until now, but this is still more for my own curiosity. To test, I'm using the following code:
#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;

unsigned oat_hash(void *key, int len);

int main()
{
    string name;

    cout << "Enter a name: ";
    getline(cin, name);
    cout << "Hash: " << oat_hash(&name, sizeof(string)) << endl << endl;

    cout << "Enter the name again: ";
    getline(cin, name);
    cout << "Hash: " << oat_hash(&name, sizeof(string)) << endl << endl;

    return 0;
}

unsigned oat_hash(void *key, int len)
{
    unsigned char *p = (unsigned char*) key;
    unsigned h = 0;

    for (int i = 0; i < len; i++) {
        h += p[i];
        h += (h << 10);
        h ^= (h >> 6);
    }

    h += (h << 3);
    h ^= (h >> 11);
    h += (h << 15);

    return h;
}
Program execution 1 output:
Enter a name: John Doe
Hash: 4120494494
Enter the name again: John Doe
Hash: 4120494494
Program execution 2 output:
Enter a name: John Doe
Hash: 3085275063
Enter the name again: John Doe
Hash: 3085275063
I entered the same string and got the same hash value during the same program execution, but why would the values be different for different program executions? Wouldn't different hash values indicate different data?
Implementations of std::string contain a pointer. You are hashing the internals of the std::string, not its actual text. On modern systems the stack location is randomized, and free-store allocations are randomized as well, so the internals of the std::string differ each time you run the program.
You probably want to change the code like this:
unsigned oat_hash(void const *key, int len)
{
    unsigned char const *p = static_cast<unsigned char const *>(key);
    // etc.
}
//...
cout << "Hash: " << oat_hash(name.c_str(), name.size()) << endl << endl;

C++ Unions bit fields task

Can somebody explain to me why I would use a union here, and what the purpose is of giving the cin'd variable and the bit field the same address (a task from Schildt's C++ book)? In other words, why would I use a union for:
char ch;
struct byte bit;
// Display the ASCII code in binary for characters.
#include <iostream>
#include <conio.h>
using namespace std;

// a bit field that will be decoded
struct byte {
    unsigned a : 1;
    unsigned b : 1;
    unsigned c : 1;
    unsigned d : 1;
    unsigned e : 1;
    unsigned f : 1;
    unsigned g : 1;
    unsigned h : 1;
};

union bits {
    char ch;
    struct byte bit;
} ascii;

void disp_bits(bits b);

int main()
{
    do {
        cin >> ascii.ch;
        cout << ": ";
        disp_bits(ascii);
    } while (ascii.ch != 'q'); // quit if q typed

    return 0;
}

// Display the bit pattern for each character.
void disp_bits(bits b)
{
    if (b.bit.h) cout << "1 ";
    else cout << "0 ";
    if (b.bit.g) cout << "1 ";
    else cout << "0 ";
    if (b.bit.f) cout << "1 ";
    else cout << "0 ";
    if (b.bit.e) cout << "1 ";
    else cout << "0 ";
    if (b.bit.d) cout << "1 ";
    else cout << "0 ";
    if (b.bit.c) cout << "1 ";
    else cout << "0 ";
    if (b.bit.b) cout << "1 ";
    else cout << "0 ";
    if (b.bit.a) cout << "1 ";
    else cout << "0 ";
    cout << "\n";
}
As a union, ch and bit share the same (overlapping) memory location. Store a character in it through ch, and reading bit then produces the corresponding bit values for that character.
The real answer is - you wouldn't. Using bitfields in unions (or at all) like this is inherently unportable and may be undefined. If you need to fiddle with bits, you are much better off using the C++ bitwise operators.
Because the exercise demonstrates breaking a value up into bits using a bit field and a union.
Assuming you know what a union is: if you were extracting something less repetitive from a binary value, you might want to use one for clarity, instead of building, say, two 24-bit integers out of 48 bits with shifts and masks.
But for the example in the task, shifts and masks would be much cleaner code, so you would probably not use a union for this task:
void disp_bits(unsigned b)
{   // not tested
    for (int shift = 7; shift >= 0; --shift)
        cout << ((b >> shift) & 1) << ' ';
    cout << "\n";
}
Unions are used in network protocols. They can also be handy to fake out polymorphism in C. Generally they are a special use case.
In this example, it is sort of a dummy example to show you a little code.
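As an aside, "faking polymorphism" with a union usually means a tagged union: a plain enum records which member is currently valid, and a switch dispatches on it. A minimal sketch of my own, written here in C++:
#include <iostream>

struct Circle { double radius; };
struct Rect   { double w, h; };

// the tag records which union member is currently valid
enum ShapeKind { CIRCLE, RECT };

struct Shape {
    ShapeKind kind;
    union {            // both members share the same storage
        Circle circle;
        Rect   rect;
    };
};

double area(const Shape &s) {
    switch (s.kind) {  // dispatch on the tag, like a hand-rolled virtual call
    case CIRCLE: return 3.14159265 * s.circle.radius * s.circle.radius;
    case RECT:   return s.rect.w * s.rect.h;
    }
    return 0.0;
}

int main() {
    Shape c; c.kind = CIRCLE; c.circle.radius = 2.0;
    Shape r; r.kind = RECT;   r.rect.w = 3.0; r.rect.h = 4.0;
    std::cout << area(c) << "\n";   // about 12.566
    std::cout << area(r) << "\n";   // 12
}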