#include <iostream>

int main()
{
    using namespace std;
    int number, result;
    cout << "Enter a number: ";
    cin >> number;
    result = number << 1;
    cout << "Result after bitshifting: " << result << endl;
}
If the user inputs 12, the program outputs 24.
In a binary representation, 12 is 0b1100. However, the result the program prints is 24 in decimal, not 8 (0b1000).
Why does this happen? How may I get the result I expect?
Why does the program output 24?
You are right: 12 is 0b1100 in its binary representation. That being said, it is also 0b001100 if you like. In this case, bitshifting to the left gives you 0b011000, which is 24. The program produces the expected result.
Where does this stop?
You are using an int variable. Its size is typically 4 bytes (32 bits) when targeting 32-bit platforms. However, it is a bad idea to rely on int's size. Use <cstdint> (stdint.h in C) when you need variables of a specific size.
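For instance, a minimal sketch using a fixed-width type from <cstdint> (assuming a C++11 compiler):

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t x = 12;           // exactly 32 bits, regardless of platform
    std::cout << (x << 1) << "\n";  // prints 24
}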
A word of warning for bitshifting over signed types
Using the << bitshift operator on negative values is undefined behavior. >>'s behaviour on negative values is implementation-defined. In your case, I would recommend using an unsigned int (or just unsigned, which is the same), because int is signed.
How to get the result you expect?
If you know the size (in bits) of the number the user inputs, you can apply a bitmask with the & (bitwise AND) operator, e.g.
result = (number << 1) & 0b1111; // 0xF would also do the same
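Putting it together, a minimal sketch of the masked version (assuming the user's input fits in 4 bits):

#include <iostream>

int main() {
    unsigned number, result;
    std::cout << "Enter a number: ";
    std::cin >> number;
    result = (number << 1) & 0xF;  // shift, then keep only the low 4 bits
    std::cout << "Result after bitshifting: " << result << "\n";  // 12 -> 8
}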
Related
Sorry for my bad English. I need to build an app which converts hex to RGB. I have a file U1.txt with this content inside:
2 3
008000
FF0000
FFFFFF
FFFF00
FF0000
FFFF00
And my Code::Blocks app:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    int a;
    int b;
    string color;
    ifstream data("U1.txt");
    ofstream result("U1result.txt");
    data >> a;
    data >> b;
    for (int i = 0; i < a * b; i++) {
        data >> color;
        cout << color[0] * 16 + color[1] << endl;
    }
    data.close();
    result.close();
    return 0;
}
This gives me 816, but it should be 0. I think color[0] is not an integer but a char, so it multiplies using the ASCII number. I've tried many ways with atoi and c_str() and it's not working. P.S. Do not suggest stoi(), because I need to do this homework with older C++. Thanks in advance and have a good day ;)
You can directly store the hexadecimal values in an int with std::hex.
int b;
ifstream data("U1.txt");
data >> std::hex >> b;
Since those encodings use 24 bits, you have to start out with an integer type that holds at least 24 bits. And for this kind of packing and unpacking, it really ought to be unsigned, so you don't get tangled up in sign bits. That means using std::uint_least32_t, which is the smallest unsigned type that can hold at least 32 bits. (Yes, 24 would fit better, but there is no least24 type; 32 is the best you can do).
If your compiler doesn't provide those fixed-width types (std::uint_least32_t), you can use unsigned long, which is required to be at least 32 bits wide. It could be larger; the point of std::uint_least32_t is that it picks the smallest suitable type, which might be, for example, a 32-bit unsigned int. But you can't count on unsigned int being wide enough, so either use the fixed-width type or use unsigned long to be sure you have enough bits.
Since the character inputs are encoded in hexadecimal, you need to tell the input system to interpret them as hex values. So:
std::uint_least32_t value;
data >> std::hex >> value;
Now you've got the value in the low 24 bits of value. You need to pick out the individual R, G, and B parts of that value. That's straightforward. To get the low 8 bits, just mask out the higher ones:
std::cout << (value & 0xFF) << '\n';
To get the next 8 bits, shift and mask:
std::cout << ((value >> 8) & 0xFF) << '\n';
And, naturally, to get the upper 8 bits, shift and mask:
std::cout << ((value >> 16) & 0xFF) << '\n';
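Putting the pieces together, a minimal sketch of the whole program (assuming the U1.txt layout from the question: two counts on the first line, then a*b hex colors):

#include <cstdint>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream data("U1.txt");
    int a, b;
    data >> a >> b;                  // header line: dimensions, read as decimal
    for (int i = 0; i < a * b; ++i) {
        std::uint_least32_t value;
        data >> std::hex >> value;   // read one 0xRRGGBB color
        std::cout << ((value >> 16) & 0xFF) << ' '  // R: upper 8 bits
                  << ((value >> 8) & 0xFF) << ' '   // G: middle 8 bits
                  << (value & 0xFF) << '\n';        // B: low 8 bits
    }
}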
A rather inelegant but also working answer is to subtract 48 from each of your chars, as that's where the digits start in ASCII. (Note that this only covers the digits 0-9; the hex letters A-F need separate handling.) It is also the reason why you get 816, as:
48*16+48 = 816
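For completeness, a sketch of a hypothetical helper (not part of the original answer) that also covers the hex letters:

// Hypothetical helper: convert one hex digit character to its numeric value.
int hexDigit(char c) {
    if (c >= '0' && c <= '9') return c - '0';       // '0' is 48 in ASCII
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;                                      // not a hex digit
}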
I have the following exercise:
Implement a function void float_to_bits(float x) which prints the bit representation of x. Hint: Casting a float to an int truncates the fractional part, but no information is lost casting a float pointer to an int pointer.
Now, I know that a float is represented by a sign-bit, some bits for its mantissa, some bits for the basis and some bits for the exponent. It depends on my system how many bits are used.
The problem we are facing here is that our number basically has two parts. Let's consider 8.7; the bit representation of this number would be (to my understanding) the following: 1000.0111
Now, floats are stored with a leading zero, so 8.8 would become 0.88*10^1.
So I somehow have to get all the information out of my memory. I don't really see how I should do that. What should that hint point me to? What's the difference between an integer pointer and a float pointer?
Currently I have this:
void float_to_bits() {
    float a = 4.2345678f;
    int* b;
    b = (int*)(&a);
    *b = a;
    std::cout << *(b) << "\n";
}
But I really don't get the bigger picture behind the hint here. How do I get the mantissa, the exponent, the sign and the basis? I also tried playing around with the bitwise operators >> and <<, but I just don't see how they should help me here, since they won't change the pointer's position. They're useful to get e.g. the bit representation of an integer, but that's about it; I have no idea what use they'd be here.
The hint your teacher gave is misleading: casting pointers between different types is at best implementation-defined. However, memcpy()ing an object into a suitably sized array of unsigned char is well-defined. The content of the resulting array can then be decomposed into bits. Here is a quick hack to represent the bits using hexadecimal values:
#include <iostream>
#include <iomanip>
#include <cstring>

int main() {
    float f = 8.7;
    unsigned char bytes[sizeof(float)];
    std::memcpy(bytes, &f, sizeof(float));
    std::cout << std::hex << std::setfill('0');
    for (int b : bytes) {
        std::cout << std::setw(2) << b;
    }
    std::cout << '\n';
}
Note that IEEE 754 binary floating points do not store the full significand (the standard doesn’t use mantissa as a term) except for denormalized values: the 32 bit floats store
1 bit for the sign
8 bits for the exponent
23 bits for the normalized significand with the non-zero high bit being implied
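For completeness, a sketch that picks those fields apart (assuming a 32-bit IEEE 754 float, and using the same memcpy approach to stay well-defined):

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    float f = 8.7f;
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // well-defined, unlike the pointer cast
    std::cout << "sign:        " << (bits >> 31) << '\n'
              << "exponent:    " << ((bits >> 23) & 0xFF) << '\n'
              << "significand: " << (bits & 0x7FFFFF) << '\n';
}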
The hint directs you to a way of viewing the float as an integer without going through a value conversion.
When you assign a floating-point value to an integer, the processor removes the fractional part: int i = (int) 4.502f; will result in i = 4.
But when you make an int pointer (int*) point to a float's location, no conversion is made, even when you read the value through the int*.
To show the representation, I like seeing hex numbers (each hexadecimal digit represents 4 binary digits), but it is also possible to print in binary, and there are many ways; I like this one best!
An annotated example follows:
#include <iostream>
#include <bitset>
using namespace std;

int main()
{
    float a = 4.2345678f;    // allocate space for a float, call it 'a', and put the floating point value 4.2345678f in it
    unsigned int* b;         // allocate space for a pointer (an address), call it 'b' (and hint to the compiler that it will point to an integer)
    b = (unsigned int*)(&a); // GREAT, exactly what you needed! Take the float 'a' and get its address with '&'.
                             // By default that address points at a float (float*), so you cast it to (unsigned int*).
                             // Bottom line: set 'b' to the address of 'a', but treat that address as an int's!
                             // The hint implied that this won't cause a type conversion:
                             // int someInt = a; // would cause someInt = 4; the same goes for your line below:
                             // *b = a;          // <<<< this was your error.
                             // First, it isn't required, as 'b' already points to a's address, and hence has its value.
                             // Second, it would set the value pointed to by 'b' to 'a' (including a conversion to int = 4);
                             // the value in 'a' would actually change too by this instruction.
    cout << a << " in binary " << bitset<32>(*b) << endl;
    cout << "Sign     " << bitset<1>(*b >> 31) << endl; // 1 bit  (31)
    cout << "Exp      " << bitset<8>(*b >> 23) << endl; // 8 bits (23-30)
    cout << "Mantissa " << bitset<23>(*b) << endl;      // 23 bits (0-22)
}
I just learned some simple encryption today and wrote a simple program to convert my text to 10-bit binary. I'm not sure if I'm doing it correctly, but the commented-out line and the actual loop produce 2 different 10-bit outputs. I am confused. Can someone explain it to me in layman's terms?
#include <iostream>
#include <string>
#include <bitset>
#include "md5.h"
using namespace std;
using std::cout;
using std::endl;

int main(int argc, char *argv[])
{
    string input = "";
    cout << "Please enter a string:\n>";
    getline(cin, input);
    cout << "You entered: " << input << endl;
    cout << "md5 of " << input << ": " << md5("input") << endl;
    cout << "Binary is: ";
    // cout << bitset<10>(input[1]);
    for (int i = 0; i < 5; i++)
        cout << bitset<2>(input[i]);
    cout << endl;
    return 0;
}
tl;dr: A char is 8 bits, and the string's operator[] returns the individual chars; as such, you accessed different chars and took the low two bits of each. The solution comes in treating a char as exactly that: 8 bits. By doing some clever bit manipulation, we can achieve the desired effect.
The problem
While I still have not completely understood what you were trying to do, I can point out a problem with this code:
By calling
cout << bitset<10>(input[1]);
you are constructing a 10-bit bitset from the value of the second character (input[0] would give you the first character).
Now, the loop does something entirely different:
for (int i=0; i<5; i++)
cout << bitset<2>(input[i]);
It uses the i-th character of the string and constructs a bitset from it.
The reference of the bitset constructor tells us this means the char is converted to an unsigned long long, which is then converted to a bitset.
Okay, so let's see how that works with a simple input string like
std::string input = "aaaaa";
The first character of this string is 'a', which gives you the 8 bits of '01100001' (ASCII table), and thus the 10 bit bitset that is constructed from that turns out to print
0001100001
where we see a clear padding for the bits to the left (more significant).
On the other hand, if you go through the characters with your loop, you access each character and take only its low 2 bits.
In the case of the character 'a' = '01100001', these bits are '01'. So your program would output 01 five times.
Now, the way to fix it is to actually think more about the bits you are actually accessing.
A possible solution
Do you want to get the first ten bits of the character string in any case?
In that case, you'd want to write something like:
std::bitset<10>(input[0]);
//Will pad the first two bits of the bitset as '0'
or
for (int i = 0; i < 5; ++i) {
    char referenced = input[i / 4];
    std::cout << std::bitset<2>(referenced >> (6 - (i % 4) * 2));
}
The loop code was redesigned to read the string sequentially into 2-bit bitsets (here the first ten bits; raise the loop bound to cover the whole string).
Since a char has 8 bits, we can read 4 such 2-bit groups out of a single char; that is the reason for referenced and the i/4 indexing.
The bitshift in the lower part of the loop makes it so it starts with a shift of 6, then 4, then 2, then 0, and then resets to 6 for the next char, etc...
(That way, we can extract the 2 relevant bits out of each 8bit char)
This type of loop will actually read through all parts of your string and do the correct constructions.
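Here is that loop as a minimal self-contained sketch (using an assumed sample string "aaaaa"):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string input = "aaaaa";
    // Print the first ten bits, two at a time, most significant first.
    for (int i = 0; i < 5; ++i) {
        char referenced = input[i / 4];
        std::cout << std::bitset<2>(referenced >> (6 - (i % 4) * 2));
    }
    std::cout << '\n';  // prints 0110000101 for "aaaaa"
}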
A last remark
To construct a bitset directly from your string, you would have to use the raw memory in bits and from that construct the bitset.
You could construct 8 bit bitsets from each char and append those to each other, or create a string from each 8 bit bitset, concatenate those and then use the final string of 1 and 0 to construct a large bitset of arbitrary size.
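For instance, a minimal sketch of the string-concatenation approach (note the bitset size must still be a compile-time constant, so it isn't truly arbitrary):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string input = "aaaaa";
    std::string bitsAsText;
    for (char c : input)
        bitsAsText += std::bitset<8>(c).to_string();  // 8 bits per char, as '0'/'1' text
    std::bitset<40> all(bitsAsText);                  // 5 chars * 8 bits
    std::cout << all << '\n';
}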
I hope it helped.
I made a little program to determine the length of a user-provided integer:
#include <iostream>
using namespace std;

int main()
{
    int c = 0; // counter for loop
    int q = 1; // quotient of number upon division
    cout << "Hello Cerberus! Please enter a number." << endl;
    cin >> q;
    if (q > -10 && q < 10)
    {
        cout << "The number you entered is 1 digit long." << endl;
    }
    else
    {
        while (q != 0)
        {
            q = q / 10;
            c++;
        }
        cout << "The number you entered is " << c << " digits long." << endl;
    }
    return 0;
}
It works quite nicely, unless the numbers get too big. Once the input is 13 digits long or so, the program defaults to "The number you entered is 1 digit long" (it shouldn't even present that solution unless the number is between -10 and 10).
Is there a length limit for user-input integers, or is this demonstrative of my computer's memory limits?
It's a limit in your computer's architecture. Every numeric type has a fixed upper limit, because the type describes data with a fixed size. For example, your int is likely to take up either four or eight bytes in memory (depending on CPU; based on your observations, I'd say the former), and there are only so many combinations of bits that can be stored in so many bytes of memory.
You can determine the range of int on your platform using std::numeric_limits, but personally I recommend sticking with the fixed-width type aliases (e.g. int32_t, int64_t) and picking whichever ones have sufficient range for your application.
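For example, a quick sketch printing the range of int on your platform:

#include <iostream>
#include <limits>

int main() {
    std::cout << "int range: " << std::numeric_limits<int>::min()
              << " to " << std::numeric_limits<int>::max() << '\n';
}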
Alternatively, there do exist so-called "bigint" libraries that are essentially classes wrapping integer arrays and adding clever functionality to make arbitrarily-large values work as if they were of arithmetic types. That's probably overkill for you here though.
Just don't be tempted to start using floating-point types (float, double) for their magic range-enhancing abilities; just like with the integral types, their precision is fundamentally limited, but using floating-point types adds additional problems and concerns on top.
There is no fundamental limit on user input, though. That's because your stream is converting text characters, and your stream can basically have as many text characters in it as you could possibly imagine. At that level, you're really only limited by available memory.
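As a sketch of an alternative (not part of your original program): read the input as text and count its characters instead of dividing.

#include <iostream>
#include <string>

int main() {
    std::string number;
    std::cout << "Hello Cerberus! Please enter a number.\n";
    std::cin >> number;
    std::size_t digits = number.size();
    if (!number.empty() && (number[0] == '-' || number[0] == '+'))
        --digits;  // a leading sign is not a digit
    std::cout << "The number you entered is " << digits << " digits long.\n";
}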
I've been working on an assignment where I have to use the bitwise operators (OR, AND, or NOT).
The program has a fixed 4x4 matrix, and the user is supposed to enter a query ANDing two binary numbers, ORing them, etc.
The problem is that "zero-leading" binary numbers, for example 0111, are shown with the value 73,
even when I manage to cout it with setfill() and setw().
I can't perform the bitwise operation on the actual binary value!
N.B.: I've tried strings instead of ints, but the bitwise operations still don't apply.
For example, if I want to AND two binary values, let's say
int x=1100 and int y=0100, into another int z:
z=x&y;
the result is supposed to be 0100,
but the result that appears is 64,
which is also the result that appears if I try to print y to the screen.
#include <iostream>
#include <string>
#include <iomanip>
#include <cstdlib>
using namespace std;

int main()
{
    int Matrix[4][4] = {{1,1,0,0}, {1,1,0,1}, {1,1,0,1}, {0,1,0,0}};
    string Doc[4] = {"Doc1", "Doc2", "Doc3", "Doc4"};
    string Term[4] = {"T1", "T2", "T3", "T4"};
    cout << "THE MATRIX IS:" << endl;
    for (int i = 0; i < 4; i++)
    {
        cout << "\t" << Doc[i];
    }
    cout << "\n";
    for (int row = 0; row < 4; row++)
    {
        cout << Term[row] << "\t";
        for (int col = 0; col < 4; col++)
        {
            cout << Matrix[row][col] << "\t";
        }
        cout << endl;
    }
    int term1 = 1100;
    cout << "\nTerm1= " << term1;
    int term2 = 1101;
    cout << "\nTerm2= " << term2;
    int term3 = 1101;
    cout << "\nTerm3= " << term3;
    int term4 = 0100;
    cout << "\nTerm4= " << setfill('0') << setw(4) << term4;
    int Q = term1 & term4;
    cout << "\n Term1 and Term4 =" << Q;
    system("pause");
    return 0;
}
When you write 0111 in your code, the compiler will assume it's octal, since integer literals starting with a zero are octal. If you wrote 111, it would be decimal.
C++14 added a binary literal prefix, so you can write 0b111 to get what you want.
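A quick sketch (requires C++14):

#include <iostream>

int main() {
    int x = 0b1100;                // 12
    int y = 0b0100;                // 4
    std::cout << (x & y) << '\n';  // prints 4, i.e. 0b0100
}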
Your question is still not clear. You have said you have a 4x4 matrix, but what type of matrix or 2D array is it? Maybe you can elaborate more.
Regarding dealing with binaries, what usually confuses students is that if you are using integer variables, you can apply bitwise manipulation to them and the result will still be read in integer format. If you want to see what is happening during the bitwise manipulation and visualize the process, you can always use a bitset object, as follows.
#include <iostream>
#include <bitset>

int main() {
    int a = 7, b = a >> 3, c = a << 2;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<8>(c) << std::endl;
}
Which should print
a = 00000111
b = 00000000
c = 00011100
So playing around with your variables and then visualizing them as binary using bitset is the best way to learn how the HEX, OCT, DEC, and BIN representations work.
And by the way, if you are reading 73 as an integer, then that memory location stores 0100 1001 as binary (if it's unsigned), which is 111 in octal, the base-8 number representation. See http://coderstoolbox.net/number/
Best of luck