This is a test program to get my bitwise shift operations working. I am hoping to add this to my cache simulator program, but I can't even get this part to work. My plan is to use the bit-shift operators (<< and >>) to isolate parts of a given memory address (tag, set, word, etc.), but it seems that when I shift the bits back to the right, the vacated positions are filled with the values that were previously there rather than with 0s. Here is the program first.
#include <iostream>
#include <cmath>

struct Address {
    unsigned int tag;
    unsigned int r;
    unsigned int word;
};

int main() {
    unsigned int tempAddress = 27; // 0011011
    int ramSize = 128;
    int cacheSize = 64;
    int blockSize = 8;
    int cacheLines = cacheSize / blockSize;
    int addressLength = log(ramSize) / log(2);
    int wordBits = log(blockSize) / log(2);
    int rBits = log(cacheLines) / log(2);
    int tagBits = addressLength - (wordBits + rBits);

    struct Address address;
    address.tag = tempAddress >> (rBits + wordBits);
    address.r = tempAddress << (tagBits) >> (tagBits + wordBits);
    address.word = tempAddress << (rBits + tagBits) >> (rBits + tagBits);

    std::cout << "tag is: " << address.tag << "\n";
    std::cout << "r is: " << address.r << "\n";
    std::cout << "word is: " << address.word << "\n";
}
I've found that when my tempAddress is [0-7] it works fine because binary 7 only affects the first 3 bits.
Similarly, when it is [8-63], tag and r are correct because 63 affects the first 6 bits.
Upon testing many addresses, I've found that when shifting right after shifting left, the bits are being replaced with what they were before, rather than with 0s as I think they should be.
(r is the part that is in the middle. I am calling it r because in direct mapping it is called line, and in set-associative mapping it is called set)
EDIT:
As someone pointed out, showing the expected and actual output would be helpful. I'd first like to keep the cache size, RAM size, and block size constant, and only change the address.
So, given tempAddress = 27 (0011011 in binary), word should be 011 (the first 3 bits), r should be 011 (the next 3), and tag should be 0 (the remaining bits).
Output is this:
tag is: 0
r is: 3
word is: 27
I've found this to be the trend for every address between 0 and 63 (inclusive): tag and r are correct, but word is equal to the address.
Now, for address = 65(1000001) Expected:
tag is: 1
r is: 0
word is: 1
Output:
tag is: 1
r is: 8
word is: 65
With these RAM, cache, and block sizes, to find r I am left-shifting 1 time and right-shifting 4 times. To find word, I am left-shifting 4 times, then right-shifting 4 times. As I understand it, when left-shifting, the bits on the right are filled with 0, and when right-shifting an unsigned integer, the bits on the left are filled with 0. My thought was that if I left-shift until only the bits I need remain, then right-shift them back to the lowest positions, I will have the correct values. However, consistently across numerous addresses, after left-shifting then right-shifting, the places that had 1s still do. That is why word is always equal to the address: I am shifting 4 bits both left and then right. And r is always equal to the 7 bits shifted right 3 times (because I go left 1 and then right 4). Am I misunderstanding how bitwise shifting works?
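Here is a stripped-down illustration (separate from the simulator code above) of the behavior I'm describing, applying the same shift counts to 27 as a 32-bit unsigned int:

#include <iostream>

int main() {
    unsigned int a = 27;                  // 0011011 in the low bits of a 32-bit value

    // "word": shift left 4, then right 4
    std::cout << (a << 4 >> 4) << "\n";   // prints 27, not the 3 I expected

    // "r": shift left 1, then right 4
    std::cout << (a << 1 >> 4) << "\n";   // prints 3
}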
I'm having an issue shifting bits and I cannot seem to understand what is going on. I'm able to shift bits to the left perfectly fine but I am unable to shift bits to the right. Here's a look at what is going on:
Shifting to the left (works as intended):
// bits is a char that = 00011000
// Shift the bits to the left
bits = bits << 3;
// Print the bits
std::bitset<8> x(bits);
std::cout << "Shifted Bits: " << x << std::endl;
This produces the expected output of 11000000 as '00011000' has been shifted to the left 3 bits.
It looks like 1's are added instead of 0s when I attempt to shift to the right, however:
// bits = 11000000 from the previous operation
// Shift the bits to the right
bits = bits >> 5;
// Print the bits
std::bitset<8> y(bits);
std::cout << "Shifted (right) bits: " << y << std::endl;
This produces the unexpected output of 11111110 when the expected output was '00000110' as I was attempting to shift '11000000' 5 places to the right.
Is there an explanation/reason for why the shift operator is adding 0s to the new spaces when I shift to the left but then adding 1s to the new spaces when I shift to the right?
Thanks!
EDIT:
I figured out what was going on. I was extracting bits from an array of DDS_Octets but I was using a char for the bits variable. It looked like this:
DDS_Octet* buffer_ = new DDS_Octet(BUFFSIZE);
fill_buffer(buffer_);
// Here is where the issue is
// I am implicitly casting a DDS_Octet to a char which is a no-no
char bits = buffer_[0];
// The code above now happens here
When I made 'bits' the same type as the buffer I no longer had this issue of 1s being added instead of 0's.
I'll leave this here in case someone has the same problem I did and stumbles upon this.
You use a char for bits. Whether char is signed or unsigned depends on the platform. You should use unsigned char to get the behavior you expect.
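A minimal sketch of the difference (assuming a typical platform where plain char is signed and right-shifting a negative value is an arithmetic shift):

#include <bitset>
#include <iostream>

int main() {
    char          sbits = static_cast<char>(0xC0);   // 11000000, negative if char is signed
    unsigned char ubits = 0xC0;                      // 11000000, never negative

    std::cout << std::bitset<8>(sbits >> 5) << "\n"; // typically 11111110 (sign bits shifted in)
    std::cout << std::bitset<8>(ubits >> 5) << "\n"; // 00000110 (zero-filled)
}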
I am currently working on a project for school covering bit manipulation. We are supposed to show the bits for an unsigned integer variable and allow the user to manipulate them, turning them on and off and shifting them. I have all of the functionality working, except for displaying the bits once they have been manipulated. We are NOT allowed to use bitset to display the bits, and it will result in a heavy grade reduction.
I have tried using if statements to determine whether the bits are on or off, but this does not seem to be working. Whenever a bit is changed, it will simply print a lot of 0's and 1's.
std::cout << "Bits: ";
for (int i = sizeof(int)*8; i > 0; i--)
{
if (a | (0 << i) == 1)
std::cout << 1;
if (a | (0 << i) == 0)
std::cout << 0;
}
std::cout << std::endl << a;
I would expect that if I turn a bit on, that one bit will display a 1 instead of a 0, with the rest of the bits being unchanged and still displaying 0; instead it prints a string of 1010101 about the length of half the console.
There are a couple of problems here, and you might want to do a detailed review of bit manipulation (a corrected loop is sketched after this list):
for (int i = sizeof(int)*8; i > 0; i--) should be for (int i = sizeof(int)*8 - 1; i >= 0; i--), because bits are 0-indexed (shifting 1 to the left 0 times gives a set bit on the rightmost position).
We use bitwise AND (&) instead of bitwise OR (|) to check if a bit is set. This is because when we use bitwise AND with a number that only has a single bit set, the result will be a mask with the bit at the position of the 1 being in the same state as the corresponding bit in the original number (since anything AND 1 is itself), and all other bits being 0's (since anything AND 0 is 0).
We want a mask with 1 in the position that we want to check and 0 elsewhere, so we need 1 << i instead of 0 << i.
If the bit we're checking is set, we'll end up with a number that has one bit set, but that's not necessarily 1. So we should check if the result is not equal to 0 instead of checking if it's equal to 1.
The == operator has higher precedence than the | and & operators, so parentheses are needed around the masking expression.
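Putting those fixes together, a corrected loop might look like this (a minimal sketch, with an example value for a):

#include <iostream>

int main()
{
    unsigned int a = 37;  // example value: bits 0, 2, and 5 set

    std::cout << "Bits: ";
    for (int i = sizeof(int) * 8 - 1; i >= 0; i--)   // highest bit first, 0-indexed
    {
        if ((a & (1u << i)) != 0)    // AND with a single-bit mask, compare against 0
            std::cout << 1;
        else
            std::cout << 0;
    }
    std::cout << std::endl << a << std::endl;
}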
I have a vector<char> and I want to be able to get an unsigned integer from a range of bits within the vector, but I can't seem to write the correct operations to get the desired output. My intended algorithm goes like this:
& the first byte with (0xff >> unused bits in byte on the left)
<< the result left the number of output bytes * number of bits in a byte
| this with the final output
For each subsequent byte:
<< left by the (byte width - index) * bits per byte
| this byte with the final output
| the final byte (not shifted) with the final output
>> the final output by the number of unused bits in the byte on the right
And here is my attempt at coding it, which does not give the correct result:
#include <vector>
#include <iostream>
#include <cstdint>
#include <bitset>

template<class byte_type = char>
class BitValues {
private:
    std::vector<byte_type> bytes;

public:
    static const auto bits_per_byte = 8;

    BitValues(std::vector<byte_type> bytes) : bytes(bytes) {
    }

    template<class return_type>
    return_type get_bits(int start, int end) {
        auto byte_start = (start - (start % bits_per_byte)) / bits_per_byte;
        auto byte_end = (end - (end % bits_per_byte)) / bits_per_byte;
        auto byte_width = byte_end - byte_start;

        return_type value = 0;

        unsigned char first = bytes[byte_start];
        first &= (0xff >> start % 8);
        return_type first_wide = first;
        first_wide <<= byte_width;
        value |= first_wide;

        for(auto byte_i = byte_start + 1; byte_i <= byte_end; byte_i++) {
            auto byte_offset = (byte_width - byte_i) * bits_per_byte;
            unsigned char next_thin = bytes[byte_i];
            return_type next_byte = next_thin;
            next_byte <<= byte_offset;
            value |= next_byte;
        }

        value >>= (((byte_end + 1) * bits_per_byte) - end) % bits_per_byte;
        return value;
    }
};

int main() {
    BitValues<char> bits(std::vector<char>({'\x78', '\xDA', '\x05', '\x5F', '\x8A', '\xF1', '\x0F', '\xA0'}));
    std::cout << bits.get_bits<unsigned>(15, 29) << "\n";
    return 0;
}
(In action: http://coliru.stacked-crooked.com/a/261d32875fcf2dc0)
I just can't seem to wrap my head around these bit manipulations, and I find debugging very difficult! If anyone can correct the above code, or help me in any way, it would be much appreciated!
Edit:
My bytes are 8 bits long
The integer to return could be 8, 16, 32, or 64 bits wide
The integer is stored in big-endian order
You made two primary mistakes. The first is here:
first_wide <<= byte_width;
You should be shifting by a bit count, not a byte count. Corrected code is:
first_wide <<= byte_width * bits_per_byte;
The second mistake is here:
auto byte_offset = (byte_width - byte_i) * bits_per_byte;
It should be
auto byte_offset = (byte_end - byte_i) * bits_per_byte;
The value in parentheses needs to be the number of bytes to shift left by, which is also the number of bytes byte_i is away from the end. The value byte_width - byte_i has no semantic meaning (one is a delta, the other is an index).
The rest of the code is fine, though the algorithm itself has two issues.
First, when using your result type to accumulate bits, you assume you have room on the left to spare. This isn't the case if there are set bits near the right boundary and the choice of range causes the bits to be shifted out. For example, try running
bits.get_bits<uint16_t>(11, 27);
You'll get the result 42, which corresponds to the bit string 00000000 00101010. The correct result is 53290 with the bit string 11010000 00101010. Notice how the rightmost 4 bits got zeroed out. This is because you start off by overshifting your value variable, causing those four bits to be shifted out of the variable. When shifting back at the end, this results in the bits being zeroed out.
The second problem has to do with the right shift at the end. If the rightmost bit of the value variable happens to be a 1 before the right shift at the end, and the template parameter is a signed type, then the right shift that is done is an 'arithmetic' right shift, which causes bits on the right to be 1-filled, leaving you with an incorrect negative value.
Example, try running:
bits.get_bits<int16_t>(5, 21);
The expected result should be 6976 with the bit string 00011011 01000000, but the current implementation returns -1216 with the bit string 11111011 01000000.
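Here is a small standalone illustration of that second problem (not from the original code), showing how a right shift on a negative signed value drags 1s in from the left on typical compilers, while the same bit pattern in an unsigned type is zero-filled:

#include <cstdint>
#include <iostream>

int main() {
    int16_t  s = -1216;              // bit pattern 11111011 01000000
    uint16_t u = 0xFB40;             // same bit pattern, unsigned

    std::cout << (s >> 4) << "\n";   // typically -76: sign bits are shifted in
    std::cout << (u >> 4) << "\n";   // 4020: zeros are shifted in
}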
I've put my implementation of this below which builds the bit string from the right to the left, placing bits in their correct positions to start with so that the above two problems are avoided:
template<class ReturnType>
ReturnType get_bits(int start, int end) {
    int max_bits = kBitsPerByte * sizeof(ReturnType);
    if (end - start > max_bits) {
        start = end - max_bits;
    }

    int inclusive_end = end - 1;
    int byte_start = start / kBitsPerByte;
    int byte_end = inclusive_end / kBitsPerByte;

    // Put in the partial-byte on the right
    uint8_t first = bytes_[byte_end];
    int bit_offset = (inclusive_end % kBitsPerByte);
    first >>= 7 - bit_offset;
    bit_offset += 1;
    ReturnType ret = 0 | first;

    // Add the rest of the bytes
    for (int i = byte_end - 1; i >= byte_start; i--) {
        ReturnType tmp = (uint8_t) bytes_[i];
        tmp <<= bit_offset;
        ret |= tmp;
        bit_offset += kBitsPerByte;
    }

    // Mask out the partial byte on the left
    int shift_amt = (end - start);
    if (shift_amt < max_bits) {
        ReturnType mask = (1 << shift_amt) - 1;
        ret &= mask;
    }
    return ret;
}
There is one thing you certainly missed, I think: the way you index the bits in the vector is different from what you were given in the problem. I.e. with the algorithm you outlined, the order of the bits will be like 7 6 5 4 3 2 1 0 | 15 14 13 12 11 10 9 8 | 23 22 21 .... Frankly, I didn't read through your whole algorithm, but this was missed in the very first step.
Interesting problem. I've done something similar for systems work.
Your char is 8 bits wide? Or 16? How big is your integer? 32 or 64?
Ignore the vector complexity for a minute.
Think about it as just an array of bits.
How many bits do you have? You have 8*number of chars
You need to calculate a starting char, number of bits to extract, ending char, number of bits there, and number of chars in the middle.
You will need bitwise AND (&) for the first partial char
You will need bitwise AND (&) for the last partial char
You will need left-shift << (or right-shift >>), depending upon which order you start from
What is the endianness of your integer?
At some point you will calculate an index into your array that is bitindex / char_bit_width. You gave the value 171 as your bitindex and 8 as your char_bit_width, so you will end up with these useful values calculated:
171/8 = 21 //location of first byte
171%8 = 3 //bits in first char/byte
8 - 171%8 = 5 //bits in last char/byte
sizeof(integer) = 4
sizeof(integer) + ( (171%8)>0?1:0 ) // how many array positions to examine
Some assembly required...
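A rough sketch of that index arithmetic in C++ (the variable names here are just illustrative, not from the original post):

#include <iostream>

int main() {
    const int char_bit_width = 8;
    const int bit_index = 171;          // example starting bit from the question
    const int integer_bytes = 4;        // sizeof the integer being extracted

    int first_byte   = bit_index / char_bit_width;                 // 21
    int bit_offset   = bit_index % char_bit_width;                 // 3: offset into the first byte
    int bits_in_last = char_bit_width - bit_offset;                // 5
    int chars_needed = integer_bytes + (bit_offset > 0 ? 1 : 0);   // array positions to examine

    std::cout << first_byte << " " << bit_offset << " "
              << bits_in_last << " " << chars_needed << "\n";
}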
I have an array from a serial read, named sensor_buffer. It contains 21 bytes.
gyro_out_X=((sensor_buffer[1]<<8)+sensor_buffer[2]);
gyro_out_Y=((sensor_buffer[3]<<8)+sensor_buffer[4]);
gyro_out_Z=((sensor_buffer[5]<<8)+sensor_buffer[6]);
acc_out_X=((sensor_buffer[7]<<8)+sensor_buffer[8]);
acc_out_Y=((sensor_buffer[9]<<8)+sensor_buffer[10]);
acc_out_Z=((sensor_buffer[11]<<8)+sensor_buffer[12]);
HMC_xo=((sensor_buffer[13]<<8)+sensor_buffer[14]);
HMC_yo=((sensor_buffer[15]<<8)+sensor_buffer[16]);
HMC_zo=((sensor_buffer[17]<<8)+sensor_buffer[18]);
adc_pressure=(((long)sensor_buffer[19]<<16)+(sensor_buffer[20]<<8)+sensor_buffer[21]);
What does a line like this do:
variable = (array_var << 8) + next_array_var
What effect does the << 8 have on the bits?
UPDATE:
Any example in another language (Java, Processing)?
Example for Processing (why use 'H' as the header?):
/*
 * ReceiveBinaryData_P
 *
 * portIndex must be set to the port connected to the Arduino
 */
import processing.serial.*;

Serial myPort;        // Create object from Serial class
short portIndex = 1;  // select the com port, 0 is the first port
char HEADER = 'H';
int value1, value2;   // Data received from the serial port

void setup()
{
  size(600, 600);
  // Open whatever serial port is connected to Arduino.
  String portName = Serial.list()[portIndex];
  println(Serial.list());
  println(" Connecting to -> " + Serial.list()[portIndex]);
  myPort = new Serial(this, portName, 9600);
}

void draw()
{
  // read the header and two binary (16 bit) integers:
  if ( myPort.available() >= 5)   // If at least 5 bytes are available,
  {
    if( myPort.read() == HEADER)  // is this the header
    {
      value1 = myPort.read();                // read the least significant byte
      value1 = myPort.read() * 256 + value1; // add the most significant byte
      value2 = myPort.read();                // read the least significant byte
      value2 = myPort.read() * 256 + value2; // add the most significant byte
      println("Message received: " + value1 + "," + value2);
    }
  }
  background(255); // Set background to white
  fill(0);         // set fill to black
  // draw rectangle with coordinates based on the integers received from Arduino
  rect(0, 0, value1, value2);
}
Your code has the same pattern:
value = (partial_value << 8) | (other_partial_value)
Your array has data stored as 8-bit bytes, but the values are 16 bits wide. Each of your data points is two bytes, with the most significant byte stored first in your array. This pattern simply builds the full 16-bit value by shifting the most significant byte 8 bits to the left, then OR'ing the least significant byte into the lower 8 bits.
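A minimal sketch of that reconstruction (assuming the buffer holds unsigned bytes with the most significant byte first; the values here are made up):

#include <cstdint>
#include <iostream>

int main() {
    uint8_t buffer[2] = { 0x12, 0x34 };            // most significant byte first

    uint16_t value = (buffer[0] << 8) | buffer[1]; // 0x1200 | 0x0034 = 0x1234
    std::cout << std::hex << value << "\n";        // prints 1234
}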
It's a shift operator. It shifts the bits in your variable to the left by 8. A shift by 1 bit to the left is equivalent to multiplying by two (shifting to the right divides by 2). So essentially << 8 is equivalent to multiplying by 2^8.
See here for a list of C++ operators and what they do:
http://en.wikipedia.org/wiki/C%2B%2B_operators
<< is the left bit-shift operator, the result is the bits from the first operand moved to the left, with 0 bits filling in from the right.
A simple example in pseudocode:
x = 10000101;
x = x << 3;
now x is "00101000"
Study the Bitwise operation article on wikipedia for an introduction.
This is just a bit shift operator. It is basically taking the value and shifting the bits 8 places to the left, which is equivalent to multiplying the value by 2^8. The code looks like it is reading 2 bytes of the array at a time and creating a 16-bit integer from each pair.
It seems that sensor_buffer is an array of chars.
In order to get your value, e.g. gyro_out_X you have to combine sensor_buffer[1] and sensor_buffer[2],
where
sensor_buffer[1] holds the most significant byte and
sensor_buffer[2] holds the least significant byte
in that case
int gyro_out_X=((sensor_buffer[1]<<8)+sensor_buffer[2]);
combines the two bytes:
if sensor_buffer[1] is 0xFF
and sensor_buffer[2] is 0x10
then gyro_out_X is 0xFF10
It shifts the bits 8 places to the left, e.g.:
0000000001000100 << 8 = 0100010000000000
0000000001000100 << 1 =
0000000010001000 << 1 =
0000000100010000 << 1 =
0000001000100000 << 1 =
0000010001000000 << 1 =
0000100010000000 << 1 =
0001000100000000 << 1 =
0010001000000000 << 1 =
0100010000000000
What does >> do in this situation?
int n = 500;
unsigned int max = n>>4;
cout << max;
It prints out 31.
What did it do to 500 to get it to 31?
Bit shifted!
Original binary of 500:
111110100
Shifted 4
000011111 which is 31!
Original: 111110100
1st Shift:011111010
2nd Shift:001111101
3rd Shift:000111110
4th Shift:000011111 which equals 31.
This is equivalent to doing integer division by 16.
500/16 = 31
500/2^4 = 31
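A quick check of that equivalence (a throwaway snippet, not from the original answer):

#include <iostream>

int main() {
    int n = 500;
    std::cout << (n >> 4) << " " << (n / 16) << "\n"; // prints 31 31
}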
Some facts pulled from here: http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/BitOp/bitshift.html (because blurting this out from my head results in unproductive rambling... these folks state it much more cleanly than I could):
Shifting left using << causes 0's to be shifted from the least significant end (the right side), and causes bits to fall off from the most significant end (the left side).
Shifting right using >> causes 0's to be shifted from the most significant end (the left side), and causes bits to fall off from the least significant end (the right side) if the number is unsigned.
Bitshifting doesn't change the value of the variable being shifted. Instead, a temporary value is created with the bitshifted result.
500 got bit shifted to the right 4 times.
x >> y mathematically means x / 2^y.
Hence 500 / 2^4 which is equal to 500 / 16. In integer division the result is 31.
It divided 500 by 16 using integer division.
>> is a right-shift operator, which shifted the bits of the binary representation of n to the right 4 times. This is equivalent to dividing n by 2 four times, i.e. dividing it by 2^4 = 16. This is integer division, so the fractional part got truncated.
It shifts the bits of 500 to the right by 4 bit positions, tossing out the rightmost bits as it does so.
500 = 111110100 (binary)
111110100 >> 4 = 11111 = 31
111110100 is 500 in binary. Move the bits to the right and you are left with 11111 which is 31 in binary.
500 in binary is [1 1111 0100]
(4 + 16 + 32 + 64 + 128 + 256)
Shift that to the right 4 times and you lose the lowest 4 bits, resulting in:
[1 1111]
which is 1 + 2 + 4 + 8 + 16 = 31
You can also examine it in Hex:
500(decimal) is 0x1F4(hex).
Then shift to the right 4 bits, or one nibble:
0x1F == 31(dec).
The >> and << operators are shifting operators.
http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Companion/cxx_crib/shift.html
Of course they may be overloaded just to confuse you a little more!
C++ has nice classes to animate what is going on at the bit level
#include <bitset>
#include <iostream>

int main() {
    std::bitset<16> s(500);
    for (int i = 0; i < 4; i++) {
        std::cout << s << std::endl;
        s >>= 1;
    }
    std::cout << s
              << " (dec " << s.to_ulong() << ")"
              << std::endl;
}