Virtual machine for LC-3 - C++

Hello, I can't figure out why, in the ADD instruction, I need to AND by 7.
This is the C++ code for the ADD instruction:
uint16_t dr = (instr >> 9) & 0b111;
uint16_t sr1 = (instr >> 6) & 0b111;
uint16_t sr2 = instr & 0b111;
uint16_t second = registers[sr2];
uint16_t immediateFlag = (instr >> 5) & 0b1;
if (immediateFlag) {
    uint16_t imm5 = instr & 0b11111;
    second = signExtend(imm5, 5);
}
registers[dr] = registers[sr1] + second;
All the lines ANDed with 7 are the parts I don't get.
This is what the instruction looks like:
bits 15-12 opcode(0001)
bits 11-9 destination register
bits 8-6 source1
bit 5 0 or 1 (immediate mode)
bits 4-3 nothing
bits 2-0 source2
How does this 0b111 (7 in decimal) come into play and why?

Take a look at the first line of code: it tries to decode the destination register, which is in bits 9-11 of your input number.
Assuming instr has 16 bits abcdefgh ijklmnop, then we want to extract bits 9-11, which are efg:
instr >> 9 shifts everything to the right by 9 bits, but the answer still has 16 bits: 00000000 0abcdefg.
& 0b111 is a shorthand for & 0b00000000 00000111, so applying that to instr >> 9 results in 00000000 00000efg, or exactly the three bits we were hoping to extract.
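To make this concrete, here is a small sketch of my own (not part of the original VM code) that decodes a hypothetical ADD instruction and prints the extracted fields; the instruction value is made up for illustration:
#include <cstdint>
#include <cstdio>
int main()
{
    // Hypothetical encoding of ADD R2, R3, R1: 0001 010 011 0 00 001
    uint16_t instr = 0b0001010011000001;
    uint16_t dr  = (instr >> 9) & 0b111; // bits 11-9 -> 010 = 2
    uint16_t sr1 = (instr >> 6) & 0b111; // bits 8-6  -> 011 = 3
    uint16_t sr2 = instr & 0b111;        // bits 2-0  -> 001 = 1
    std::printf("dr=%d sr1=%d sr2=%d\n", dr, sr1, sr2); // prints dr=2 sr1=3 sr2=1
    return 0;
}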

Related

Masking and shifting, uint8_t into uint32_t

Excuse me if this is a newbie question:
I have four uint8_t variables: first, second, third, fourth.
My objective is to pack them into a uint32_t so that the uint32_t is composed as
fourth-third-second-first, taking only the 7 least significant bits of every uint8_t, and so padding the 4 most significant bits of the uint32_t with zeros.
For example, let's say:
first = 10000000 -> I'm gonna put 0000000
second = 10011001 -> I'm gonna put 0011001
third = 10101010 -> I'm gonna put 0101010
fourth = 01111111 -> I'm gonna put 1111111
The uint32_t should end up being:
00001111 11101010 10001100 10000000
That is: 4zerosOfPadding-fourth-third-second-first
How can I do this using masking and shifting?
Edit:
What I tried is:
uint32_t target = 0;
uint8_t first = 128, second = 153, third = 170, fourth = 127;
//127 = 0111 1111
target = (first & 127);
target = (target >> 7) | (second & 127);
target = (target >> 14) | (third & 127);
target = (target >> 21) | (fourth & 127);
But what I get with this is that I just overwrite target every time with the 7 least significant bits of the current uint8_t. I can't understand how to use shifting properly.
Thanks everyone for the help.
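Since no answer is recorded above, here is a minimal sketch of one way to do the packing, assuming the layout described in the question (first in the lowest 7 bits, fourth just below the 4 padding bits): mask each value to 7 bits, shift it left to its field offset, and OR it into the target.
#include <cstdint>
#include <cstdio>
int main()
{
    uint8_t first = 128, second = 153, third = 170, fourth = 127;
    uint32_t target = 0;
    target |= (uint32_t)(first  & 0x7F);        // bits  6-0
    target |= (uint32_t)(second & 0x7F) << 7;   // bits 13-7
    target |= (uint32_t)(third  & 0x7F) << 14;  // bits 20-14
    target |= (uint32_t)(fourth & 0x7F) << 21;  // bits 27-21
    std::printf("%08X\n", (unsigned)target); // 0FEA8C80 = 00001111 11101010 10001100 10000000
    return 0;
}
The key difference from the attempt above is that each new value is shifted left into its own position (instead of shifting target right) and combined with |= instead of overwriting target.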

Counting the one bits in a byte with O(1) time complexity, C++ code

I've searched for an algorithm that counts the number of ones in a byte with O(1) time complexity,
and this is what I found on Google:
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

int BitsSetTable256[256];

// Function to initialise the lookup table
void initialize()
{
    // To initially generate the
    // table algorithmically
    BitsSetTable256[0] = 0;
    for (int i = 0; i < 256; i++)
    {
        BitsSetTable256[i] = (i & 1) +
                             BitsSetTable256[i / 2];
    }
}

// Function to return the count
// of set bits in n
int countSetBits(int n)
{
    return (BitsSetTable256[n & 0xff] +
            BitsSetTable256[(n >> 8) & 0xff] +
            BitsSetTable256[(n >> 16) & 0xff] +
            BitsSetTable256[n >> 24]);
}

// Driver code
int main()
{
    // Initialise the lookup table
    initialize();
    int n = 9;
    cout << countSetBits(n);
}
I understand why I need an array of size 256 (in other words, the size of the lookup table) for indexing from 0 to 255, which are all the decimal values a byte can represent!
But in the function initialize I didn't understand the expression inside the for loop:
BitsSetTable256[i] = (i & 1) + BitsSetTable256[i / 2];
Why am I doing that?! I didn't understand the purpose of this line of code inside the for loop.
In addition, in the function countSetBits, this function returns:
return (BitsSetTable256[n & 0xff] +
BitsSetTable256[(n >> 8) & 0xff] +
BitsSetTable256[(n >> 16) & 0xff] +
BitsSetTable256[n >> 24]);
I didn't understand at all what I'm doing with the bitwise AND with 0xff and why I'm doing the right shifts.
Could anyone please explain the concept to me?! I also didn't understand why, in countSetBits, at BitsSetTable256[n >> 24] we didn't AND with 0xff.
I understand why I need the lookup table of size 2^8, but I didn't understand the other lines of code mentioned above; could anyone please explain them to me in simple words? And what's the purpose of counting the number of ones in a byte?
Thanks a lot, guys!
Concerning the first part of the question:
// Function to initialise the lookup table
void initialize()
{
    // To initially generate the
    // table algorithmically
    BitsSetTable256[0] = 0;
    for (int i = 0; i < 256; i++)
    {
        BitsSetTable256[i] = (i & 1) +
                             BitsSetTable256[i / 2];
    }
}
This is a neat kind of recursion. (Please, note I don't mean "recursive function" but recursion in a more mathematical sense.)
The seed is BitsSetTable256[0] = 0;
Then every element is initialized using the (already computed) result for i / 2, adding 1 or 0 to it. Thereby,
1 is added if the last bit of index i is 1
0 is added if the last bit of index i is 0.
To get the value of the last bit of i, i & 1 is the usual C/C++ bit-mask trick.
Why is the result of BitsSetTable256[i / 2] a value to build upon?
The result of BitsSetTable256[i / 2] is the number of all bits of i the last one excluded.
Please, note that i / 2 and i >> 1 (the value (or bits) shifted to right by 1 whereby the least/last bit is dropped) are equivalent expressions (for positive numbers in the resp. range – edge cases excluded).
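A minimal sketch of my own (not from the answer) that builds the table this way and cross-checks a few entries against std::bitset, just to make the recursion visible:
#include <bitset>
#include <cstdio>
int main()
{
    int table[256];
    table[0] = 0;
    for (int i = 1; i < 256; i++)
        table[i] = (i & 1) + table[i / 2]; // last bit of i + bit count of i with the last bit dropped
    // e.g. table[5] = (5 & 1) + table[2] = 1 + ((2 & 1) + table[1]) = 1 + 0 + 1 = 2
    for (int i : {5, 7, 8, 255})
        std::printf("table[%d] = %d, std::bitset says %zu\n",
                    i, table[i], std::bitset<8>(i).count());
    return 0;
}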
Concerning the other part of the question:
return (BitsSetTable256[n & 0xff] +
BitsSetTable256[(n >> 8) & 0xff] +
BitsSetTable256[(n >> 16) & 0xff] +
BitsSetTable256[n >> 24]);
n & 0xff masks out the upper bits, isolating the lower 8 bits.
(n >> 8) & 0xff shifts the value of n 8 bits to the right (whereby the 8 least significant bits are dropped) and then again masks out the upper bits, isolating the lower 8 bits.
(n >> 16) & 0xff shifts the value of n 16 bits to the right (whereby the 16 least significant bits are dropped) and then again masks out the upper bits, isolating the lower 8 bits.
n >> 24 shifts the value of n 24 bits to the right (whereby the 24 least significant bits are dropped), which should effectively make the upper 8 bits the lower 8 bits.
Assuming that int and unsigned usually have 32 bits on today's common platforms, this covers all bits of n.
Please, note that the right shift of a negative value is implementation-defined.
(I recalled Bitwise shift operators to be sure.)
So, a right-shift of a negative value may fill all upper bits with 1s.
That can break BitsSetTable256[n >> 24]: for a negative n, the index (n >> 24) falls outside the range 0..255 (with an arithmetic shift it becomes negative), so BitsSetTable256[n >> 24] is an out-of-bounds access.
The better solution would've been:
return (BitsSetTable256[n & 0xff] +
BitsSetTable256[(n >> 8) & 0xff] +
BitsSetTable256[(n >> 16) & 0xff] +
BitsSetTable256[(n >> 24) & 0xff]);
BitsSetTable256[0] = 0;
...
BitsSetTable256[i] = (i & 1) +
BitsSetTable256[i / 2];
The above code seeds the look-up table so that each index contains the number of one bits of the number used as the index. It works as follows:
(i & 1) gives 1 for odd numbers, otherwise 0.
An even number will have as many binary 1 as that number divided by 2.
An odd number will have one more binary 1 than that number divided by 2.
Examples:
if i==8 (1000b) then (i & 1) + BitsSetTable256[i / 2] ->
0 + BitsSetTable256[8 / 2] = 0 + index 4 (0100b) = 0 + 1.
if i==7 (0111b) then 1 + BitsSetTable256[7 / 2] = 1 + BitsSetTable256[3] = 1 + index 3 (0011b) = 1 + 2.
If you want some formal mathematical proof why this is so, then I'm not the right person to ask, I'd poke one of the math sites for that.
As for the shift part, it's just the normal way of splitting up a 32-bit value into 4x8 bits, portably and without caring about endianness (any other method to do that is highly questionable). If we un-sloppify the code, we get this:
BitsSetTable256[(n >> 0) & 0xFFu] +
BitsSetTable256[(n >> 8) & 0xFFu] +
BitsSetTable256[(n >> 16) & 0xFFu] +
BitsSetTable256[(n >> 24) & 0xFFu] ;
Each byte is shifted into the LS byte position, then masked out with a & 0xFFu byte mask.
Using bit shifts on int is however code smell and potentially buggy. To avoid poorly-defined behavior, you need to change the function to this:
#include <stdint.h>
uint32_t countSetBits (uint32_t n);
The code in countSetBits takes an int as an argument; apparently 32 bits are assumed. The implementation extracts four single bytes from n by shifting and masking; for these four separated bytes, the lookup table is used, and the numbers of bits per byte are added to yield the result.
The initialization of the lookup table is a bit more tricky and can be seen as a form of dynamic programming. The entries are filled in order of increasing index. The first expression masks out the least significant bit and counts it; the second expression halves the argument (which could also be done by shifting). The resulting argument is smaller; it is then correctly assumed that the necessary value for the smaller argument is already available in the lookup table.
For the access to the lookup table, consider the following example:
input value (contains 5 ones):
01010000 00000010 00000100 00010000
input value, shifting is not necessary
masked with 0xff (11111111)
00000000 00000000 00000000 00010000 (contains 1 one)
input value shifted by 8
00000000 01010000 00000010 00000100
and masked with 0xff (11111111)
00000000 00000000 00000000 00000100 (contains 1 one)
input value shifted by 16
00000000 00000000 01010000 00000010
and masked with 0xff (11111111)
00000000 00000000 00000000 00000010 (contains 1 one)
input value shifted by 24,
masking is not necessary
00000000 00000000 00000000 01010000 (contains 2 ones)
The extracted values have only the lowermost 8 bits set, which means that the corresponding entries are available in the lookup table. The entries from the lookup table are added. The underlying idea is that the number of ones in the argument can be calculated byte-wise (in fact, any partition into bit strings would be suitable).
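As a quick sanity check of my own (not from the answers): the worked example above is the value 0x50020410, and the table-based count does return 5 for it. This sketch uses an unsigned parameter, which also sidesteps the negative-shift issue discussed earlier:
#include <cstdint>
#include <cstdio>
int table[256];
int countSetBits(uint32_t n)
{
    return table[n & 0xff] + table[(n >> 8) & 0xff] +
           table[(n >> 16) & 0xff] + table[(n >> 24) & 0xff];
}
int main()
{
    table[0] = 0;
    for (int i = 1; i < 256; i++)
        table[i] = (i & 1) + table[i / 2];
    std::printf("%d\n", countSetBits(0x50020410u)); // prints 5
    return 0;
}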

Split parts of a uint32_t hex value into smaller parts in C++

I have a uint32_t as follows:
uint32_t midiData=0x9FCC00;
I need to separate this uint32_t into smaller parts so that 9 becomes its own entity, F becomes its own entity, and CC becomes its own entity. If you're wondering what I am doing, I am trying to break up the parts of a MIDI message so that they are easier to manage in my program.
I found this solution, but the problem is I don't know how to apply it to the CC section, and I am not sure that this method works in C++.
Here is what I have so far:
uint32_t midiData=0x9FCC00;
uint32_t status = 0x0FFFFF & midiData; // Retrieve 9
uint32_t channel = (0xF0FFFF & midiData)>>4; //Retrieve F
uint32_t note = (0xFF00FF & midiData) >> 8; //Retrieve CC
Is this correct for C++? The reason I ask is that I have never used C++ before, and its syntax of using > and < has always confused me (which is why I tend to avoid it).
You can use bit shift operator >> and bit masking operator & in C++ as well.
There are, however, some issues on how you use it:
Operator v1 & v2 gives a number built from those bits that are set in both v1 and v2, such that, for example, 0x12 & 0xF0 gives 0x10, not 0x02. Further, the bit shift operator takes the number of bits, and a single digit in a hex number (which is usually called a nibble) consists of 4 bits (0x0..0xF requires 4 bits). So, if you have 0x12 and want to get 0x01, you have to write 0x12 >> 4.
Hence, your shifts need to be adapted, too:
#define BITS_OF_A_NIBBLE 4
unsigned char status = (midiData & 0x00F00000) >> (5*BITS_OF_A_NIBBLE);
unsigned char channel = (midiData & 0x000F0000) >> (4*BITS_OF_A_NIBBLE);
unsigned char note = (midiData & 0x0000FF00) >> (2*BITS_OF_A_NIBBLE);
unsigned char theRest = (midiData & 0x000000FF);
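For the example value in the question, this decomposition gives the expected pieces (a small check of my own, assuming midiData = 0x9FCC00):
#include <cstdint>
#include <cstdio>
int main()
{
    uint32_t midiData = 0x9FCC00;
    unsigned char status  = (midiData & 0x00F00000) >> 20;
    unsigned char channel = (midiData & 0x000F0000) >> 16;
    unsigned char note    = (midiData & 0x0000FF00) >> 8;
    unsigned char theRest =  midiData & 0x000000FF;
    std::printf("%X %X %X %02X\n", (unsigned)status, (unsigned)channel,
                (unsigned)note, (unsigned)theRest); // prints 9 F CC 00
    return 0;
}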
You have it backwards, in a way.
In boolean logic (the & is a bitwise-AND), ANDing something with 0 will exclude it. Knowing that F in hex is 1111 in binary, a line like 0x9FCC00 & 0x0FFFFF will give you all the hex digits EXCEPT the 9, the opposite of what you want.
So, for status:
uint32_t status = 0xF00000 & midiData; // Retrieve 9
Actually, this will give you 0x900000. If you want 0x9 (also 9 in decimal), you need to bitshift the result over.
Now, the right bitshift operator (say, X >> 4) means move X 4 bits to the right; dividing by 16. That is 4 bits, not 4 hex digits. 1 hex digit == 4 bits, so to get 9 from 0x900000, you need 0x900000 >> 20.
So, to put them together, to get a status of 9:
uint32_t status = (0xF00000 & midiData) >> 20;
A similar process will get you the remaining values you want.
In general I'd recommend shift first, then mask - it's less error prone:
uint8_t cmd = (midiData >> 16) & 0xff;
uint8_t note = (midiData >> 8) & 0x7f; // MSB can't be set
uint8_t velocity = (midiData >> 0) & 0x7f; // ditto
and then split the cmd variable:
uint8_t status = (cmd & 0xf0); // range 0x00 .. 0xf0
uint8_t channel = (cmd & 0x0f); // range 0 .. 15
I personally wouldn't bother mapping the status value back into the range 0 .. 15 - it's commonly understood that e.g. 0x90 is a "note on", and not the plain value 9.
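A short check of my own for the cmd split recommended above, using the question's value:
#include <cstdint>
#include <cstdio>
int main()
{
    uint32_t midiData = 0x9FCC00;
    uint8_t cmd     = (midiData >> 16) & 0xff; // 0x9F
    uint8_t status  = cmd & 0xf0;              // 0x90, commonly read as "note on"
    uint8_t channel = cmd & 0x0f;              // 15
    std::printf("cmd=%X status=%X channel=%u\n",
                (unsigned)cmd, (unsigned)status, (unsigned)channel);
    return 0;
}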

How to store individual bits from a variable?

For example:
I got an input = 0x5A ( 0101 1010 ).
I want to store the first 4 bits or the last 4 bits.
unsigned char lower = input & 0xF;
unsigned char upper = (input >> 4) & 0xF;
Note that the last & 0xF is there in case your data type contains more than 8 bits.
Just use the & operator to apply a mask:
input = 0x5a & 0xf0;
This would yield 0b01010000. Depending on what you want, you could shift the selected bits to the right, like
input = (0x5a & 0xf0)>>4;
So, to get the lower half, you would use
input = 0x5a & 0x0f;
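Putting both halves together for the question's value (a small self-contained check of my own):
#include <cstdio>
int main()
{
    unsigned char input = 0x5A;                // 0101 1010
    unsigned char lower = input & 0x0F;        // 0000 1010 = 0xA
    unsigned char upper = (input >> 4) & 0x0F; // 0000 0101 = 0x5
    std::printf("upper=%X lower=%X\n", (unsigned)upper, (unsigned)lower); // upper=5 lower=A
    return 0;
}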

Unsigned integer into little endian form [duplicate]

Possible Duplicate:
convert big endian to little endian in C [without using provided func]
I'm having trouble with this one part: I want to take a 32-bit number and shift its bytes (1 byte = 8 bits) from big-endian to little-endian form. For example:
Lets say I have the number 1.
In 32 bits this is what it would look like:
1st byte 2nd byte 3rd byte 4th byte
00000000 00000000 00000000 00000001
I want it so that it looks like this:
4th byte 3rd byte 2nd byte 1st byte
00000001 00000000 00000000 00000000
so that the byte with the least significant value appears first. I was thinking you could use a for loop, but I'm not exactly sure how to shift bits/bytes in C++. For example, if a user entered 1 and I had to shift its bits like the above example, I'm not sure how I would convert 1 into bits, then shift. Could anyone point me in the right direction? Thanks!
<< and >> are the bitwise shift operators in C++ and most other C-style languages.
One way to do what you want is:
int value = 1;
uint32_t x = (uint32_t)value;
uint32_t valueShifted =
    ( x << 24)              | // Move 4th byte to 1st
    ((x << 8) & 0x00ff0000) | // Move 3rd byte to 2nd
    ((x >> 8) & 0x0000ff00) | // Move 2nd byte to 3rd
    ( x >> 24);               // Move 1st byte to 4th
#include <algorithm>
#include <cassert>
#include <cstdint>
uint32_t n = 0x00000001;
std::reverse( (char*)&n, (char*)(&n + 1) );
assert( n == 0x01000000 );
Shifting is done with the << and >> operators. Together with the bit-wise AND (&) and OR (|) operators you can do what you want:
unsigned value = 1;
unsigned shifted = value << 24 | (value & 0x0000ff00) << 8 | (value & 0x00ff0000) >> 8 | (value & 0xff000000) >> 24;
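A self-contained version of the same idea (my own sketch), wrapped in a function and checked against the question's example; if you can use C++23, std::byteswap from <bit> does this directly:
#include <cstdint>
#include <cstdio>
uint32_t swapBytes(uint32_t x)
{
    return ( x << 24)              |
           ((x << 8) & 0x00ff0000) |
           ((x >> 8) & 0x0000ff00) |
           ( x >> 24);
}
int main()
{
    std::printf("%08X\n", (unsigned)swapBytes(0x00000001u)); // prints 01000000
    return 0;
}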