I have a problem reading my binary file. When I read a binary file that contains strings, it reads perfectly. But when I try to read a file that looks something like this:
1830 3030 3030 3131 3031 3130 3000 0000
0000 0000 0000 0000 1830 3030 3030 3131
3030 3030 3100 0000 0000 0000 0000 0000
1830 3030 3030 3131 3030 3131 3000 0000
0000 0000 0000 0000 1830 3030 3030 3131
3031 3030 3000 0000 0000 0000 0000 0000
1830 3030 3030 3131 3031 3131 3100 0000
0000 0000 0000 0000 1830 3030 3030 3131
3130 3130 3100 0000 0000 0000 0000 0000 ... and so on
it reads just a portion of it. This is my code for reading the binary file and converting it to a string.
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

string toString(const char *c, int size);

int main(int argc, char* argv[])
{
    streampos size;
    char *memblock;

    ifstream file(argv[1], ios::in | ios::binary | ios::ate);
    size = file.tellg();
    memblock = new char[size];
    file.seekg(0, ios::beg);
    file.read(memblock, size);
    file.close();

    string input = toString(memblock, size);
    cout << input << endl; //this prints just a portion of it: 000001101100
    return 0;
}
string toString(const char *c, int size)
{
    string s;
    if (c[size - 1] == '\0')
    {
        s.append(c);
    }
    else
    {
        for (int i = 0; i < size; i++)
        {
            s.append(1, c[i]);
        }
    }
    return s;
}
But when I try to read a text file of 0s and 1s, it reads just fine.
I am pretty new to C++, and I'm not quite sure why that is.
Your problem is the embedded NUL bytes in your data. The line that you flagged:
cout << input << endl; //this prints just portion of it 000001101100
prints only a portion because input only contains a portion. The binary data you gave was:
1830 3030 3030 3131 3031 3130 3000 0000
Here is the ASCII for that first line:
<CAN> "000001101100" <NUL> <NUL> <NUL>
The leading <CAN> is 0x18, and <NUL> has the value 0. Because the last byte of your buffer is 0, toString takes the s.append(c) branch, and append(const char *) treats c as a NUL-terminated C string: it copies characters only until it reaches the first 0 byte, and your data is full of them.
To see all of the data, you need to print the hex values of the bytes instead of treating the buffer as text - a somewhat more involved process.
char char_ = '3';
unsigned int *custom_mem_address = (unsigned int *)&char_;
cout << char_ << endl;
cout << *custom_mem_address << endl;
Since custom_mem_address holds the address of the one-byte char '3', I expect it to point at the ASCII value of '3', which is 51.
But the output is the following.
3
1644042035
Depending on the byte alignment, at least one byte in 1644042035 should be 51, right? But it's not. Can you please explain where I am wrong?
1644042035 in binary is 0110 0001 1111 1110 0001 0111 0011 0011, and 51 is 0011 0011.
0110 0001 1111 1110 0001 0111 0011 0011
0000 0000 0000 0000 0000 0000 0011 0011
Isn't that what you are looking for?
I'm creating a D&D engine for fun, just to practice my C++ skills and learn some of the more in-depth topics. Currently, I am working on building a system to save and load characters. I have a Stats class that holds all of the statistics for a character, and a Character class that currently just has a name and a Stats* to that character's Stats object.
So far, I've been able to successfully save the data using boost text archive, and now switched to boost binary archive. It appears to work when saving the data, but when I try to load the data I get this error:
"Exception Unhandled - Unhandled exception at [memory address] in VileEngine.exe Microsoft C++ exception: boost::archive::archive_exception at memory location [different mem address]"
I can skip past this error multiple times, but when the program runs and loads, the data of the loaded character is way off, so I know it has to be either in the way I'm saving it or, more likely, in the way I'm loading it. I've tried reading through the Boost docs but couldn't find a way to fix it. I also tried searching through other posts but couldn't find an answer, or maybe I just don't understand the answers. Any help is greatly appreciated.
Relevant code posted below. I can post all the code if needed but it's quite a bit for all the classes.
in Character.hpp
private:
    friend class boost::serialization::access; // allows serialization saving

    // creates the template used by boost to serialize this class's data;
    // serialize is called whenever this class is being saved
    template<class Archive>
    void serialize(Archive& ar, const unsigned int version) {
        ar << name;
        ar << *charStats;
        ar << inventory;
    }

    /*********************************
     * Data Members
     *********************************/
    std::string name;
    Stats* charStats;
    std::vector<std::string> inventory;

public:
    Character();
    void loadCharacter(std::string &charName); // loads all character details
    void saveCharacter();                      // saves all character details
in Character.cpp
/*********************************************
Functions to save and load character details
**********************************************/
void Character::saveCharacter() {
    // save all details of the character to a charactername.dat file
    // create a filename of the format "CharacterName.dat"
    std::string fileName = name + ".dat";
    std::ofstream saveFile(fileName);

    // create a serialized archive and save this character's data
    boost::archive::binary_oarchive outputArchive(saveFile);
    outputArchive << this;
    saveFile.close();
}

void Character::loadCharacter(std::string &charName) {
    // load details of the .dat file into the character using the character's name
    std::string fileName = charName + ".dat";
    std::ifstream loadFile(fileName);

    boost::archive::binary_iarchive inputArchive(loadFile);
    inputArchive >> name;
    Stats* temp = new Stats;
    inputArchive >> temp;
    charStats = temp;
    inputArchive >> inventory;
    loadFile.close();
}
in Stats.hpp
private:
    friend class boost::serialization::access; // allows serialization saving

    // creates the template used by boost to serialize this class's data;
    // serialize is called whenever this class is being saved
    template<class Archive>
    void serialize(Archive& ar, const unsigned int version) {
        ar & skillSet;
        ar & subSkillMap;
        ar & level;
        ar & proficiencyBonus;
    }
When you save, you ONLY write this (by pointer, which is an error, see below):
boost::archive::binary_oarchive outputArchive(saveFile);
outputArchive << this;
When you load, you somehow read three separate things. Why? Saving and loading should obviously match, exactly. So:
void Character::saveCharacter() {
    std::ofstream saveFile(name + ".dat");
    boost::archive::binary_oarchive outputArchive(saveFile);
    outputArchive << *this;
}
You save *this (by reference) because you do not want deserialization to allocate a new instance of Character on the heap. If it did, you could not make loadCharacter a member function.
Regardless, your serialize function uses operator<< where it MUST use operator&, because otherwise it will only work for saving, not loading. Your compiler would have told you, so clearly your real code differs from what you posted.
See it live: Live On Coliru
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/access.hpp>
#include <boost/serialization/set.hpp>
#include <boost/serialization/map.hpp>
#include <boost/serialization/vector.hpp>
#include <boost/serialization/string.hpp>
#include <fstream>
struct Stats {
private:
    std::set<int> skillSet{1, 2, 3};
    std::map<int, std::string> subSkillMap{
        {1, "one"},
        {2, "two"},
        {3, "three"},
    };
    int level = 13;
    double proficiencyBonus = 0;

    friend class boost::serialization::access; // allows serialization saving
    template <class Archive> void serialize(Archive& ar, unsigned)
    {
        ar & skillSet;
        ar & subSkillMap;
        ar & level;
        ar & proficiencyBonus;
    }
};
struct Character {
private:
    friend class boost::serialization::access; // allows serialization saving
    template <class Archive>
    void serialize(Archive& ar, const unsigned int version)
    {
        ar & name;
        ar & *charStats;
        ar & inventory;
    }

    /*********************************
     * Data Members
     *********************************/
    std::string name;
    Stats* charStats = new Stats{};
    std::vector<std::string> inventory;

public:
    Character(std::string name = "unnamed") : name(std::move(name)) {}
    ~Character() { delete charStats; }

    // rule of three (suggest to use no raw pointers!)
    Character(Character const&) = delete;
    Character& operator=(Character const&) = delete;

    void loadCharacter(std::string const& charName);
    void saveCharacter();
};
/*********************************************
Functions to save and load character details
**********************************************/
void Character::saveCharacter() {
    std::ofstream saveFile(name + ".dat");
    boost::archive::binary_oarchive outputArchive(saveFile);
    outputArchive << *this;
}

void Character::loadCharacter(std::string const& charName) {
    std::ifstream loadFile(charName + ".dat");
    boost::archive::binary_iarchive inputArchive(loadFile);
    inputArchive >> *this;
    loadFile.close();
}

int main() {
    {
        Character charlie { "Charlie" }, bokimov { "Bokimov" };
        charlie.saveCharacter();
        bokimov.saveCharacter();
    }
    {
        Character someone, someone_else;
        someone.loadCharacter("Charlie");
        someone_else.loadCharacter("Bokimov");
    }
}
Saves two files and loads them back:
==== Bokimov.dat ====
00000000: 1600 0000 0000 0000 7365 7269 616c 697a ........serializ
00000010: 6174 696f 6e3a 3a61 7263 6869 7665 1300 ation::archive..
00000020: 0408 0408 0100 0000 0000 0000 0007 0000 ................
00000030: 0000 0000 0042 6f6b 696d 6f76 0000 0000 .....Bokimov....
00000040: 0003 0000 0000 0000 0000 0000 0001 0000 ................
00000050: 0002 0000 0003 0000 0000 0000 0000 0300 ................
00000060: 0000 0000 0000 0000 0000 0000 0000 0001 ................
00000070: 0000 0003 0000 0000 0000 006f 6e65 0200 ...........one..
00000080: 0000 0300 0000 0000 0000 7477 6f03 0000 ..........two...
00000090: 0005 0000 0000 0000 0074 6872 6565 0d00 .........three..
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
000000b0: 0000 0000 0000 0000 0000 00 ...........
==== Charlie.dat ====
00000000: 1600 0000 0000 0000 7365 7269 616c 697a ........serializ
00000010: 6174 696f 6e3a 3a61 7263 6869 7665 1300 ation::archive..
00000020: 0408 0408 0100 0000 0000 0000 0007 0000 ................
00000030: 0000 0000 0043 6861 726c 6965 0000 0000 .....Charlie....
00000040: 0003 0000 0000 0000 0000 0000 0001 0000 ................
00000050: 0002 0000 0003 0000 0000 0000 0000 0300 ................
00000060: 0000 0000 0000 0000 0000 0000 0000 0001 ................
00000070: 0000 0003 0000 0000 0000 006f 6e65 0200 ...........one..
00000080: 0000 0300 0000 0000 0000 7477 6f03 0000 ..........two...
00000090: 0005 0000 0000 0000 0074 6872 6565 0d00 .........three..
000000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
000000b0: 0000 0000 0000 0000 0000 00 ...........
I have a FORTRAN 77 binary file (created on a Sun SPARC machine, so big-endian). I want to read it on my little-endian machine. I have come across this:
http://paulbourke.net/dataformats/reading/
Paul has written these macros for C or C++, but I do not understand what they really do.
#define SWAP_2(x) ( (((x) & 0xff) << 8) | ((unsigned short)(x) >> 8) )
#define SWAP_4(x) ( ((x) << 24) | (((x) << 8) & 0x00ff0000) | \
(((x) >> 8) & 0x0000ff00) | ((x) >> 24) )
#define FIX_SHORT(x) (*(unsigned short *)&(x) = SWAP_2(*(unsigned short *)&(x)))
#define FIX_LONG(x) (*(unsigned *)&(x) = SWAP_4(*(unsigned *)&(x)))
#define FIX_FLOAT(x) FIX_LONG(x)
I know that every record of the file contains
x,y,z,t,d,i
where i is integer*2 and all other variables are real*4.
Hexdump of the first 512 bytes:
0000000 0000 1800 0000 0000 0000 0000 0000 0000
0000010 0000 0000 0000 0000 ffff ffff 0000 1800
0000020 0000 1800 003f 0000 0000 0000 233c 0ad7
0000030 0000 0000 233c 0ad7 0000 0100 0000 1800
0000040 0000 1800 803f 0000 0000 0000 233c 0ad7
0000050 0000 0000 233c 0ad7 0000 0100 0000 1800
0000060 0000 1800 c03f 0000 0000 0000 233c 0ad7
0000070 0000 0000 233c 0ad7 0000 0100 0000 1800
0000080 0000 1800 0040 0000 0000 0000 233c 0ad7
0000090 0000 0000 233c 0ad7 0000 0100 0000 1800
00000a0 0000 1800 2040 0000 0000 0000 233c 0ad7
00000b0 0000 0000 233c 0ad7 0000 0100 0000 1800
00000c0 0000 1800 4040 0000 0000 0000 233c 0ad7
00000d0 0000 0000 233c 0ad7 0000 0100 0000 1800
00000e0 0000 1800 6040 0000 0000 0000 233c 0ad7
00000f0 0000 0000 233c 0ad7 0000 0100 0000 1800
0000100 0000 1800 8040 0000 0000 0000 233c 0ad7
0000110 0000 0000 233c 0ad7 0000 0100 0000 1800
0000120 0000 1800 9040 0000 0000 0000 233c 0ad7
0000130 0000 0000 233c 0ad7 0000 0100 0000 1800
0000140 0000 1800 a040 0000 0000 0000 233c 0ad7
0000150 0000 0000 233c 0ad7 0000 0100 0000 1800
0000160 0000 1800 b040 0000 0000 0000 233c 0ad7
0000170 0000 0000 233c 0ad7 0000 0100 0000 1800
0000180 0000 1800 c040 0000 0000 0000 233c 0ad7
0000190 0000 0000 233c 0ad7 0000 0100 0000 1800
00001a0 0000 1800 d040 0000 0000 0000 233c 0ad7
00001b0 0000 0000 233c 0ad7 0000 0100 0000 1800
00001c0 0000 1800 e040 0000 0000 0000 233c 0ad7
00001d0 0000 0000 233c 0ad7 0000 0100 0000 1800
00001e0 0000 1800 f040 0000 0000 0000 233c 0ad7
00001f0 0000 0000 233c 0ad7 0000 0100 0000 1800
0000200
My code to read the file:
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
    FILE *file;
    char *buffer;
    char *rec;
    long fileLen;

    file = fopen("rec.in", "rb");
    fseek(file, 0, SEEK_END);
    fileLen = ftell(file);
    fseek(file, 0, SEEK_SET);
    buffer = (char *)malloc(fileLen + 1);
    fread(buffer, fileLen, 1, file);
    fclose(file);
    free(buffer);

    char *curr = buffer;
    char *end = buffer + fileLen;
    constexpr int LINE_SIZE = sizeof(float)*5 + sizeof(uint16_t); //based upon your "x,y,z,t,d,i" description
    while (curr < end) {
        uint32_t temp = be32toh(*reinterpret_cast<uint32_t*>(*curr));
        float x = *reinterpret_cast<float*>(&temp);
        temp = be32toh(*reinterpret_cast<uint32_t*>(*(curr+sizeof(float))));
        float y = *reinterpret_cast<float*>(&temp);
        temp = be32toh(*reinterpret_cast<uint32_t*>(*(curr+2*sizeof(float))));
        float z = *reinterpret_cast<float*>(&temp);
        temp = be32toh(*reinterpret_cast<uint32_t*>(*(curr+3*sizeof(float))));
        float t = *reinterpret_cast<float*>(&temp);
        temp = be32toh(*reinterpret_cast<uint32_t*>(*(curr+4*sizeof(float))));
        float d = *reinterpret_cast<float*>(&temp);
        uint16_t i = be16toh(*reinterpret_cast<uint16_t*>(*(curr+5*sizeof(float))));
        curr += LINE_SIZE;
    }
}
I got two errors
r.cc: In function ‘int main()’:
r.cc:29:1: error: ‘constexpr’ was not declared in this scope
constexpr int LINE_SIZE = sizeof(float)*5 + sizeof(uint16_t); //based upon your "x,y,z,t,d,i" description
^
r.cc:49:13: error: ‘LINE_SIZE’ was not declared in this scope
curr += LINE_SIZE;
If you're reading the file on a Linux machine, there are some library functions provided for this purpose in the endian.h header (documentation here). To convert a 16-bit integer to host order (little-endian in your case):
uint16_t hostInteger = be16toh(bigEndianIntegerFromFile);
For floats you need one extra step, because you cannot reinterpret_cast between float and uint32_t by value. Byte-swap the 32-bit pattern first, then copy the bytes into a float:
uint32_t temp = be32toh(bigEndianUint32FromFile);
float hostFloat;
std::memcpy(&hostFloat, &temp, sizeof hostFloat);
UPDATE: Given your code, you could read the file by inserting this between your fclose and free calls:
char *curr = buffer;
char *end = buffer + fileLen;
constexpr int LINE_SIZE = sizeof(float)*5 + sizeof(uint16_t); // based upon your "x,y,z,t,d,i" description
while (curr < end) {
    // note: cast the pointer curr itself; dereferencing it first would
    // cast a char value, not an address
    uint32_t temp = be32toh(*reinterpret_cast<uint32_t*>(curr));
    float x = *reinterpret_cast<float*>(&temp);
    temp = be32toh(*reinterpret_cast<uint32_t*>(curr + sizeof(float)));
    float y = *reinterpret_cast<float*>(&temp);
    temp = be32toh(*reinterpret_cast<uint32_t*>(curr + 2*sizeof(float)));
    float z = *reinterpret_cast<float*>(&temp);
    temp = be32toh(*reinterpret_cast<uint32_t*>(curr + 3*sizeof(float)));
    float t = *reinterpret_cast<float*>(&temp);
    temp = be32toh(*reinterpret_cast<uint32_t*>(curr + 4*sizeof(float)));
    float d = *reinterpret_cast<float*>(&temp);
    uint16_t i = be16toh(*reinterpret_cast<uint16_t*>(curr + 5*sizeof(float)));
    curr += LINE_SIZE;
    ...
    //do something with these values
    ...
}
I'm trying to figure out the purpose of this piece of code, from the Tiled utility's map format documentation.
const int gid = data[i] |
data[i + 1] << 8 |
data[i + 2] << 16 |
data[i + 3] << 24;
It looks like there is some "or-ing" and shifting of bits, but I have no clue what the aim of this is, in the context of using data from the tiled program.
Tiled stores its layer "Global Tile ID" (GID) data in an array of 32-bit integers, base64-encoded and (optionally) compressed in the XML file.
According to the documentation, these 32-bit integers are stored in little-endian format -- that is, the first byte of the integer contains the least significant byte of the number. As an analogy, in decimal, writing the number "1234" in little-endian would look like 4321 -- the 4 is the least significant digit in the number (representing a value of just 4), the 3 is the next-least-significant (representing 30), and so on. The only difference between this example and what Tiled is doing is that we're using decimal digits, while Tiled is using bytes, which are effectively digits that can each hold 256 different values instead of just 10.
If we think about the code in terms of decimal numbers, though, it's actually pretty easy to understand what it's doing. It's basically reconstructing the integer value from the digits by doing just this:
int digit[4] = { 4, 3, 2, 1 }; // our decimal digits in little-endian order
int gid = digit[0] +
          digit[1] * 10 +
          digit[2] * 100 +
          digit[3] * 1000;
It's just moving each digit into position to create the full integer value. (In binary, bit shifting by multiples of 8 is like multiplying by powers of 10 in decimal; it moves a value into the next 'significant digit' slot)
More information on big-endian and little-endian, and why the difference matters, can be found in On Holy Wars And A Plea For Peace, an important (and entertainingly written) document from 1980 in which Danny Cohen argued for the need to standardise on a single byte ordering for network protocols. (Spoiler: big-endian eventually won that fight, and the big-endian representation has been the standard way to represent integers in files and network transmissions for decades. Tiled's use of little-endian integers in its file format is somewhat unusual, and it results in needing code like the code you quoted to reliably convert the little-endian integers in the data file into the computer's native format. Had they stored their data in the standard big-endian format, you could simply have called ntohl() — every OS provides standard utility functions for converting between big-endian and native — instead of writing and comprehending this sort of byte-manipulation code manually.)
As you noted, the << operator shifts bits to the left by the given number.
This block takes the data[] array, which has four (presumably one byte) elements, and "encodes" those four values into one integer.
Example Time!
data[0] = 0x3A; // 0x3A = 58 = 0011 1010 in binary
data[1] = 0x48; // 0x48 = 72 = 0100 1000 in binary
data[2] = 0xD2; // 0xD2 = 210 = 1101 0010 in binary
data[3] = 0x08; // 0x08 = 8 = 0000 1000 in binary
int tmp0 = data[0]; // 00 00 00 3A = 0000 0000 0000 0000 0000 0000 0011 1010
int tmp1 = data[1] << 8; // 00 00 48 00 = 0000 0000 0000 0000 0100 1000 0000 0000
int tmp2 = data[2] << 16; // 00 D2 00 00 = 0000 0000 1101 0010 0000 0000 0000 0000
int tmp3 = data[3] << 24; // 08 00 00 00 = 0000 1000 0000 0000 0000 0000 0000 0000
// "or-ing" these together will set each bit to 1 if any of the bits are 1
int gid = tmp0 | // 00 00 00 3A = 0000 0000 0000 0000 0000 0000 0011 1010
          tmp1 | // 00 00 48 00 = 0000 0000 0000 0000 0100 1000 0000 0000
          tmp2 | // 00 D2 00 00 = 0000 0000 1101 0010 0000 0000 0000 0000
          tmp3;  // 08 00 00 00 = 0000 1000 0000 0000 0000 0000 0000 0000
gid == 147998778;// 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
Now, you've just encoded four one-byte values into a single four-byte integer.
If you're (rightfully) wondering why anyone would go through all that effort when you could just use a byte type and store the four single-byte pieces of data directly in four bytes, then you should check out this question:
int, short, byte performance in back-to-back for-loops
Bonus Example!
To get your encoded values back, we use the "and" operator along with the right-shift >>:
int gid = 147998778; // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
// "and-ing" will set each bit to 1 if BOTH bits are 1
int tmp0  = gid &        // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
            0x000000FF;  // 00 00 00 FF = 0000 0000 0000 0000 0000 0000 1111 1111
int data0 = tmp0;        // 00 00 00 3A = 0000 0000 0000 0000 0000 0000 0011 1010

int tmp1  = gid &        // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
            0x0000FF00;  // 00 00 FF 00 = 0000 0000 0000 0000 1111 1111 0000 0000
                         // tmp1 is now 00 00 48 00 = 0000 0000 0000 0000 0100 1000 0000 0000
int data1 = tmp1 >> 8;   // 00 00 00 48 = 0000 0000 0000 0000 0000 0000 0100 1000

int tmp2  = gid &        // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
            0x00FF0000;  // 00 FF 00 00 = 0000 0000 1111 1111 0000 0000 0000 0000
                         // tmp2 is now 00 D2 00 00 = 0000 0000 1101 0010 0000 0000 0000 0000
int data2 = tmp2 >> 16;  // 00 00 00 D2 = 0000 0000 0000 0000 0000 0000 1101 0010

int tmp3  = gid &        // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
            0xFF000000;  // FF 00 00 00 = 1111 1111 0000 0000 0000 0000 0000 0000
                         // tmp3 is now 08 00 00 00 = 0000 1000 0000 0000 0000 0000 0000 0000
int data3 = tmp3 >> 24;  // 00 00 00 08 = 0000 0000 0000 0000 0000 0000 0000 1000
The last "and-ing" for tmp3 isn't actually needed, since the bits that "fall off" the right end when shifting are simply lost and the bits shifted in from the left are zero. So:
gid; // 08 D2 48 3A = 0000 1000 1101 0010 0100 1000 0011 1010
int data3 = gid >> 24; // 00 00 00 08 = 0000 0000 0000 0000 0000 0000 0000 1000
but I wanted to provide a complete example.
I am quite new to bit masking and bit operations. Could you please help me understand this? I have three integers a, b, and c, and I have created a new number d with the operations below:
int a = 1;
int b = 2;
int c = 92;
int d = (a << 14) + (b << 11) + c;
How do we reconstruct a, b and c using d?
I have no idea of the range of your a, b, and c. However, assuming 3 bits for a and b, and 11 bits for c, we can do:
a = ( d >> 14 ) & 7;
b = ( d >> 11 ) & 7;
c = ( d >> 0 ) & 2047;
Update:
The value of the and-mask is computed as (2^NumberOfBits) - 1.
a is 0000 0000 0000 0000 0000 0000 0000 0001
b is 0000 0000 0000 0000 0000 0000 0000 0010
c is 0000 0000 0000 0000 0000 0000 0101 1100
a<<14 is 0000 0000 0000 0000 0100 0000 0000 0000
b<<11 is 0000 0000 0000 0000 0001 0000 0000 0000
c is 0000 0000 0000 0000 0000 0000 0101 1100
d is 0000 0000 0000 0000 0101 0000 0101 1100
^ ^ { }
a b c
So:
a = d >> 14
b = (d >> 11) & 7
c = (d >> 0) & 2047
By the way, you should make sure that b <= 7 and c <= 2047; otherwise the addition carries into the neighbouring field and the values cannot be recovered.