void Map::LoadMap(std::string path, int sizeX, int sizeY) {
    char c;
    std::fstream mapFile;
    mapFile.open(path);
    int srcX, srcY;
    for (int y = 0; y < sizeY; y++) {
        for (int x = 0; x < sizeX; x++) {
            mapFile.get(c);
            srcY = atoi(&c) * 32;
            mapFile.get(c);
            srcX = atoi(&c) * 32;
            ks::Game::AddTile(srcY, srcX, x * 32, y * 32);
            std::cout << "X: " << srcX << " Y:" << srcY << std::endl;
            mapFile.ignore();
        }
    }
    mapFile.close();
}
I won't post my whole map file, but the layout is like so:
00,01,02,03,44,00,00,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44,44
I am just curious: my system is a Mac. I did a similar program on Windows and it read the file character by character, but in Xcode it is reading the file whole number by whole number rather than character by character. Instead of grabbing the first digit (for example 4) and multiplying it by 32, it grabs 44 and multiplies that by 32.
I simply want the first digit to be used as a Y coordinate, the second to be used as an X coordinate, and the "," to be skipped.
Apologies in advance if there is something I am overlooking, but two minds are better than one, and any answer would be very much appreciated.
Your code invokes undefined behaviour: atoi expects a null-terminated string, and &c is clearly not one; it's a pointer to a single character.
Why do you need atoi in the first place? It's a C function for converting whole strings, like atoi("532532").
If you want to convert a single digit character to an int, you can just do it like this:
const int number = c - '0';
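For illustration, here is a minimal sketch of how the loop from your LoadMap might look with that change, reusing the names from your code (ks::Game::AddTile and the two-digits-plus-comma layout of your map file). Treat it as a sketch, not tested code:

void Map::LoadMap(std::string path, int sizeX, int sizeY) {
    char c;
    std::fstream mapFile;
    mapFile.open(path);
    for (int y = 0; y < sizeY; y++) {
        for (int x = 0; x < sizeX; x++) {
            mapFile.get(c);
            int srcY = (c - '0') * 32;   // first digit -> source Y
            mapFile.get(c);
            int srcX = (c - '0') * 32;   // second digit -> source X
            ks::Game::AddTile(srcY, srcX, x * 32, y * 32);
            mapFile.ignore();            // skip the ',' (or the newline at a row's end)
        }
    }
    mapFile.close();
}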
I'm trying to write one function that can deinterleave 8/16/24/32 bit audio data, given that the audio data naturally arrives in an 8 bit buffer.
I have this working for 8 bit, and it works for 16/24/32, but only for the first channel (channel 0). I have tried so many + and * and other operators that I'm just guessing at this point. I cannot find the magic formula. I am using C++ but would also accept a memcpy into the vector if that's easiest.
Check out the code. If you change the demux call to another bitrate you will see the problem. There is an easy math solution here I am sure, I just cannot get it.
#include <cstdint>   // for uint8_t
#include <vector>
#include <map>
#include <iostream>
#include <iomanip>
#include <string>
#include <string.h>

const int bitrate = 8;
const int channel_count = 5;
const int audio_size = bitrate * channel_count * 4;
uint8_t audio_ptr[audio_size];
const int bytes_per_channel = audio_size / channel_count;

void Demux(int bitrate){
    int byterate = bitrate/8;
    std::map<int, std::vector<uint8_t> > channel_audio;
    for(int i = 0; i < channel_count; i++){
        std::vector<uint8_t> audio;
        audio.reserve(bytes_per_channel);
        for(int x = 0; x < bytes_per_channel; x += byterate){
            for(int z = 0; z < byterate; z++){
                // What is the magic formula!
                audio.push_back(audio_ptr[(x * channel_count) + i + z]);
            }
        }
        channel_audio.insert(std::make_pair(i, audio));
    }

    int remapsize = 0;
    std::cout << "\nRemapped Audio";
    std::map<int, std::vector<uint8_t> >::iterator it;
    for(it = channel_audio.begin(); it != channel_audio.end(); ++it){
        std::cout << "\nChannel" << it->first << " ";
        std::vector<uint8_t> v = it->second;
        remapsize += v.size();
        for(size_t i = 0; i < v.size(); i++){
            std::cout << "0x" << std::hex << std::setfill('0') << std::setw(2) << +v[i] << " ";
            if(i && (i + 1) % 32 == 0){
                std::cout << std::endl;
            }
        }
    }
    std::cout << "Total remapped audio size is " << std::dec << remapsize << std::endl;
}

int main()
{
    // External data
    std::cout << "Raw Audio\n";
    for(int i = 0; i < audio_size; i++){
        audio_ptr[i] = i;
        std::cout << "0x" << std::hex << std::setfill('0') << std::setw(2) << +audio_ptr[i] << " ";
        if(i && (i + 1) % 32 == 0){
            std::cout << std::endl;
        }
    }
    std::cout << "Total raw audio size is " << std::dec << audio_size << std::endl;

    Demux(8);
    //Demux(16);
    //Demux(24);
    //Demux(32);
}
You're actually pretty close. But the code is confusing: specifically the variable names and what actual values they represent. As a result, you appear to be just guessing the math. So let's go back to square one and determine what exactly it is we need to do, and the math will very easily fall out of it.
First, just imagine we have one sample covering each of the five channels. This is called an audio frame for that sample. The frame looks like this:
[channel0][channel1][channel2][channel3][channel4]
The width of a sample in one channel is called byterate in your code, but I don't like that name. I'm going to call it bytes_per_sample instead. You can easily see the width of the entire frame is this:
int bytes_per_frame = bytes_per_sample * channel_count;
It should be equally obvious that to find the starting offset for channel c within a single frame, you multiply as follows:
int sample_offset_in_frame = bytes_per_sample * c;
That's just about all you need! The last bit is your z loop which covers each byte in a single sample for one channel. I don't know what z is supposed to represent, apart from being a random single-letter identifier you chose, but hey let's just keep it.
Putting all this together, you get the absolute offset of sample s in channel c and then you copy individual bytes out of it:
int sample_offset = bytes_per_frame * s + bytes_per_sample * c;
for (int z = 0; z < bytes_per_sample; ++z) {
    audio.push_back(audio_ptr[sample_offset + z]);
}
This does actually assume you're looping over the number of samples, not the number of bytes in your channel. So let's show all the loops for completeness' sake:
const int bytes_per_sample = bitrate / 8;
const int bytes_per_frame = bytes_per_sample * channel_count;
const int num_samples = audio_size / bytes_per_frame;

for (int c = 0; c < channel_count; ++c)
{
    int sample_offset = bytes_per_sample * c;
    for (int s = 0; s < num_samples; ++s)
    {
        for (int z = 0; z < bytes_per_sample; ++z)
        {
            audio.push_back(audio_ptr[sample_offset + z]);
        }

        // Skip to next frame
        sample_offset += bytes_per_frame;
    }
}
You'll see here that I split the math up so that it does fewer multiplications inside the loops. This is mostly for readability, but it might also help the compiler when it tries to optimize. Concerns over optimization are secondary (and in your case, there are much more expensive things going on with those vectors and the map).
The most important thing is you have readable code with reasonable variable names that makes logical sense.
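For completeness, here is one way the whole Demux could look with those loops dropped in, reusing the globals from the question (audio_ptr, audio_size, channel_count) and keeping the map of vectors even though, as noted, it is not the cheapest choice. Consider it a sketch that plugs into your program rather than a drop-in replacement:

void Demux(int bitrate) {
    // "bitrate" here is really bits per sample, as in the original code.
    const int bytes_per_sample = bitrate / 8;
    const int bytes_per_frame  = bytes_per_sample * channel_count;
    const int num_samples      = audio_size / bytes_per_frame;

    std::map<int, std::vector<uint8_t> > channel_audio;
    for (int c = 0; c < channel_count; ++c) {
        std::vector<uint8_t> audio;
        audio.reserve(num_samples * bytes_per_sample);

        int sample_offset = bytes_per_sample * c;     // channel c inside frame 0
        for (int s = 0; s < num_samples; ++s) {
            for (int z = 0; z < bytes_per_sample; ++z) {
                audio.push_back(audio_ptr[sample_offset + z]);
            }
            sample_offset += bytes_per_frame;         // same channel, next frame
        }
        channel_audio.insert(std::make_pair(c, audio));
    }
    // ... print channel_audio exactly as in the original ...
}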
I'm trying to read from a binary file into a char array. When printing array entries, an arbitrary number, a newline, and then the desired number are printed. I really can't get my head around this.
The first few bytes of the file are:
00 00 08 03 00 00 EA 60 00 00 00 1C 00 00 00 1C 00 00
My Code:
void MNISTreader::loadImagesAndLabelsToMemory(std::string imagesPath,
                                              std::string labelsPath) {
    std::ifstream is(imagesPath.c_str());
    char *data = new char[12];
    is.read(data, 12);
    std::cout << std::hex << (int)data[2] << std::endl;
    delete [] data;
    is.close();
}
E.g. it prints:
ffffff9b
8
8 is correct. The preceding number changes from execution to execution. And where does this newline come from?
You asked about reading data from a binary file and saving it into a char[], and you showed us this code that you submitted with your question:
void MNISTreader::loadImagesAndLabelsToMemory(std::string imagesPath,
                                              std::string labelsPath) {
    std::ifstream is(imagesPath.c_str());
    char *data = new char[12];
    is.read(data, 12);
    std::cout << std::hex << (int)data[2] << std::endl;
    delete [] data;
    is.close();
}
And you wanted to know:
The preceding number changes from execution to execution. And where does this newline come from?
Before you can actually answer that question, you need to know the binary file, that is, how the file is structured internally. When you read data from a binary file, you have to remember that some program wrote that data in a structured format, and it is this format, unique to each family or type of binary file, that matters. Most binaries follow a common pattern: a header, then perhaps sub-headers, then clusters, packets, chunks, or raw data after the header, while some binaries are purely raw data. You have to know how the file is structured in memory.
What is the structure of the data?
Is the data type of the first entry in the file a char (1 byte), an int (typically 4 bytes, though sizes vary by platform and data model), a float (4 bytes), a double (8 bytes), etc.?
According to your code you have an array of char with a size of 12, and knowing that a char is 1 byte in memory, you are asking for 12 bytes. Now the problem here is that you are pulling off 12 consecutive individual bytes in a row, and without knowing the file structure, how can you determine whether the first byte written was actually a char, an unsigned char, or part of an int?
Consider these two different binary file structures, created from C++ structs that contain all the needed data; both are written out to a file in binary format.
A generic header structure that both file structures will use:
struct Header {
    // Size of Header
    std::string filepath;
    std::string filename;
    unsigned int pathSize;
    unsigned int filenameSize;
    unsigned int headerSize;
    unsigned int dataSizeInBytes;
};
The unique structure for FileA:
struct DataA {
    float width;
    float length;
    float height;
    float dummy;
};
The unique structure for FileB:
struct DataB {
    double length;
    double width;
};
A file like this would, in general, look something like this in memory:
The first bytes are the path and file name along with their stored sizes. This can vary from file to file depending on how many characters are used for the file path and file name.
After the strings we know that the next 4 fields are unsigned ints, typically 4 bytes each, so 4 bytes x 4 = 16 bytes total (the exact size depends on the platform's data model). If we know the target architecture, we can get past this part easily enough.
Of these 4 unsigned values, the first two hold the lengths of the path and the filename. (These could just as well be stored in the file before the actual path strings; the order could be reversed.)
It is the next 2 unsigned values that are of importance:
The first of them is the full size of the header and can be used to read in or skip over the header.
The second tells you the size of the data to be pulled in. In practice the data could come in chunks, with a count of how many chunks, because it could be a series of the same data structures; but for simplicity I left out chunks and counts and used a single-instance structure.
It is here that we can then extract the data, knowing how many bytes to read.
Let's consider the two different binary files where we are already past all the header information and are reading in the bytes to parse. We get to the size of the data in bytes: for FileA we have 4 floats = 16 bytes and for FileB we have 2 doubles = 16 bytes. So now we know how much data to read and what type it should be interpreted as.
Now let's say we were reading in either one of these two files but didn't know the data structure and how its information was previously stored to the file. We can see from the header that the data is 16 bytes, but we wouldn't know whether it was stored as 4 floats = 16 bytes or 2 doubles = 16 bytes. Both structures are 16 bytes, but they contain a different number of different data types.
The summation of this is that without knowing the file's data structure, knowing how to parse the binary becomes an X/Y problem.
Now let's assume that you do know the file structure. To try to answer your question from above, you can try this little program and check out some results:
#include <string>
#include <iostream>

int main() {
    // Using Two Strings
    std::string imagesPath("ImagesPath\\");
    std::string labelsPath("LabelsPath\\");

    // Concat Of Two Strings
    std::string full = imagesPath + labelsPath;

    // Display Of Both
    std::cout << full << std::endl;

    // Data Type Pointers
    char* cData = nullptr;
    cData = new char[12];
    unsigned char* ucData = nullptr;
    ucData = new unsigned char[12];

    // Loop To Set Both Pointers To The String
    unsigned n = 0;
    for (; n < 12; ++n) {
        cData[n] = full.at(n);
        ucData[n] = full.at(n);
    }

    // Display Of Both Strings By Character and Unsigned Character
    n = 0;
    for (; n < 12; ++n) {
        std::cout << cData[n];
    }
    std::cout << std::endl;

    n = 0;
    for (; n < 12; ++n) {
        std::cout << ucData[n];
    }
    std::cout << std::endl;

    // Both Yield The Same Result.
    // Okay, let's clear out the memory of these pointers and then reuse them.
    delete[] cData;
    delete[] ucData;
    cData = nullptr;
    ucData = nullptr;

    // Create Two Data Structures, 1 For Each Different File
    struct A {
        float length;
        float width;
        float height;
        float padding;
    };

    struct B {
        double length;
        double width;
    };

    // Constants For Our Data Structure Sizes
    const unsigned sizeOfA = sizeof(A);
    const unsigned sizeOfB = sizeof(B);

    // Create And Populate An Instance Of Each
    A a;
    a.length = 3.0f;
    a.width = 3.0f;
    a.height = 3.0f;
    a.padding = 0.0f;

    B b;
    b.length = 5.0;
    b.width = 5.0;

    // Let's first use the `char[]` method for each struct and print them,
    // but we need 16 bytes instead of the `12` from your problem.
    char *aData = nullptr; // FileA
    char *bData = nullptr; // FileB
    aData = new char[16];
    bData = new char[16];

    // Since A has 4 floats we know that each float is 4 and 16 / 4 = 4
    aData[0] = a.length;
    aData[4] = a.width;
    aData[8] = a.height;
    aData[12] = a.padding;

    // Print out the result, but by individual bytes without casting, for A.
    // Don't worry about the compiler warnings; build and run with the
    // warnings and compare the differences in what is shown on the screen
    // between A & B.
    n = 0;
    for (; n < 16; ++n) {
        std::cout << aData[n] << " ";
    }
    std::cout << std::endl;

    // Since B has 2 doubles we know that each double is 8 and 16 / 8 = 2
    bData[0] = b.length;
    bData[8] = b.width;

    // Print out the result, but by individual bytes without casting, for B.
    n = 0;
    for (; n < 16; ++n) {
        std::cout << bData[n] << " ";
    }
    std::cout << std::endl;

    // Let's print out both again, but by casting to their appropriate types.
    n = 0;
    for (; n < 4; ++n) {
        std::cout << reinterpret_cast<float*>(aData[n]) << " ";
    }
    std::cout << std::endl;

    n = 0;
    for (; n < 2; ++n) {
        std::cout << reinterpret_cast<double*>(bData[n]) << " ";
    }
    std::cout << std::endl;

    // Clean Up Memory
    delete[] aData;
    delete[] bData;
    aData = nullptr;
    bData = nullptr;

    // Even by knowing the appropriate sizes we can see a difference
    // in the stored data types. We can now do the same as above
    // but with unsigned char & see if it makes a difference.
    unsigned char *ucAData = nullptr;
    unsigned char *ucBData = nullptr;
    ucAData = new unsigned char[16];
    ucBData = new unsigned char[16];

    // Since A has 4 floats we know that each float is 4 and 16 / 4 = 4
    ucAData[0] = a.length;
    ucAData[4] = a.width;
    ucAData[8] = a.height;
    ucAData[12] = a.padding;

    // Print out the result, but by individual bytes without casting, for A.
    // Don't worry about the compiler warnings; build and run with the
    // warnings and compare the differences in what is shown on the screen
    // between A & B.
    n = 0;
    for (; n < 16; ++n) {
        std::cout << ucAData[n] << " ";
    }
    std::cout << std::endl;

    // Since B has 2 doubles we know that each double is 8 and 16 / 8 = 2
    ucBData[0] = b.length;
    ucBData[8] = b.width;

    // Print out the result, but by individual bytes without casting, for B.
    n = 0;
    for (; n < 16; ++n) {
        std::cout << ucBData[n] << " ";
    }
    std::cout << std::endl;

    // Let's print out both again, but by casting to their appropriate types.
    n = 0;
    for (; n < 4; ++n) {
        std::cout << reinterpret_cast<float*>(ucAData[n]) << " ";
    }
    std::cout << std::endl;

    n = 0;
    for (; n < 2; ++n) {
        std::cout << reinterpret_cast<double*>(ucBData[n]) << " ";
    }
    std::cout << std::endl;

    // Clean Up Memory
    delete[] ucAData;
    delete[] ucBData;
    ucAData = nullptr;
    ucBData = nullptr;

    // So even changing from `char` to `unsigned char` doesn't help here, even
    // with reinterpret casting, because these 2 files are different from one another.
    // They have a unique signature. A family of files where a specific application
    // saves its data to a binary will all follow the same structure. Without knowing
    // the structure of the binary file, you don't know how much data to pull in, and the big
    // keyword here is `what type` of data you are reading and how much of it. This becomes an X/Y problem.
    // This is the hard part about parsing binaries: you need to know the file structure.
    char c = ' ';
    std::cin.get(c);
    return 0;
}
After running the short program above, don't worry about what each value displayed on the screen is; just look at the patterns, comparing the two different file structures. This is just to show that a struct of floats that is 16 bytes wide is not the same as a struct of doubles that is also 16 bytes wide. So when we go back to your problem, where you are reading in 12 individual consecutive bytes, the question becomes: what do these first 12 bytes represent? Are they 3 ints, 3 unsigned ints, 3 floats, or a combination such as 1 double and 1 float? What is the actual data structure of the binary file you are reading in?
Edit: In my little program I forgot to add << std::hex << to the print-out statements. It could be added wherever the indexed pointers are printed, but there is no real need: the output would express the same thing, namely the difference between the two data structures in memory and what their patterns look like.
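As a side note, and only if the file you are reading really is a standard MNIST/IDX image file (the bytes 00 00 08 03 00 00 EA 60 00 00 00 1C 00 00 00 1C you posted look like that format's header: magic number, image count, rows, columns), then "knowing the structure" means reading the header as big-endian 32-bit unsigned integers rather than as individual chars. A minimal sketch under that assumption (the file name is hypothetical):

#include <cstdint>
#include <fstream>
#include <iostream>

// Reads one big-endian 32-bit unsigned integer from the stream.
// Using unsigned char avoids the sign extension that produced ffffff9b in the question.
static uint32_t readBigEndianU32(std::istream& is) {
    unsigned char bytes[4];
    is.read(reinterpret_cast<char*>(bytes), 4);
    return (uint32_t(bytes[0]) << 24) | (uint32_t(bytes[1]) << 16) |
           (uint32_t(bytes[2]) << 8)  |  uint32_t(bytes[3]);
}

int main() {
    // Hypothetical path; open in binary mode so nothing gets translated.
    std::ifstream is("train-images-idx3-ubyte", std::ios::binary);
    const uint32_t magic = readBigEndianU32(is);  // 0x00000803 for IDX image files
    const uint32_t count = readBigEndianU32(is);  // number of images
    const uint32_t rows  = readBigEndianU32(is);  // rows per image
    const uint32_t cols  = readBigEndianU32(is);  // columns per image
    std::cout << std::hex << magic << std::dec << " "
              << count << " " << rows << " " << cols << std::endl;
}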
I have a string array which has 8 fields. With 8 bits per field this gives me 64 bits of memory in a single string of this type. I want to create a rotate function for this string array. For example, for the string 20 (in hex), RotateLeft(string, 1) gives me 40, like a rotate. The maximum rotate value is 64, in which case the function must return the original string (RotateLeft(string, 64) == string). I need rotate left and right. I tried to create something like this:
std::string RotateLeft(std::string Message, unsigned int Value){
    std::string Output;
    unsigned int MessageLength = Message.length(), Bit;
    int FirstPointer, SecondPointer;
    unsigned char Char;
    for (int a = 0; a < MessageLength; a++){
        FirstPointer = a - ceil(Value / 8.);
        if (FirstPointer < 0){
            FirstPointer += MessageLength;
        }
        SecondPointer = (FirstPointer + 1) % MessageLength;
        Bit = Value % 8;
        Char = (Message[FirstPointer] << Bit) | (Message[SecondPointer] & (unsigned int)(pow(2, Bit) - 1));
        Output += Char;
    }
    return Output;
}
It works for the value 64, but not for other values. For example, for a hex string (the function gets the string elements as decimal values, but hex is easier to read): when I send the value 243F6A8885A308D3 and execute RotateLeft(string, 1), I receive A6497ED4110B4611. When I check this in the Windows calculator, it is not a valid value. Can anyone help me and show me where my mistake is?
I am not sure if I correctly understand what you want to do, but somehow it looks to me like you are doing something rather simple in a complicated way. When shifting numbers, I would not put them in a string. However, once you have it as a string, you could do this:
#include <sstream>
#include <string>

std::string rotate(std::string in, int rot){
    long long int number;
    std::stringstream instream(in);
    instream >> number;
    for (int i = 0; i < rot; i++){ number *= 2; }
    std::stringstream outstream;
    outstream << number;
    return outstream.str();
}
...with a small modification to also allow negative shifts.
You have a hex value in a string, and you want to rotate it as if it were actually a number. You could just change it to an actual number, then back into a string:
// Some example variables.
uint64_t x, shift = 2;
string in = "fffefffe", out;
// Get the string as a number
std::stringstream ss;
ss << std::hex << in;
ss >> x;
// Shift the number
x = x << shift;
// Convert the number back into a hex string
std::ostringstream ss2;
ss2 << std::hex << x;
// Get your output.
out = ss2.str();
Here is a live example.
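If what you need is an actual rotate rather than a shift, the same parse-then-format idea works; you just combine two shifts on the 64-bit value. A rough sketch, assuming the string always represents a 64-bit number (RotateLeftHex is a made-up name for illustration):

#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>

// Rotate-left of a 64-bit value held as a hex string; the amount is taken modulo 64.
std::string RotateLeftHex(const std::string& in, unsigned rot) {
    uint64_t x = 0;
    std::stringstream ss;
    ss << std::hex << in;
    ss >> x;

    rot %= 64;
    if (rot != 0) {
        x = (x << rot) | (x >> (64 - rot));   // the usual rotate idiom, avoiding a shift by 64
    }

    std::ostringstream out;
    out << std::hex << std::setw(16) << std::setfill('0') << x;
    return out.str();
}

Rotate-right is the same with the two shifts swapped.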
I want to convert an integer (whose maximum value can reach 99999999) into BCD and store it into an array of 4 characters.
For example:
Input is: 12345 (integer)
Output should be: "00012345" in BCD, stored into an array of 4 characters.
Here 0x00 0x01 0x23 0x45 is stored in BCD format.
I tried it in the manner below but it didn't work:
int decNum = 12345;
long aux;
aux = (long)decNum;
cout << " aux = " << aux << endl;
char* str = (char*)& aux;
char output[4];
int len = 0;
int i = 3;
while (len < 8)
{
    cout << "str: " << len << " " << (int)str[len] << endl;
    unsigned char temp = str[len] % 10;
    len++;
    cout << "str: " << len << " " << (int)str[len] << endl;
    output[i] = ((str[len]) << 4) | temp;
    i--;
    len++;
}
Any help will be appreciated.
str actually points to a long (probably 4 bytes), but the iteration accesses 8 bytes.
The operation str[len] % 10 looks as if you are expecting decimal digits, but there is only binary data there. In addition, I suspect that i gets negative.
First, don't use C-style casts (like (long)a or (char*)). They are a bad smell. Instead, learn and use C++-style casts (like static_cast<long>(a)), because they point out where you are doing things that are dangerous, instead of silently working and causing undefined behavior.
char* str = (char*)&aux; gives you a pointer to the bytes of aux; it is effectively char* str = reinterpret_cast<char*>(&aux);. It does not give you a traditional string with digits in it. sizeof(char) is 1 and sizeof(long) is almost certainly 4, so there are only 4 valid bytes in your aux variable, yet you proceed to try to read 8 of them.
I doubt this is doing what you want it to do. If you want to print out a number into a string, you will have to run actual code, not just reinterpret bits in memory.
std::string s; std::stringstream ss; ss << aux; ss >> s; will create a std::string with the base-10 digits of aux in it.
Then you can look at the characters in s to build your BCD.
This is far from the fastest method, but it at least is close to your original approach.
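A possible sketch of that approach (toBcd4 is a made-up name; it assumes a non-negative value of at most 8 decimal digits, as in your question):

#include <sstream>
#include <string>

// Packs a number such as 12345 into 4 BCD bytes: 0x00 0x01 0x23 0x45.
void toBcd4(long value, unsigned char output[4]) {
    std::stringstream ss;
    ss << value;
    std::string s = ss.str();
    s.insert(0, 8 - s.size(), '0');          // left-pad to 8 decimal digits
    for (int i = 0; i < 4; ++i) {
        output[i] = ((s[2 * i] - '0') << 4)  // high nibble
                  | (s[2 * i + 1] - '0');    // low nibble
    }
}

Calling toBcd4(12345, output) leaves 0x00 0x01 0x23 0x45 in output.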
First of all, sorry about the C code; I was misled since this started as a C question. Porting it to C++ should not really be such a big deal.
If you really want it in a char array, I would do something like the following code. I find it useful to leave the result in little-endian format so I can just cast it to an int for printing; however, that is not strictly necessary:
#include <stdio.h>

typedef struct
{
    char value[4];
} BCD_Number;

BCD_Number bin2bcd(int bin_number);

int main(int args, char **argv)
{
    BCD_Number bcd_result;
    bcd_result = bin2bcd(12345678);
    /* Assuming an int is 4 bytes */
    printf("result=0x%08x\n", *((int *)bcd_result.value));
}

BCD_Number bin2bcd(int bin_number)
{
    BCD_Number bcd_number;
    for(int i = 0; i < sizeof(bcd_number.value); i++)
    {
        bcd_number.value[i] = bin_number % 10;
        bin_number /= 10;
        bcd_number.value[i] |= bin_number % 10 << 4;
        bin_number /= 10;
    }
    return bcd_number;
}
I have an int that I want to store as a binary string representation. How can this be done?
Try this:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<32> x(23456);
    std::cout << x << "\n";

    // If you don't want a variable, just create a temporary.
    std::cout << std::bitset<32>(23456) << "\n";
}
I have an int that I want to first convert to a binary number.
What exactly does that mean? There is no type "binary number". Well, an int is already represented in binary form internally unless you're using a very strange computer, but that's an implementation detail -- conceptually, it is just an integral number.
Each time you print a number to the screen, it must be converted to a string of characters. It just so happens that most I/O systems chose a decimal representation for this process so that humans have an easier time. But there is nothing inherently decimal about int.
Anyway, to generate a base b representation of an integral number x, simply follow this algorithm:
1. Initialize s with the empty string.
2. m = x % b
3. x = x / b
4. Convert m into a digit, d.
5. Append d to s.
6. If x is not zero, go to step 2.
7. Reverse s.
Step 4 is easy if b <= 10 and your computer uses a character encoding where the digits 0-9 are contiguous, because then it's simply d = '0' + m. Otherwise, you need a lookup table.
Steps 5 and 7 can be simplified by appending d on the left of s, if you know ahead of time how much space you will need and start from the right end of the string.
In the case of b == 2 (e.g. binary representation), step 2 can be simplified to m = x & 1, and step 3 can be simplified to x = x >> 1.
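A direct transcription of those steps for an arbitrary base b (this sketch assumes 2 <= b <= 10 so that step 4 stays a simple '0' + m; the binary-only solutions below are specializations of it):

#include <algorithm>
#include <string>

std::string to_base(unsigned x, unsigned b)
{
    std::string s;
    do
    {
        unsigned m = x % b;            // step 2
        x /= b;                        // step 3
        s.push_back('0' + m);          // steps 4 and 5
    } while (x != 0);                  // step 6
    std::reverse(s.begin(), s.end());  // step 7
    return s;
}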
Solution with reverse:
#include <string>
#include <algorithm>

std::string binary(unsigned x)
{
    std::string s;
    do
    {
        s.push_back('0' + (x & 1));
    } while (x >>= 1);
    std::reverse(s.begin(), s.end());
    return s;
}
Solution without reverse:
#include <string>

std::string binary(unsigned x)
{
    // Warning: this breaks for numbers with more than 64 bits
    char buffer[64];
    char* p = buffer + 64;
    do
    {
        *--p = '0' + (x & 1);
    } while (x >>= 1);
    return std::string(p, buffer + 64);
}
AND the number with 100000..., then 010000..., then 001000..., etc. Each time, if the result is 0, put a '0' in a char array, otherwise put a '1'.
const int numberOfBits = sizeof(int) * 8;
char binary[numberOfBits + 1];
int decimal = 29;

for (int i = 0; i < numberOfBits; ++i) {
    if ((decimal & (0x80000000 >> i)) == 0) {
        binary[i] = '0';
    } else {
        binary[i] = '1';
    }
}
binary[numberOfBits] = '\0';
string binaryString(binary);
http://www.phanderson.com/printer/bin_disp.html is a good example.
The basic principle of a simple approach:
Loop until the # is 0
& (bitwise and) the # with 1. Print the result (1 or 0) to the end of string buffer.
Shift the # by 1 bit using >>=.
Repeat loop
Print reversed string buffer
To avoid reversing the string or needing to limit yourself to numbers fitting the buffer string length, you can do the following (a sketch follows the list):
Compute ceiling(log2(N)) - say L
Compute mask = 2^L
Loop until mask == 0:
& (bitwise and) the mask with the #. Print the result (1 or 0).
number &= (mask-1)
mask >>= 1 (divide by 2)
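One possible sketch of that second approach (the helper name is made up; it finds the leading one bit by doubling a mask rather than computing log2 explicitly, which amounts to the same thing):

#include <string>

std::string toBinaryMsbFirst(unsigned n)
{
    if (n == 0) return "0";

    // Find the mask for the highest set bit of n.
    unsigned mask = 1;
    while (mask <= n / 2) mask <<= 1;

    // Walk the mask down, emitting one character per bit position.
    std::string s;
    while (mask != 0) {
        s.push_back((n & mask) ? '1' : '0');
        mask >>= 1;
    }
    return s;
}

No reversal is needed because the bits are emitted most-significant first.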
I assume this is related to your other question on extensible hashing.
First define some mnemonics for your bits:
const int FIRST_BIT = 0x1;
const int SECOND_BIT = 0x2;
const int THIRD_BIT = 0x4;
Then you have your number you want to convert to a bit string:
int x = someValue;
You can check if a bit is set by using the logical & operator.
if (x & FIRST_BIT)
{
    // The first bit is set.
}
And you can keep a std::string and append '1' to that string if a bit is set, and append '0' if the bit is not set. Depending on what order you want the string in, you can start with the last bit and move to the first, or just go first to last.
You can refactor this into a loop and use it for arbitrarily sized numbers by calculating the mnemonic bits above with current_bit_value <<= 1 after each iteration, as in the sketch below.
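A sketch of that refactoring (the function name and the num_bits parameter are made up for illustration; characters are prepended so the most significant bit ends up on the left):

#include <string>

std::string bitsToString(unsigned x, int num_bits)   // e.g. num_bits = 32
{
    std::string s;
    unsigned current_bit_value = 0x1;                // FIRST_BIT
    for (int i = 0; i < num_bits; ++i) {
        s.insert(s.begin(), (x & current_bit_value) ? '1' : '0');
        current_bit_value <<= 1;
    }
    return s;
}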
There isn't a direct function; you can just walk along the bits of the int (hint: see >>) and insert a '1' or '0' into the string.
Sounds like a standard interview / homework type question
Use the sprintf function to store the formatted output in a string variable, instead of printf for printing directly. Note, however, that these functions only work with C strings, and not C++ strings.
There's a small header-only library you can use for this here.
Example:
std::cout << ConvertInteger<Uint32>::ToBinaryString(21);
// Displays "10101"
auto x = ConvertInteger<Int8>::ToBinaryString(21, true);
std::cout << x << "\n"; // displays "00010101"

auto y = ConvertInteger<Uint8>::ToBinaryString(21, true, "0b");
std::cout << y << "\n"; // displays "0b00010101"
Solution without reverse, no additional copy, and with 0-padding:
#include <iostream>
#include <string>

template <short WIDTH>
std::string binary( unsigned x )
{
    std::string buffer( WIDTH, '0' );
    char *p = &buffer[ WIDTH ];

    do {
        --p;
        if (x & 1) *p = '1';
    }
    while (x >>= 1);

    return buffer;
}

int main()
{
    std::cout << "'" << binary<32>(0xf0f0f0f0) << "'" << std::endl;
    return 0;
}
This is my best implementation for converting integers (of any type) to a std::string. You can remove the template if you are only going to use it for a single integer type. To the best of my knowledge, I think there is a good balance here between the safety of C++ and the cryptic nature of C. Make sure to include the needed headers.
template<typename T>
std::string bstring(T n){
    std::string s;
    for(int m = sizeof(n) * 8; m--;){
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}
Use it like so:
std::cout << bstring<size_t>(371) << '\n';
This is the output on my computer (it differs on every computer):
0000000000000000000000000000000000000000000000000000000101110011
Note that the entire binary string is copied, and thus the padded zeros, which help to represent the bit size. So the length of the string is the size of size_t in bits.
Let's try a signed integer (a negative number):
std::cout << bstring<signed int>(-1) << '\n';
This is the output on my computer (as stated, it differs on every computer):
11111111111111111111111111111111
Note that now the string is smaller; this shows that signed int consumes less space than size_t. As you can see, my computer uses the two's complement method to represent signed integers (negative numbers). You can now see why unsigned short(-1) > signed int(1).
Here is a version made just for signed integers, to have this function without templates; i.e. use this if you only intend to convert signed integers to strings.
std::string bstring(int n){
    std::string s;
    for(int m = sizeof(n) * 8; m--;){
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}