I am trying to create bitmapped data, but I am not able to figure out the right logic. Here's my code:
bool a=1;
bool b=0;
bool c=1;
bool d=0;
uint8_t output = a|b|c|d;
printf("outupt = %X", output);
I want my output to be "1010", which is equivalent to hex 0x0A. How do I do it?
The bitwise OR operator ORs the bits in each position. The result of a|b|c|d will be 1 because you're ORing only values of 0 and 1 in the least significant bit position.
You can shift (<<) the bits to the correct positions like this:
uint8_t output = a << 3 | b << 2 | c << 1 | d;
This will result in
00001000 (a << 3)
00000000 (b << 2)
00000010 (c << 1)
| 00000000 (d; d << 0)
--------
00001010 (output)
Strictly speaking, the calculation happens on ints (because of integer promotion) and the intermediate results have more leading zeroes, but in this case we do not need to care about that.
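Putting it all together, a minimal complete program (a sketch reusing the variable names from the question) would be:

#include <cstdint>
#include <cstdio>

int main()
{
    bool a = 1, b = 0, c = 1, d = 0;

    // Shift each flag to its target position, then OR the results together.
    std::uint8_t output = a << 3 | b << 2 | c << 1 | d;

    std::printf("output = %X\n", output);  // prints: output = A
    return 0;
}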
If you're interested in simply setting/clearing/accessing specific bits, you could consider std::bitset:
#include <bitset>
#include <iostream>
using namespace std;

int main()
{
    bool a = 1, b = 0, c = 1, d = 0;
    bitset<8> s;                                 // bit set of 8 bits
    s[3] = a;                                    // access individual bits as if it were an array
    s[2] = b;
    s[1] = c;
    s[0] = d;                                    // index 0 is the least significant bit
    cout << s << endl;                           // streams the bitset as a string of '0' and '1'
    cout << "0x" << hex << s.to_ulong() << endl; // convert the bitset to an unsigned long
    cout << s[3] << endl;                        // access a specific bit
    cout << "Number of bits set: " << s.count() << endl;
}
The advantage is that the code is easier to read and maintain, especially if you're modifying bitmapped data. Setting specific bits using binary arithmetic with a combination of the << and | operators, as explained by Anttii, is a workable solution. But clearing specific bits in an existing bitmap, by combining the use of << and ~ (to create a bit mask) with &, is a little more tricky.
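As a small illustration of that point, here is a sketch of clearing one bit both ways (the values are made up for the example):

#include <bitset>
#include <cstdint>

int main()
{
    std::uint8_t flags = 0x0A;  // 00001010

    // Manual way: build a mask with <<, invert it with ~, then AND it in.
    flags &= ~(1u << 3);        // clears bit 3 -> 00000010

    // std::bitset way: assign through operator[] or call reset().
    std::bitset<8> s(0x0A);
    s[3] = 0;                   // same effect, easier to read
    s.reset(3);                 // equivalent, by index
}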
Another advantage is that you can easily manage large bitsets of hundreds of bits, much larger than the largest built-in type unsigned long long (although doing so will not allow you to convert as easily to an unsigned long or an unsigned long long: you'll have to go via a string).
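For instance, going via the string is the only portable way out once the bitset no longer fits into an unsigned long long; a minimal sketch:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::bitset<100> big;             // wider than unsigned long long
    big.set(99);                      // set the most significant bit
    std::string s = big.to_string();  // works for any size
    std::cout << s.substr(0, 8) << "...\n";
    // big.to_ulong() would throw std::overflow_error here, because bit 99 is set
}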
C only
I would use bit-fields. I know that they are not portable, but for particular embedded hardware (especially microcontrollers) the layout is well defined.
#include <stdio.h>
#include <stdbool.h>

typedef union
{
    struct
    {
        bool a:1;
        bool b:1;
        bool c:1;
        bool d:1;
        bool e:1;
        bool f:1;
    };
    unsigned char byte;
} mydata;

int main(void)
{
    mydata d = { .byte = 0 };  /* zero the whole byte so the unused bits are defined */
    d.a = 1;
    d.b = 0;
    d.c = 1;
    d.d = 0;
    /* note: bit allocation order is implementation-defined; on most ABIs
       a is the least significant bit, so this prints 5 rather than A */
    printf("output = %hhX\n", d.byte);
}
Related
Sorry for my bad English. I need to build an app which converts hex to RGB. I have a file U1.txt with this content inside:
2 3
008000
FF0000
FFFFFF
FFFF00
FF0000
FFFF00
And my Code::Blocks app:
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main()
{
int a;
int b;
string color;
ifstream data("U1.txt");
ofstream result("U1result.txt");
data >> a;
data >> b;
for (int i = 0; i < a * b; i++) {
data >> color;
cout << color[0] * 16 + color[1] << endl;
}
data.close();
result.close();
return 0;
}
This gives me 816, but it should be 0. I think color[0] is not an integer but a char, so it multiplies by the ASCII code. I've tried many ways with atoi and c_str() and it's not working. P.S. Don't suggest stoi(), because I need to do this homework with older C++. Thanks in advance and have a good day ;)
You can read the hexadecimal values directly into an int with std::hex.
int b;
ifstream data("U1.txt");
data >> std::hex >> b;
Since those encodings use 24 bits, you have to start out with an integer type that holds at least 24 bits. And for this kind of packing and unpacking, it really ought to be unsigned, so you don't get tangled up in sign bits. That means using std::uint_least32_t, which is the smallest unsigned type that can hold at least 32 bits. (Yes, 24 would fit better, but there is no least24 type; 32 is the best you can do).
If your compiler doesn't provide those fixed-width types (std::uint_least32_t), you can use unsigned long, which is required to be at least 32 bits wide. It could be larger; the point of std::uint_least32_t is that it is the smallest type with at least 32 bits. On many platforms unsigned int happens to be 32 bits wide, but you can't count on that, so either use the fixed-width type or use unsigned long to be sure you have enough bits.
Since the character inputs are encoded in hexadecimal, you need to tell the input system to interpret them as hex values. So:
std::uint_least32_t value;
data >> std::hex >> value;
Now you've got the value in the low 24 bits of value. You need to pick out the individual R, G, and B parts of that value. That's straightforward. To get the low 8 bits, just mask out the higher ones:
std::cout << (value & 0xFF) << '\n';
To get the next 8 bits, shift and mask:
std::cout << ((value >> 8) & 0xFF) << '\n';
And, naturally, to get the upper 8 bits, shift and mask:
std::cout << ((value >> 16) & 0xFF) << '\n';
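Putting the pieces together, a complete version of the homework program could look like this (a sketch: it keeps the file names and the a * b loop from the question, and assumes you want the components printed in R, G, B order; swap the lines if not):

#include <cstdint>
#include <fstream>

int main()
{
    int a, b;
    std::ifstream data("U1.txt");
    std::ofstream result("U1result.txt");
    data >> a >> b;

    for (int i = 0; i < a * b; i++) {
        std::uint_least32_t value;
        data >> std::hex >> value;  // e.g. "008000" -> 0x008000

        result << ((value >> 16) & 0xFF) << ' '  // red   (high 8 bits)
               << ((value >> 8) & 0xFF) << ' '   // green (middle 8 bits)
               << (value & 0xFF) << '\n';        // blue  (low 8 bits)
    }
}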
A rather inelegant but also working answer is to subtract '0' (48, which is where the digit characters start in ASCII) from each of your chars. Note that this only covers the digits '0'-'9'; the letters 'A'-'F' need a different offset. The ASCII codes are also the reason why you get 816: for "00", color[0] * 16 + color[1] is
48 * 16 + 48 = 816
I have a bitset which is very large, say, 10 billion bits.
What I'd like to do is write this to a file. However using .to_string() actually freezes my computer.
What I'd like to do is iterate over the bits, take 64 bits at a time, turn them into a uint64_t, and then write that to a file.
However, I'm not aware of how to access different ranges of the bitset. How would I do that? I am new to C++ and wasn't sure how to access the underlying bitset::reference, so please provide an example in your answer.
I tried using a pointer but did not get what I expected. Here's an example of what I'm trying so far.
#include <iostream>
#include <bitset>
#include <cstring>
using namespace std;
int main()
{
bitset<50> bit_array(302332342342342323);
cout<<bit_array << "\n";
bitset<50>* p;
p = &bit_array;
p++;
int some_int;
memcpy(&some_int, p , 2);
cout << &bit_array << "\n";
cout << &p << "\n";
cout << some_int << "\n";
return 0;
}
The output:
10000110011010100111011101011011010101011010110011
0x7ffe8aa2b090
0x7ffe8aa2b098
17736
The last number seems to change on each run, which is not what I expect.
There are a couple of errors in the program. The maximum value bitset<50> can hold is 1125899906842623 and this is much less than what bit_array has been initialized with in the program.
some_int has to be defined as a 64-bit unsigned type; verify that unsigned long has 64 bits on your platform, or use std::uint64_t.
After this, test each bit of bit_array in a loop and then do the appropriate bitwise (OR and shift) operations and store the result into some_int.
std::uint64_t some_int = 0;  // accumulates the extracted chunk
std::uint64_t mask = 1;      // moving single-bit mask

std::size_t start_bit = 0;
std::size_t end_bit = 64;

for (std::size_t i = start_bit; i < end_bit; i++) {
    if (bit_array[i])
        some_int |= mask;
    mask <<= 1;
}
You can change the values of start_bit and end_bit appropriately as you navigate through the large bitset.
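To process the whole bitset, wrap this in an outer loop over 64-bit chunks and write each chunk out; a sketch, assuming the bitset size is a multiple of 64 (the size and file name are made up for the example):

#include <bitset>
#include <cstddef>
#include <cstdint>
#include <fstream>

int main()
{
    const std::size_t N = 1024;  // stand-in for the real, much larger size
    std::bitset<N> bit_array;    // ... filled in elsewhere ...

    std::ofstream out("bits.bin", std::ios::binary);
    for (std::size_t chunk = 0; chunk < N / 64; ++chunk) {
        std::uint64_t some_int = 0;
        std::uint64_t mask = 1;
        for (std::size_t i = chunk * 64; i < (chunk + 1) * 64; ++i) {
            if (bit_array[i])
                some_int |= mask;
            mask <<= 1;
        }
        out.write(reinterpret_cast<const char*>(&some_int), sizeof some_int);
    }
}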
For accessing ranges of a bitset, you should look at the provided interface. The lack of something like bitset::data() indicates that you should not try to access the underlying data directly. Doing so, even if it had seemed to work, is fragile, hacky, and probably undefined behavior of some sort.
I see two possibilities for converting a massive bitset into more manageable pieces. A fairly straightforward approach is to just go through bit-by-bit and collect these into an integer of some sort (or write them directly to a file as '0' or '1' if you're not that concerned about file size). Looks like P.W already provided code for this, so I'll skip an example for now.
The second possibility is to use bitwise operators and to_ullong(). The downside of this approach is that it nominally uses auxiliary storage space, specifically two additional bitsets the same size as your original. I say "nominally", though, because a compiler might be clever enough to optimize them away. Might. Maybe not. And you are dealing with sizes over a gigabyte each. Realistically, the bit-by-bit approach is probably the way to go, but I think this example is interesting at a theoretical level.
#include <iostream>
#include <iomanip>
#include <bitset>
#include <cstdint>
using namespace std;
constexpr size_t FULL_SIZE = 120; // Some large number
constexpr size_t CHUNK_SIZE = 64; // Currently the mask assumes 64. Otherwise, this code just
// assumes CHUNK_SIZE is nonzero and at most the number of
// bits in long long (which is at least 64).
int main()
{
// Generate some large bitset. This is just test data, so don't read too much into this.
bitset<FULL_SIZE> bit_array(302332342342342323);
bit_array |= bit_array << (FULL_SIZE/2);
cout << "Source: " << bit_array << "\n";
// The mask avoids overflow in to_ullong().
// The mask should have exactly its CHUNK_SIZE low-order bits set.
// As long as we're dealing with 64-bit chunks, there's a handy constant to handle this.
constexpr bitset<FULL_SIZE> mask64(UINT64_MAX);
cout << "Mask: " << mask64 << "\n";
// Extract chunks.
const size_t num_chunks = (FULL_SIZE + CHUNK_SIZE - 1)/CHUNK_SIZE; // Round up.
for ( size_t i = 0; i < num_chunks; ++i ) {
// Extract the next CHUNK_SIZE bits, then convert to an integer.
const bitset<FULL_SIZE> chunk_set{(bit_array >> (CHUNK_SIZE * i)) & mask64};
unsigned long long chunk_val = chunk_set.to_ullong();
// NOTE: as long as CHUNK_SIZE <= 64, chunk_val can be converted safely to the desired uint64_t.
cout << "Chunk " << dec << i << ": 0x" << hex << setfill('0') << setw(16) << chunk_val << "\n";
}
return 0;
}
The output:
Source: 010000110010000110011010100111011101011011010101011010110011010000110010000110011010100111011101011011010101011010110011
Mask: 000000000000000000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111
Chunk 0: 0x343219a9dd6d56b3
Chunk 1: 0x0043219a9dd6d56b
I need to convert a hexadecimal string to binary and then pass the bits into different variables.
For example, my input is:
std::string hex = "E136";
How do I convert the string into binary output 1110 0001 0011 0110?
After that I need to pass the bit 0 to variable A, bits 1-9 to variable B and bits 10-15 to variable C.
Thanks in advance
How do I convert the string [...]?
Start with a result value of zero; then, for each character (starting at the first, which is the most significant one), determine its value (in the range [0:15]), multiply the result so far by 16, and add the current value. For your given example, this results in
(((0 * 16 + v('E')) * 16 + v('1')) * 16 + v('3')) * 16 + v('6')
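As a sketch, that scheme could be spelled out like this (digit_value and parse_hex are just names for the example; they assume valid input):

#include <string>

// Value of a single hex digit, in [0:15].
unsigned digit_value(char ch)
{
    if (ch >= '0' && ch <= '9') return ch - '0';
    if (ch >= 'A' && ch <= 'F') return ch - 'A' + 10;
    return ch - 'a' + 10;  // lower-case letters
}

unsigned long parse_hex(const std::string& hex)
{
    unsigned long result = 0;
    for (char ch : hex)
        result = result * 16 + digit_value(ch);  // multiply by 16, add digit
    return result;
}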
There are standard library functions that do this for you, such as std::strtoul:
char* end;
unsigned long value = strtoul(hex.c_str(), &end, 16);
// ^^ base!
The end pointer is useful to check whether you have read the entire string:
if(*end == 0)
{
// end of string reached
}
else
{
// some part of the string was left, you might consider this
// as error (could occur if e. g. "f10s12" was passed, then
// end would point to the 's')
}
If you don't care for end checking, you can just pass nullptr instead.
Don't convert back to a string afterwards; you can get the required values by masking (&) and bit-shifting (>>), e.g. getting bits 1-9:
uint32_t b = value >> 1 & 0x1ffU;
Working on integrals is much more efficient than working on strings. Only when you want to print out the final result, then convert back to string (if using a std::ostream, operator<< already does the work for you...).
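For completeness, a small sketch that extracts all three variables from the question this way (the values match the worked example further down):

#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string hex = "E136";
    unsigned long value = std::strtoul(hex.c_str(), nullptr, 16);

    std::uint32_t a = value & 0x1U;         // bit 0
    std::uint32_t b = value >> 1 & 0x1ffU;  // bits 1 ... 9
    std::uint32_t c = value >> 10 & 0x3fU;  // bits 10 ... 15

    std::cout << a << ' ' << b << ' ' << c << '\n';  // 0 155 56
}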
While playing with this sample, I realized that I gave a wrong recommendation:
std::setbase(2) does not work by standard. Ouch! (SO: Why doesn't std::setbase(2) switch to binary output?)
For conversion of numbers to a string with binary digits, something else must be used. I made this small sample. Though the separation of bits is covered as well, my main focus was on output with different bases (and IMHO it is worth another answer):
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <string>
std::string bits(unsigned value, unsigned w)
{
std::string text;
for (unsigned i = 0; i < w || value; ++i) {
text += '0' + (value & 1); // bit -> character '0' or '1'
value >>= 1; // shift right one bit
}
// text is right to left -> must be reversed
std::reverse(text.begin(), text.end());
// done
return text;
}
void print(const char *name, unsigned value)
{
std::cout
<< name << ": "
// decimal output
<< std::setbase(10) << std::setw(5) << value
<< " = "
// binary output
#if 0 // OLD, WRONG:
// std::setbase(2) is not supported by standard - Ouch!
<< "0b" << std::setw(16) << std::setfill('0') << std::setbase(2) << value
#else // NEW:
<< "0b" << bits(value, 16)
#endif // 0
<< " = "
// hexadecimal output
<< "0x" << std::setw(4) << std::setfill('0') << std::setbase(16) << value
<< '\n';
}
int main()
{
std::string hex = "E136";
unsigned value = strtoul(hex.c_str(), nullptr, 16);
print("hex", value);
// bit 0 -> a
unsigned a = value & 0x0001;
// bit 1 ... 9 -> b
unsigned b = (value & 0x03FE) >> 1;
// bit 10 ... 15 -> c
unsigned c = (value & 0xFC00) >> 10;
// report
print(" a ", a);
print(" b ", b);
print(" c ", c);
// done
return 0;
}
Output:
hex: 57654 = 0b1110000100110110 = 0xe136
a : 00000 = 0b0000000000000000 = 0x0000
b : 00155 = 0b0000000010011011 = 0x009b
c : 00056 = 0b0000000000111000 = 0x0038
Concerning the bit operations:
The binary bitwise AND operator (&) is used to set all unintended bits to 0. The second operand can be understood as a mask. It would be more obvious with binary literals, but those are only available since C++14. Hex codes do nearly as well, since a hex digit always represents the same pattern of 4 bits (as 16 = 2^4). After some practice, you usually learn to "see" the bits in the hex code.
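With C++14 you could in fact write the mask from the sample above as a binary literal, which makes the bit pattern explicit; a drop-in replacement for the 0x03FE line (the digit separators are optional):

// bit 1 ... 9 -> b, mask written as a C++14 binary literal
unsigned b = (value & 0b0000'0011'1111'1110) >> 1;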
About the right shift (>>), I was not quite sure. The OP didn't require that the bits be moved anywhere, only that they be separated into distinct variables, so these right-shifts might be unnecessary.
So, this question, which seemed to be trivial, led to a surprising enlightenment (for me).
What is the most efficient way to convert a 16-bit value to a 32-bit value in C++, padding the extra 2 bytes with 0 (i.e. without changing the value, only the size of the variable)?
Include the <cstdint> header and initialize your 32-bit integer with your 16-bit value. Be sure to pay attention to your signs. In the example below I'm converting all integer values (signed or not) to an unsigned 32-bit integer.
Example Conversion
#include <iostream>
#include <cstdint>
#include <iomanip>
using namespace std;
void dumpVar(const uint32_t value)
{
cout << setfill('0') << setw(8) << hex << value << endl;
}
int main()
{
uint16_t test1 = 0xffff;
int16_t test2 = 32767;
uint16_t test3 = 0xf33e;
int16_t test4 = -32768;
dumpVar(test1);
dumpVar(test2);
dumpVar(test3);
dumpVar(test4);
return 0;
}
Sample Output
Notice how negative numbers aren't zero-padded like you might expect. This is sign extension at work: the sign bit is copied into the new high-order bits.
0000ffff
00007fff
0000f33e
ffff8000
C and C++ handle this sort of operation automatically.
For example:
unsigned short small_number = 0xbeef;
int large_number = small_number;
// large_number is now 0x0000beef (zero-extended)
// with a signed short, sign extension would yield 0xffffbeef instead
I have the contents of a file stored in a string object. For simplicity the file only has 5 bytes, which is the size of one integer plus another byte.
What I want to do is take the first four bytes of the string object and somehow store them in a valid integer variable in the program.
Then the program will do various operations on the integer, changing it.
Afterward I want the changed integer stored back into the first four bytes of the string object.
Could anyone tell me how I could achieve this? I would prefer to stick exclusively with the standard C++ library for this. Thanks in advance for any help.
The following code snippet should illustrate a handful of things. Beware of endian differences. Play around with it. Try to understand what's going on. Add some file operations (binary read & write). The only way to really understand how to do this, is to experiment and create some tests.
#include <iostream>
#include <string>
using namespace std;
int main(int argc, char *argv[]) {
int a = 108554107; // some random number for example sake
char c[4]; // simulate std::string containing a binary int
*((int *) &c[0]) = a; // copy the data via a cast (note: this breaks strict aliasing; memcpy is the portable way)
// reassemble a into b, using indexed bytes from c
int b = 0;
b |= (c[3] & 0xff) << 24;
b |= (c[2] & 0xff) << 16;
b |= (c[1] & 0xff) << 8;
b |= c[0] & 0xff;
// show that all three are equivalent
cout << "a: " << a << " b: " << b
<< " c: " << *((int *) &c[0]) << endl;
return 0;
}
If you are reading into a std::string from that file, any zero byte could signal the end of the string (depending on how you read it), so you might end up with a string that is shorter than 5 bytes. Take a look here for how to do binary I/O with C++ streams.
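As a sketch of the pack/unpack itself using only the standard library (memcpy avoids the aliasing cast from the snippet above; it assumes, like the question, that int is 4 bytes):

#include <cstring>
#include <iostream>
#include <string>

int main()
{
    std::string buffer = "\x01\x02\x03\x04X";  // 5 bytes: one int plus an extra byte

    // Extract the first four bytes into an int.
    int value;
    std::memcpy(&value, buffer.data(), sizeof value);

    value += 42;  // ... do various operations on the integer ...

    // Store the changed integer back into the first four bytes.
    std::memcpy(&buffer[0], &value, sizeof value);

    std::cout << buffer.size() << " bytes, last byte: " << buffer[4] << '\n';
}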