Struct to bits in C++

I am learning C++ and I wonder if it is possible to decompose a structure object into a sequence of bits?
// The task is this! I have a structure
struct test {
    // It contains an array
private:
    int arr[8];
public:
    void init() {
        for (int i = 0; i < 8; i++) {
            arr[i] = 5;
        }
    }
};

int main() {
    // at some point this array is initialized
    test h;
    h.init();
    // without referring to the arr field and its elements, we must convert the structure to this format:
    // we know that an int is stored there, and an int is 32 bits -> 00000000 00000000 00000000 00000101
    // - and there are 8 such pieces, one per element in the array
    return -1;
}
Well, we know the size of the array too. We need to convert the structure object to a sequence of bits:
0000000000000000000000000000010100000000000000000000000000000101000000000000000000000000000001010000000000000000000000000000010100000000000000000000000000000101000000000000000000000000000001010000000000000000000000000000010100000000000000000000000000000101

The standard answer for converting a number into a bit string is to use std::bitset. Here I will take a more low-level approach instead: use a bit mask and the & operation to mask out single bits, then assign the corresponding characters to the resulting string.
Masking works with the bitwise AND operator, which is built on the logical AND operation:
Bit  Mask  AND
 0    0     0
 0    1     0
 1    0     0
 1    1     1
As you can see, 0 AND 1 is 0, while 1 AND 1 is 1.
That allows us to access a bit in a byte.
Byte: 10101010
Mask: 00001111
--------------
      00001010
And this is the mechanism we will use.
But I cannot imagine that this is homework, because of the dirty reinterpret_cast that is needed to access the struct from outside.
Anyway, let me present this solution to you.
I find it utterly ugly.
#include <iostream>
#include <string>

// The task is this! I have a structure
struct test {
    // It contains an array
private:
    int arr[8];
public:
    void init() {
        for (int i = 0; i < 8; i++) {
            arr[i] = 5;
        }
    }
};

// Convert an int to a string with the bit representation of the int
std::string intToBits(int value) {
    // Here we will store the result
    std::string result{};
    // We want to mask the bits from MSB to LSB
    unsigned int mask = 1u << 31;
    // Now we will work on 4 bytes with 8 bits each
    for (unsigned int byteNumber = 0; byteNumber < 4; ++byteNumber) {
        for (unsigned int bitNumber = 0; bitNumber < 8; ++bitNumber) {
            // Mask out the bit and store the resulting '1' or '0' in the string
            result += (value & mask) ? '1' : '0';
            // Next mask
            mask >>= 1;
        }
        // Add a space between bytes
        result += ' ';
    }
    // At the end, we do want to have a point
    result.back() = '.';
    return result;
}

int main() {
    // At some point this array is initialized
    test h;
    h.init();
    // Now do the dirty, ugly, and non-compliant type cast
    int* p = reinterpret_cast<int*>(&h);
    // Convert all ints and show the result
    for (unsigned int k = 0; k < 8; ++k)
        std::cout << intToBits(p[k]) << ' ';
    return 0;
}
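For reference, the std::bitset route mentioned at the start would look roughly like this. This is only a sketch of mine, not part of the original answer; it reuses the struct test from above and needs the same non-compliant cast:

#include <bitset>
#include <iostream>

int main() {
    test h;
    h.init();
    // Same dirty cast; std::bitset then does the formatting for us.
    const int* p = reinterpret_cast<const int*>(&h);
    for (int k = 0; k < 8; ++k)
        std::cout << std::bitset<32>(static_cast<unsigned int>(p[k])) << ' ';
    return 0;
}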

Related

Byte array and bitwise operator

I am trying to read the individual bits of a byte array. I am basically iterating through the byte array and want to tell whether each individual bit is 0 or 1.
My problem is, I am unable to differentiate between a 0 and 1 bit. The code is always reading each bit as a 1.
This is my code:
const unsigned char bitmap[] = {
    0x00,0xFF,0xFF,0x00,0x00,0x00,
    0x00,0x00,0xFF,0xFF,0xFF,0xFF,
    0xFF,0xFF,0xFF,0xFF
};

void drawBitmap(Framebuffer *fb) {
    uint8_t x = 1;
    for (int i = 0; i < sizeof(bitmap); ++i) {
        for (int p = 0; p < 8; ++p) {
            if ((bitmap[i] >> p) & 1) { // If bit is set
                fb->drawPixel(x, 1); // **RIGHT HERE** --> I AM ALWAYS GETTING THIS AS TRUE
            }
            x++;
        }
    }
}
Note that there are some bytes that should be all zeroes (0x00). I am assuming by default these are bytes (8 bits), right? Any ideas why I am unable to differentiate between a 1 and a 0?
Note: Here's the whole code... I am trying to use this library: https://github.com/tibounise/SSD1306-AVR with an atmega328P... This just doesn't make any sense. Whenever I use "fb->drawPixel(x, 1);" on its own it works fine, but in that particular function I always get a "1" (a pixel).
#define F_CPU 14745600

#include <stdint.h>
#include <stdio.h>
#include <math.h>
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include <util/delay.h>
//#include "SSD1306.h"
#include "Framebuffer.h"

const unsigned char bitmap[] = {
    0x00,0x00,0x00,0x00,0x00,0x00,
    0x00,0x00,0xFF,0xFF,0xFF,0xFF,
    0xFF,0xFF,0xFF,0xFF
};

void drawBitmap(Framebuffer *fb) {
    uint8_t x = 1;
    int z = 0;
    for (int i = 0; i < sizeof(bitmap); ++i) {
        for (int p = 0; p < 8; ++p) {
            if ((bitmap[i] >> p) & 1) { // If bit is set
                fb->drawPixel(x, 1);
            }
            x++;
        }
    }
}

int main(void) {
    //const uint8_t *bitmap;
    //bitmap = &bitmap1;
    Framebuffer fb;
    Framebuffer *FB;
    // Pointer to the class
    FB = &fb;
    //fb.drawPixel(5, 5);
    drawBitmap(FB);
    fb.show();
    //delay_ms(1000);
    return 0;
}
Any ideas?
Thanks in advance.
Your code seems okay. However, your sample data is whole bytes, not individual bits.
If you are working with a 1-bit bitmap, there is 1 bit per pixel: each bit is 0 or 1 (black or white), and each set of 8 bits is packed into a byte:
0xFF contains 8 bits: "11111111"
0x00 contains 8 bits: "00000000"
An actual bitmap may have the following sequence:
"1011 0001"
Try the same code with more realistic data:

const unsigned char bitmap[] =
{
    1 | 0 | 4 | 8 | 0 | 0 | 0 | 128, // 1011 0001 <- first 8 bits
    0xFF, // <- second set of 8 bits
    0x00, // ...
};
int main(void)
{
    for (int i = 0; i < sizeof(bitmap); ++i)
    {
        printf("%3d ", bitmap[i]);
        for (int p = 0; p < 8; ++p)
            printf(((bitmap[i] >> p) & 1) ? "1" : "0");
        printf("\n");
    }
    return 0;
}
Result:
141 10110001
255 11111111
0 00000000
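Note that the loop in the question reads bits LSB-first ((bitmap[i] >> p) & 1). If the display expects the leftmost pixel in the most significant bit — an assumption about the panel layout, not something stated in the question — the shift has to be inverted:

if ((bitmap[i] >> (7 - p)) & 1) { // MSB-first variant
    fb->drawPixel(x, 1);
}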

Accessing 8-bit data as 7-bit

I have an array of 100 uint8_t's, which is to be treated as a stream of 800 bits and dealt with 7 bits at a time. In other words, if the first element of the 8-bit array holds 0b11001100 and the second holds 0b11110000, then when I come to read it in 7-bit format, the first element of the 7-bit array would be 0b1100110 and the second would be 0b0111100, with the remaining 2 bits held in the 3rd.
The first thing I tried was a union...
struct uint7_t {
    uint8_t i1 : 7;
};

union uint7_8_t {
    uint8_t u8[100];
    uint7_t u7[115];
};
but of course everything's byte-aligned and I essentially end up simply losing the 8th bit of each element.
Does anyone have any ideas on how I can go about doing this?
Just to be clear, this is something of a visual representation of the result of the union:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
0xxxxxxx 0xxxxxxx 0xxxxxxx 0xxxxxxx 32 bits of 7-bit data.
And this represents what it is that I want to do instead:
xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 32 bits of 8 bit data
xxxxxxx xxxxxxx xxxxxxx xxxxxxx xxxx 32 bits of 7-bit data.
I'm aware the last bits may be padded, but that's fine. I just want some way of accessing each byte 7 bits at a time without losing any of the 800 bits. So far the only way I can think of is lots of bit shifting, which of course would work, but I'm sure there's a cleaner way of going about it(?)
Thanks in advance for any answers.
Not sure what you mean by "cleaner". Generally, people who work on this sort of problem regularly consider shifting and masking to be the right primitive tools to use. One can define a bitstream abstraction with a method to read an arbitrary number of bits off the stream. This abstraction sometimes shows up in compression applications. The internals of the method do, of course, use shifting and masking.
One fairly clean approach is to write a function which extracts a 7-bit number at any bit index in an array of unsigned chars. Use a division to convert the bit index to a byte index, and a modulus to get the bit index within the byte. Then shift and mask. The input bits can span two bytes, so you either have to glue together a 16-bit value before extraction, or do two smaller extractions and OR them together to construct the result.
If I were aiming for something moderately performant, I'd likely take one of two approaches:
The first has two state variables saying how many bits to take from the current and next byte. The loop would use shifting, masking, and bitwise OR to produce the current output (a number between 0 and 127, as an int for example), then update both state variables via adding and modulus, and increment the current byte pointer once all bits in the first byte were consumed.
The second approach is to load 56 bits (8 outputs' worth of input) into a 64-bit integer and use a fully unrolled structure to extract each of the 8 outputs. Doing this without using unaligned memory reads requires constructing the 64-bit integer piecemeal. (56 bits is special because the starting bit position is byte-aligned.)
To go really fast, I might try writing SIMD code in Halide. That's beyond scope here, I believe. (And it's not clear it would win much, actually.)
Designs which read more than one byte into an integer at a time will likely have to consider processor byte ordering.
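A minimal sketch of the extraction function described above (my own illustration, assuming LSB-first bit packing; note it reads one byte past the group when the index is near the end of the array):

#include <stdint.h>

// Extract the index'th 7-bit group from a packed byte array.
unsigned get7bits(const uint8_t* bytes, unsigned index) {
    unsigned bitPos  = index * 7;
    unsigned byteIdx = bitPos / 8;  // division -> byte index
    unsigned bitIdx  = bitPos % 8;  // modulus  -> bit index within the byte
    // Glue two bytes into a 16-bit value, then shift and mask.
    unsigned v = bytes[byteIdx] | (bytes[byteIdx + 1] << 8);
    return (v >> bitIdx) & 0x7F;
}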
Process them in groups of 8 (since 8 x 7 bits rounds nicely to 7 whole bytes). Bitwise operators are the order of the day here. Handling the last (up to) 7 numbers is a little fiddly, but not impossible. (This code assumes these are unsigned 7-bit integers! A signed conversion would require you to consider setting the top bit if bit[6] is 1.)
// convert 8 x 7-bit ints in one go
void extract8(const uint8_t input[7], uint8_t output[8])
{
    output[0] =  input[0] & 0x7F;
    output[1] = (input[0] >> 7) | ((input[1] << 1) & 0x7F);
    output[2] = (input[1] >> 6) | ((input[2] << 2) & 0x7F);
    output[3] = (input[2] >> 5) | ((input[3] << 3) & 0x7F);
    output[4] = (input[3] >> 4) | ((input[4] << 4) & 0x7F);
    output[5] = (input[4] >> 3) | ((input[5] << 5) & 0x7F);
    output[6] = (input[5] >> 2) | ((input[6] << 6) & 0x7F);
    output[7] =  input[6] >> 1;
}

// convert array of 7-bit ints to 8-bit
void seven_bit_to_8bit(const uint8_t* const input, uint8_t* const output, const size_t count)
{
    size_t count8 = count >> 3;
    for (size_t i = 0; i < count8; ++i)
    {
        extract8(input + 7 * i, output + 8 * i);
    }

    // handle remaining (up to) 7 values
    const size_t countr = (count % 8);
    if (countr)
    {
        // how many bytes do we need to copy from the input?
        size_t remaining_bits = 7 * countr;
        if (remaining_bits % 8)
        {
            // round up to the next multiple of 8
            remaining_bits += (8 - remaining_bits % 8);
        }
        remaining_bits /= 8; // remaining_bits is now a byte count

        {
            uint8_t in[7] = {0}, out[8] = {0};
            for (size_t i = 0; i < remaining_bits; ++i)
            {
                in[i] = input[count8 * 7 + i];
            }
            extract8(in, out);
            for (size_t i = 0; i < countr; ++i)
            {
                output[count8 * 8 + i] = out[i]; // note: out, not in
            }
        }
    }
}
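A quick sanity check for extract8 (my own test values, not from the original answer):

uint8_t in[7] = { 0xFF, 0x01, 0, 0, 0, 0, 0 };
uint8_t out[8];
extract8(in, out);
// out[0] == 0x7F : the low 7 bits of in[0]
// out[1] == 0x03 : bit 7 of in[0] plus bit 0 of in[1]
// out[2..7] == 0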
Here is a solution that uses the std::vector<bool> specialization. It also uses a similar mechanism to allow access to the seven-bit elements via reference proxy objects.
The member functions allow for the following operations:
uint7_t x{5}; // simple value
Arr<uint7_t> arr(10); // array of size 10
arr[0] = x; // set element
uint7_t y = arr[0]; // get element
arr.push_back(uint7_t{9}); // add element
arr.push_back(x); //
std::cout << "Array size is "
<< arr.size() << '\n'; // get size
for(auto&& i : arr)
std::cout << i << '\n'; // range-for to read values
int z{50};
for(auto&& i : arr)
i = z++; // range-for to change values
auto&& v = arr[1]; // get reference to second element
v = 99; // change second element via reference
Full program:
#include <vector>
#include <iterator>
#include <iostream>

struct uint7_t {
    unsigned int i : 7;
};

struct seven_bit_ref {
    size_t begin;
    size_t end;
    std::vector<bool>& bits;

    seven_bit_ref& operator=(const uint7_t& right)
    {
        auto it{bits.begin() + begin};
        for (int mask{1}; mask != 1 << 7; mask <<= 1)
            *it++ = right.i & mask;
        return *this;
    }

    operator uint7_t() const
    {
        uint7_t r{};
        auto it{bits.begin() + begin};
        for (int i{}; i < 7; ++i)
            r.i += *it++ << i;
        return r;
    }

    seven_bit_ref operator*()
    {
        return *this;
    }

    void operator++()
    {
        begin += 7;
        end += 7;
    }

    bool operator!=(const seven_bit_ref& right)
    {
        return !(begin == right.begin && end == right.end);
    }

    seven_bit_ref operator=(int val)
    {
        uint7_t temp{};
        temp.i = val;
        operator=(temp);
        return *this;
    }
};

template<typename T>
class Arr;

template<>
class Arr<uint7_t> {
public:
    Arr(size_t size) : bits(size * 7, false) {}

    seven_bit_ref operator[](size_t index)
    {
        return {index * 7, index * 7 + 7, bits};
    }

    size_t size()
    {
        return bits.size() / 7;
    }

    void push_back(uint7_t val)
    {
        for (int mask{1}; mask != 1 << 7; mask <<= 1) {
            bits.push_back(val.i & mask);
        }
    }

    seven_bit_ref begin()
    {
        return {0, 7, bits};
    }

    seven_bit_ref end()
    {
        return {size() * 7, size() * 7 + 7, bits};
    }

    std::vector<bool> bits;
};

std::ostream& operator<<(std::ostream& os, uint7_t val)
{
    os << val.i;
    return os;
}

int main()
{
    uint7_t x{5};                    // simple value
    Arr<uint7_t> arr(10);            // array of size 10
    arr[0] = x;                      // set element
    uint7_t y = arr[0];              // get element
    arr.push_back(uint7_t{9});       // add element
    arr.push_back(x);                //
    std::cout << "Array size is "
              << arr.size() << '\n'; // get size
    for (auto&& i : arr)
        std::cout << i << '\n';      // range-for to read values
    int z{50};
    for (auto&& i : arr)
        i = z++;                     // range-for to change values
    auto&& v = arr[1];               // get reference to second element
    v = 99;                          // change second element via reference
    std::cout << "\nAfter changes:\n";
    for (auto&& i : arr)
        std::cout << i << '\n';
}
The following code does what you have asked for; first the output (there is also a live example on ideone).
Output:
Before changing values...:
7 bit representation: 1111111 0000000 0000000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 11111110 00000000 00000000 00000000 00000000 00000000 00000000
After changing values...:
7 bit representation: 1000000 1001100 1110010 1011010 1010100 0000111 1111110 0000000
8 bit representation: 10000001 00110011 10010101 10101010 10000001 11111111 00000000
8 Bits: 11111111 to ulong: 255
7 Bits: 1111110 to ulong: 126
After changing values...:
7 bit representation: 0010000 0101010 0100000 0000000 0000000 0000000 0000000 0000000
8 bit representation: 00100000 10101001 00000000 00000000 00000000 00000000 00000000
It is very straightforward using a std::bitset inside a class called BitVector. I implement one getter and one setter. The getter also returns a std::bitset, of template-argument size M, at the given index selIdx. The given index will be multiplied by M to find the right position. The returned bitset can also be converted to numerical or string values.
The setter takes a uint8_t value as input and again the index selIdx. The bits are shifted into the right position in the bitset.
Further, you can use the getter and setter with different sizes thanks to the template argument M: you can work with a 7- or 8-bit representation, but also 3 or whatever you like.
I'm sure this code is not the best concerning speed, but I think it is a very clear and clean solution. It is also not complete at all, as there is just one getter, one setter and two constructors. Remember to implement error checking for indexes and sizes.
Code:
#include <iostream>
#include <bitset>

template <size_t N> class BitVector
{
private:
    std::bitset<N> _data;

public:
    BitVector (unsigned long num) : _data (num) { };
    BitVector (const std::string& str) : _data (str) { };

    template <size_t M>
    std::bitset<M> getBits (size_t selIdx)
    {
        std::bitset<M> retBitset;
        for (size_t idx = 0; idx < M; ++idx)
        {
            retBitset |= (_data[M * selIdx + idx] << (M - 1 - idx));
        }
        return retBitset;
    }

    template <size_t M>
    void setBits (size_t selIdx, uint8_t num)
    {
        const unsigned char* curByte = reinterpret_cast<const unsigned char*> (&num);
        for (size_t bitIdx = 0; bitIdx < 8; ++bitIdx)
        {
            bool bitSet = (1 == ((*curByte & (1 << (8 - 1 - bitIdx))) >> (8 - 1 - bitIdx)));
            _data.set(M * selIdx + bitIdx, bitSet);
        }
    }

    void print_7_8()
    {
        std::cout << "\n7 bit representation: ";
        for (size_t idx = 0; idx < (N / 7); ++idx)
        {
            std::cout << getBits<7>(idx) << " ";
        }
        std::cout << "\n8 bit representation: ";
        for (size_t idx = 0; idx < N / 8; ++idx)
        {
            std::cout << getBits<8>(idx) << " ";
        }
    }
};

int main ()
{
    BitVector<56> num = 127;
    std::cout << "Before changing values...:";
    num.print_7_8();

    num.setBits<8>(0, 0x81);
    num.setBits<8>(1, 0b00110011);
    num.setBits<8>(2, 0b10010101);
    num.setBits<8>(3, 0xAA);
    num.setBits<8>(4, 0x81);
    num.setBits<8>(5, 0xFF);
    num.setBits<8>(6, 0x00);
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();

    std::cout << "\n\n8 Bits: " << num.getBits<8>(5) << " to ulong: " << num.getBits<8>(5).to_ulong();
    std::cout << "\n7 Bits: " << num.getBits<7>(6) << " to ulong: " << num.getBits<7>(6).to_ulong();

    num = BitVector<56>(std::string("1001010100000100"));
    std::cout << "\n\nAfter changing values...:";
    num.print_7_8();
    return 0;
}
Here is one approach without the manual shifting. This is just a crude proof of concept, but hopefully you will be able to get something out of it. I don't know if you can easily transform your input into a bitset, but I think it should be possible.
int bytes = 0x01234567;
bitset<32> bs(bytes);
cout << "Input: " << bs << endl;
for (int i = 0; i < 5; i++)
{
    bitset<7> slice(bs.to_string().substr(i * 7, 7));
    cout << slice << endl;
}
Also, this is probably much less performant than the bit-shifting version, so I wouldn't recommend it for heavy lifting.
You can use this to get the index'th 7-bit element from in (note that it doesn't have proper end-of-array handling). Simple and fast.
int get7(const uint8_t *in, int index) {
    int fidx = index * 7;
    int idx  = fidx >> 3;
    int sidx = fidx & 7;
    return (in[idx] >> sidx | in[idx + 1] << (8 - sidx)) & 0x7f;
}
You can use direct access or bulk bit packing/unpacking as in TurboPFor:Integer Compression
// Direct read access
// b : bit width 0-16 (7 in your case)
#define bzhi32(u, b) ((u) & ((1u << (b)) - 1))

static inline unsigned bitgetx16(unsigned char *in,
                                 unsigned idx,
                                 unsigned b) {
    unsigned bidx = b * idx;
    return bzhi32(*(unsigned *)((uint16_t *)in + (bidx >> 4)) >> (bidx & 0xf), b);
}

Bitwise operations and setting "flags"

So this is an update to my last post, but I'm still having a lot of trouble understanding how this works. I was given this main function:
void set_flag(int* flag_holder, int flag_position);
int check_flag(int flag_holder, int flag_position);

int main(int argc, char* argv[])
{
    int flag_holder = 0;
    int i;
    set_flag(&flag_holder, 3);
    set_flag(&flag_holder, 16);
    set_flag(&flag_holder, 31);
    for (i = 31; i >= 0; i--) {
        printf("%d", check_flag(flag_holder, i));
        if (i % 4 == 0)
            printf(" ");
    }
    printf("\n");
    return 0;
}
And for the assignment we are supposed to write the functions set_flag and check_flag, so that the output is equal to:
1000 0000 0000 0001 0000 0000 0000 1000
So from what I understand, we're supposed to use the set_flag function to make sure that the nth bit is 1, while the check_flag function returns an integer that is 0 when the nth bit is 0, and 1 when it is 1. I don't understand what set_flag is really doing, and how 3, 16, and 31 will be saved as "flags" which then come back as 1's from check_flag.
When working with binary or hexadecimal values, a common approach is to define a mask that we apply to a main value.
You can easily set one or more bits to '1' with the inclusive OR operator '|'.
E.g.: we want to set bit #0 to '1':

main value 01011000 |
mask       00000001 =
result     01011001

To test a particular bit you can use the AND operator '&'.
E.g.: we want to test bit #3:

main value 01011000 &
mask       00001000 =
result     00001000

Note: you may need to properly format the result; here the & operation returns either zero or non-zero (but not necessarily '1').
So here are the two functions set_flag and check_flag:

void set_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder | mask;
}

int check_flag(int flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    int check = flag_holder & mask;
    return (check != 0 ? 1 : 0);
}
In these scenarios we need a binary mask to set/check only one bit. The code "int mask = 1 << flag_position;" builds this single-bit mask: it sets bit #0 to '1' and then shifts it left to the bit we want to set/check.
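For example (my own illustration, not part of the original answer):

1 << 3   ->  0000 0000 0000 0000 0000 0000 0000 1000   // mask for bit #3
1 << 16  ->  0000 0000 0000 0001 0000 0000 0000 0000   // mask for bit #16
1 << 31  ->  1000 0000 0000 0000 0000 0000 0000 0000   // mask for bit #31

which is exactly why bits 3, 16 and 31 show up as 1's in the expected output.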
Functions set_flag and check_flag:

void set_flag(int* flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    *flag_holder = *flag_holder | mask;
}

int check_flag(int flag_holder, int flag_position) {
    int mask = 1 << flag_position;
    int check = flag_holder & mask;
    return (check != 0 ? 1 : 0);
}
Run it along with the main program and you will get the desired output.

Split array of m bytes into chunks of n bytes

I'm working on a program that manipulates brain data. It receives a value that represents the current magnitude of 8 commonly-recognized types of EEG (brain waves). This value is output as a series of eight 3-byte unsigned integers in little-endian format.
Here is a piece of my code:
if (extendedCodeLevel == 0 && code == ASIC_EEG_POWER_CODE)
{
    fprintf(arq4, "EXCODE level: %d CODE: 0x%02X vLength: %d\n", extendedCodeLevel, code, valueLength);
    fprintf(arq4, "Data value(s):");
    for (i = 0; i < valueLength; i++) fprintf(arq4, " %d", value[i] & 0xFF);
}
The array value is my output. It is the series of bytes that represents the brain waves. The output file currently contains the following data:
EXCODE level: 0x00 CODE: 0x83 vLength: 24
Data value(s): 16 2 17 5 3 2 22 1 2 1 0 0 0 4 0 0 3 0 0 5 1 0 4 8
What I need is to divide the sequence of bytes above into 3-byte chunks, to identify each EEG band. The delta wave is represented by the first 3-byte sequence, theta is represented by the next bytes, and so on. How can I do it?
Assuming that you know your input will always be exactly eight three-byte integers, all you need is a simple loop that reads three bytes from the input and writes them out as a four-byte value. The easiest way to do this is to treat the input as an array of bytes and pull bytes off of this array in groups of three.
// Convert an array of eight 3-byte integers into an array
// of eight 4-byte integers.
void convert_3to4(const void* input, void* output)
{
    uint32_t tmp;
    uint32_t* pOut = output;
    const uint8_t* pIn = input;
    int i;

    for (i = 0; i < 24; i += 3)
    {
        tmp  = pIn[i];
        tmp += (pIn[i+1] << 8);
        tmp += (pIn[i+2] << 16);
        pOut[(i + 2) / 3] = tmp;
    }
}
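For instance, feeding it the 24 sample bytes from the question would give (my own usage sketch, not part of the original answer):

uint8_t raw[24] = { 16, 2, 17,  5, 3, 2,  22, 1, 2,  1, 0, 0,
                     0, 4,  0,  0, 3, 0,   0, 5, 1,  0, 4, 8 };
uint32_t waves[8];
convert_3to4(raw, waves);
// waves[0] == 16 + (2 << 8) + (17 << 16) == 1114640  (delta)
// waves[1] ==  5 + (3 << 8) + ( 2 << 16) ==  131845  (theta), and so on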
Like this? The last bytes will not be printed if the length is not a multiple of 3. Do you need them?
for (i = 0; i < valueLength; i += 3) fprintf(arq4, "%d %d %d - ", value[i] & 0xFF,
                                                                  value[i+1] & 0xFF,
                                                                  value[i+2] & 0xFF);
Converting eight 3-byte little-endian character streams into eight 4-byte integers is fairly trivial:
for (int i = 0; i < 24; ++i)
{
    output[i / 3] |= input[i] << ((i % 3) * 8);
}

I think that (untested) code will do it, assuming input is a 24-entry char array and output is a zero-initialized eight-entry int array.
You might try something like this:
union _32BitValue
{
    uint8_t bytes[4];
    uint32_t uval;
};

size_t extractUint32From3ByteSegemented(const std::vector<uint8_t>& rawData, size_t index, uint32_t& result)
{
    // Maybe do some checks that the vector is big enough to extract the
    // data from it, throwing an exception or returning 0, etc. ...
    _32BitValue tmp;
    tmp.bytes[0] = 0;
    tmp.bytes[1] = rawData[index + 2];
    tmp.bytes[2] = rawData[index + 1];
    tmp.bytes[3] = rawData[index];
    result = ntohl(tmp.uval);
    return index + 3;
}
The code used to parse the values from the raw data array:
size_t index = 0;
std::vector<uint8_t> rawData = readRawData(); // Provide such a method to read the raw data into the vector
std::vector<uint32_t> myDataValues;

while (index < rawData.size())
{
    uint32_t extractedValue;
    index = extractUint32From3ByteSegemented(rawData, index, extractedValue);
    // Depending on what error detection you choose, check for a returned
    // index != 0, or catch the exception ...
    myDataValues.push_back(extractedValue);
}
// Continue with calculations on the extracted values ...
Using the left-shift operator and addition as shown in other answers will do the trick as well, but IMHO this sample shows clearly what's going on: it fills the union's byte array with the value in big-endian (network) order and uses ntohl() to retrieve the result in the host machine's format (big- or little-endian) portably.
What I need is, instead of displaying the whole sequence of 24 bytes, I need to get the 3-byte sequences separately.
You can easily copy the 1-D byte array into the desired 2-D shape.
Example:
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main() {
    /* make up data */
    uint8_t bytes[] =
        { 16,  2, 17,
           5,  3,  2,
          22,  1,  2,
           1,  0,  0,
           0,  4,  0,
           0,  3,  0,
           0,  5,  1,
           0,  4,  8 };
    int32_t n_bytes = sizeof(bytes);
    int32_t chunksize = 3;
    int32_t n_chunks = n_bytes / chunksize + (n_bytes % chunksize ? 1 : 0);

    /* chunkify */
    uint8_t chunks[n_chunks][chunksize];
    memset(chunks, 0, sizeof(uint8_t[n_chunks][chunksize]));
    memcpy(chunks, bytes, n_bytes);

    /* print result */
    size_t i, j;
    for (i = 0; i < n_chunks; i++)
    {
        for (j = 0; j < chunksize; j++)
            printf("%02hhd ", chunks[i][j]);
        printf("\n");
    }
    return 0;
}
The output is:
16 02 17
05 03 02
22 01 02
01 00 00
00 04 00
00 03 00
00 05 01
00 04 08
I used some of the examples here to come up with a solution, so I thought I'd share it. It could be a basis for an interface so that objects can transmit copies of themselves over a network with the hton and ntoh functions, which is actually what I am trying to do.
#include <iostream>
#include <string>
#include <exception>
#include <arpa/inet.h>

using namespace std;

void DispLength(string name, size_t var){
    cout << "The size of " << name << " is : " << var << endl;
}

typedef int8_t byte;

class Bytes {
public:
    Bytes(void* data_ptr, size_t size)
        : size_(size)
    { this->bytes_ = (byte*)data_ptr; }

    ~Bytes(){ bytes_ = NULL; } // Caller is responsible for data deletion.

    const byte& operator[] (int idx){
        if((size_t)idx < size_ && idx >= 0) // note: < size_, not <=
            return bytes_[idx];
        else
            throw exception();
    }

    int32_t ret32(int idx) //-- Return a 32 bit value starting at index idx
    {
        int32_t* ret_ptr = (int32_t*)&((*this)[idx]);
        int32_t ret = *ret_ptr;
        return ret;
    }

    int64_t ret64(int idx) //-- Return a 64 bit value starting at index idx
    {
        int64_t* ret_ptr = (int64_t*)&((*this)[idx]);
        int64_t ret = *ret_ptr;
        return ret;
    }

    template <typename T>
    T retVal(int idx) //-- Return a value of type T starting at index idx
    {
        T* T_ptr = (T*)&((*this)[idx]);
        T T_ret = *T_ptr;
        return T_ret;
    }

protected:
    Bytes() : bytes_(NULL), size_(0) {}

private:
    byte* bytes_; //-- pointer used to scan for bytes
    size_t size_;
};

int main(int argc, char** argv){
    long double LDouble = 1.0;
    Bytes bytes(&LDouble, sizeof(LDouble));
    DispLength(string("LDouble"), sizeof(LDouble));
    DispLength(string("bytes"), sizeof(bytes));
    cout << "As a long double LDouble is " << LDouble << endl;
    for( int i = 0; i < 16; i++){
        cout << "Byte " << i << " : " << bytes[i] << endl;
    }
    cout << "Through the eyes of bytes : " <<
        (long double) bytes.retVal<long double>(0) << endl;
    return 0;
}
You can use bit-manipulation operators. I would use something like this (not actual code, just an example):

for (i = 0; i < 8; i++) {
    temp  = value & 0b111; // bitwise AND with binary 111 keeps the lowest 3 bits
    value = value >> 3;    // shift right to move the next 3 bits into place
}

For 3-byte chunks rather than 3-bit ones, the same idea applies with a 0xFFFFFF mask and a shift of 24.
Some self-documenting, maintainable code might look something like this (untested). Note the value bytes come first in the struct, so that on a little-endian host the three raw bytes land in the low three bytes of data:

typedef union
{
    struct {
        uint8_t value[3];
        uint8_t padding;
    } raw;
    int32_t data;
} Measurement;

// Assumes a little-endian host, matching the little-endian wire format.
void convert(uint8_t* rawValues, int32_t* convertedValues, int convertedSize)
{
    Measurement sample;
    int i;

    memset(&sample, '\0', sizeof(sample));
    for (i = 0; i < convertedSize; ++i)
    {
        memcpy(&sample.raw.value[0], &rawValues[i * sizeof(sample.raw.value)], sizeof(sample.raw.value));
        convertedValues[i] = sample.data;
    }
}

How to read individual bits from an array?

Let's say I have a dynamically allocated array:

int* array = new int[10];

That is 10*4 = 40 bytes, or 10*32 = 320 bits. I want to read the 2nd bit of the 30th byte, or the 242nd bit. What is the easiest way to do so? I know I can access the 30th byte using array[30], but accessing individual bits is more tricky.
bool bitset(void const * data, int bitindex) {
    int byte = bitindex / 8;
    int bit  = bitindex % 8;
    unsigned char const * u = (unsigned char const *) data;
    return (u[byte] & (1 << bit)) != 0;
}
This works!
#define GET_BIT(p, n) ((((unsigned char *)p)[n/8] >> (n%8)) & 0x01)

int main()
{
    int myArray[2] = { 0xaaaaaaaa, 0x00ff00ff };
    for (int i = 0; i < 2 * 32; i++)
        printf("%d", GET_BIT(myArray, i));
    return 0;
}
Output:
0101010101010101010101010101010111111111000000001111111100000000
Be careful of the endianness!
First, if you're doing bitwise operations, it's usually preferable to make the elements an unsigned integral type (although in this case, it really doesn't make that much difference). As for accessing the bits: to access bit i in an array of n ints:
static int const bitsPerWord = sizeof(int) * CHAR_BIT;
assert( i >= 0 && i < n * bitsPerWord );
int wordIndex = i / bitsPerWord;
int bitIndex = i % bitsPerWord;
then to read:
return (array[wordIndex] & (1 << bitIndex)) != 0;
to set:
array[wordIndex] |= 1 << bitIndex;
and to reset:
array[wordIndex] &= ~(1 << bitIndex);
Or you can use bitset, if n is constant, or vector<bool> or boost::dynamic_bitset if it's not, and let someone else do the work.
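A minimal sketch of the vector<bool> route mentioned above (my own example, not from the original answer):

#include <iostream>
#include <vector>

int main() {
    std::vector<bool> bits(10 * 32, false); // the bits of 10 ints
    bits[241] = true;                       // set the 242nd bit (index 241)
    std::cout << bits[241] << '\n';         // read it back: prints 1
}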
You can use something like this:

!((array[30] & 2) == 0)

array[30] is the integer.
& 2 is an AND operation which masks the second bit (2 = 00000010).
== 0 checks if the masked result is 0.
! negates that result, because we're checking whether it's 1, not zero.
You need bit operations here...

if (array[5] & 0x1)
{
    // the first bit in array[5] is 1
}
else
{
    // the first bit is 0
}

if (array[5] & 0x8)
{
    // the 4th bit in array[5] is 1
}
else
{
    // the 4th bit is 0
}

0x8 is 00001000 in binary. The ANDing masks all other bits and lets you see whether the bit is 1 or 0.
An int is typically 32 bits, so you would need to do some arithmetic to get at a certain bit number in the entire array.
EDITED based on the comment below - the array contains 32-bit ints, not 8-bit uchars.

int pos = 241; // bit indexes start at 0
bool bit242 = (array[pos / 32] >> (pos % 32)) & 1;
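As a quick check (my own arithmetic, not part of the original answer): pos / 32 == 7 and pos % 32 == 17, so the expression reads bit 17 of array[7], which is indeed the 242nd bit when counting from bit 0.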