I want to convert an unsigned short value from MSB first to LSB first. I tried the code below, but it's not working. Can someone point out my error?
#include <iostream>
using namespace std;
int main()
{
unsigned short value = 0x000A;
char *m_pCurrent = (char *)&value;
short temp;
temp = *(m_pCurrent+1);
temp = (temp << 8) | *(unsigned char *)m_pCurrent;
m_pCurrent += sizeof(short);
cout << "temp " << temp << endl;
return 0;
}
Here's a simple but slow implementation:
#include <climits> // for CHAR_BIT
#include <cstddef> // for size_t
const size_t USHORT_BIT = CHAR_BIT * sizeof(unsigned short);
unsigned short ConvertMsbFirstToLsbFirst(const unsigned short input) {
unsigned short output = 0;
for (size_t offset = 0; offset < USHORT_BIT; ++offset) {
output |= ((input >> offset) & 1) << (USHORT_BIT - 1 - offset);
}
return output;
}
You could easily template this to work with any numeric type.
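For instance, such a template might look like this (ReverseBits is my name for it, not from the answer; it assumes T is an unsigned integer type):

```cpp
#include <climits> // CHAR_BIT
#include <cstddef> // size_t

// Hypothetical templated version of the loop above.
// T should be an unsigned integer type.
template <typename T>
T ReverseBits(const T input) {
    const size_t bits = CHAR_BIT * sizeof(T);
    T output = 0;
    for (size_t offset = 0; offset < bits; ++offset) {
        // Take bit `offset` of the input and place it mirrored in the output.
        output |= ((input >> offset) & 1) << (bits - 1 - offset);
    }
    return output;
}
```

For example, ReverseBits<unsigned short>(0x000A) yields 0x5000.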
What was wrong: you first assigned the value's MSB to temp's LSB, then shifted it back up to the MSB and OR'd the value's LSB into the LSB. In other words, you had interchanged *(m_pCurrent + 1) and *m_pCurrent, so the whole thing had no effect.
The simplified code:
#include <iostream>
using namespace std;
int main()
{
unsigned short value = 0x00FF;
unsigned short temp = ((unsigned char*) &value)[0]; // assign value's LSB (unsigned char avoids sign extension)
temp = (temp << 8) | ((unsigned char*) &value)[1]; // shift LSB to MSB and add value's MSB
cout << "temp " << temp << endl;
return 0;
}
I'm trying to make a function that returns N bits of a given memory chunk, optionally skipping M bits first.
Example:
unsigned char *data = malloc(3);
data[0] = 'A'; data[1] = 'B'; data[2] = 'C';
read(data, 8, 4);
would skip 4 bits and then read 8 bits from the data chunk "ABC".
"Skipping" bits means it would actually bitshift the entire array, carrying bits from the right to the left.
In this example ABC is
01000001 01000010 01000011
and the function would need to return
0001 0100
This question is a follow-up to my previous question.
Minimal compilable code
#include <ios>
#include <cmath>
#include <bitset>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <iostream>
using namespace std;
typedef unsigned char byte;
typedef struct bit_data {
byte *data;
size_t length;
} bit_data;
/*
Assume 0 <= skip_n_bits <= 8
*/
bit_data *read(size_t n_bits, size_t skip_n_bits) {
bit_data *bits = (bit_data *) malloc(sizeof(struct bit_data));
size_t bytes_to_read = ceil(n_bits / 8.0);
size_t bytes_to_read_with_skip = ceil(n_bits / 8.0) + ceil(skip_n_bits / 8.0);
bits->data = (byte *) calloc(1, bytes_to_read);
bits->length = n_bits;
/* Hardcoded for the sake of this example*/
byte *tmp = (byte *) malloc(3);
tmp[0] = 'A'; tmp[1] = 'B'; tmp[2] = 'C';
/*not working*/
if(skip_n_bits > 0){
unsigned char *tmp2 = (unsigned char *) calloc(1, bytes_to_read_with_skip);
size_t i;
for(i = bytes_to_read_with_skip - 1; i > 0; i--) {
tmp2[i] = tmp[i] << skip_n_bits;
tmp2[i - 1] = (tmp[i - 1] << skip_n_bits) | (tmp[i] >> (8 - skip_n_bits));
}
memcpy(bits->data, tmp2, bytes_to_read);
free(tmp2);
}else{
memcpy(bits->data, tmp, bytes_to_read);
}
free(tmp);
return bits;
}
int main(void) {
//Reading "ABC"
//01000001 01000010 01000011
bit_data *res = read(8, 4);
cout << bitset<8>(*res->data);
cout << " -> Should be '00010100'";
return 0;
}
The current code returns 00000000 instead of 00010100.
I feel like the error is something small, but I'm missing it. Where is the problem?
Your code is tagged as C++, and indeed you're already using C++ constructs like bitset; however, it's very C-like. The first thing to do, I think, would be to use more C++.
Turns out bitset is pretty flexible already. My approach would be to create one to store all the bits in our input data, and then grab a subset of that based on the number you wish to skip, and return the subset:
template<size_t N, size_t M, typename T = unsigned char>
std::bitset<N> read(size_t skip_n_bits, const std::array<T, M>& data)
{
const size_t numBits = sizeof(T) * 8;
std::bitset<N> toReturn; // initially all zeros
// if we want to skip all bits, return all zeros
if (M*numBits <= skip_n_bits)
return toReturn;
// create a bitset to store all the bits represented in our data array
std::bitset<M*numBits> tmp;
// set bits in tmp based on data
// convert T into bit representations
size_t pos = M*numBits-1;
for (const T& element : data)
{
for (size_t i=0; i < numBits; ++i)
{
tmp.set(pos-i, (1 << (numBits - i-1)) & element);
}
pos -= numBits;
}
// grab just the bits we need
size_t startBit = tmp.size()-skip_n_bits-1;
for (size_t i = 0; i < N; ++i)
{
toReturn[N-i-1] = tmp[startBit];
tmp <<= 1;
}
return toReturn;
}
And now we can call it like so:
// return 8-bit bitset, skip 12 bits
std::array<unsigned char, 3> data{{'A', 'B', 'C'}};
auto&& returned = read<8>(12, data);
std::cout << returned << std::endl;
Prints
00100100
which is precisely our input 01000001 01000010 01000011 skipping the first twelve bits (from the left towards the right), and only grabbing the next 8 available.
I'd argue this is a bit easier to read than what you've got, esp. from a C++ programmer's point of view.
I'm trying to figure out how to most efficiently parse the following into hex segments with C++98.
// One lump, no delimiters
char hexData[] = "50FFFEF080";
and I want to parse out 50, FF, FE, and F080 (assuming hexData will be in this format every time) into base 10, yielding something like:
var1=80
var2=255
var3=254
var4=61568
Here's one strategy.
Copy the necessary characters one at a time to a temporary string.
Use strtol to extract the numbers.
Program:
#include <stdio.h>
#include <stdlib.h>
int main()
{
char hexData[] = "50FFFEF080";
int i = 0;
int var[4];
char temp[5] = {};
char* end = NULL;
for ( i = 0; i < 3; ++i )
{
temp[0] = hexData[i*2];
temp[1] = hexData[i*2+1];
var[i] = (int)strtol(temp, &end, 16);
printf("var[%d]: %d\n", i, var[i]);
}
// The last number.
temp[0] = hexData[3*2];
temp[1] = hexData[3*2+1];
temp[2] = hexData[3*2+2];
temp[3] = hexData[3*2+3];
var[3] = (int)strtol(temp, &end, 16);
printf("var[3]: %d\n", var[3]);
return 0;
}
Output:
var[0]: 80
var[1]: 255
var[2]: 254
var[3]: 61568
You can convert the whole string to one number and then use bitwise operations to extract any bytes or bits you need. Try this:
#include <stdint.h>
#include <inttypes.h> // SCNx64 / PRIu64 for portable 64-bit format specifiers
#include <stdio.h>
#include <stdlib.h>
int main()
{
char hexData[] = "50FFFEF080";
uint64_t number; // 64-bit number
// conversion from char string to one big number
sscanf(hexData, "%" SCNx64, &number); // read as a hex number
uint64_t tmp = number; // copy of the initial number for the bitwise operations
// use masks to get particular bytes
printf("%" PRIu64 "\n", tmp & 0xFFFF); // prints the last two bytes as a decimal number: 61568
// or copy to some other variable
unsigned int lastValue = tmp & 0xFFFF; // lastValue now holds 61568 (0xF080)
tmp >>= 16; // remove the last two bytes with a right shift
printf("%" PRIu64 "\n", tmp & 0xFF); // prints the last byte: 254
tmp >>= 8; // remove the last byte with a right shift
printf("%" PRIu64 "\n", tmp & 0xFF); // prints 255
tmp >>= 8; // remove the last byte with a right shift
printf("%" PRIu64 "\n", tmp & 0xFF); // prints 80
return 0;
}
#include <iostream>
#include <sstream> // for std::istringstream
int main() {
std::istringstream buffer("50FFFEF080");
unsigned long long value;
buffer >> std::hex >> value;
int var1 = value & 0xFFFF;
int var2 = (value >> 16) & 0xFF;
int var3 = (value >> 24) & 0xFF;
int var4 = (value >> 32) & 0xFF;
return 0;
}
I have a uint32_t variable and I want to randomly modify the 10 least significant bits (0-9), and then, still randomly, the bits from the 10th to the 23rd. I wrote this simple program in C++; it works for the first 10 bits but not for the others, and I can't understand why.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <iostream>
#include <math.h>
using namespace std;
void printuint(uint32_t value);
int main(){
uint32_t initval=0xFFFFFFFF;
uint32_t address;
uint32_t value;
uint32_t final;
address=rand()%1024;
address<<=23;
printf("address \n");
printuint(address);
printf("final\n");
final = (initval & address);
printuint(final);
return 0;
}
void printuint (uint32_t value){
while (value) {
printf("%d", value & 1);
value >>= 1;
}
cout<<endl;
}
Adding this
value = rand() % 16384;
printuint(value);
and modifying final = (initval & address) & value;
Here's an example of flipping random bits:
#include <cstdlib>
#include <ctime>
#include <climits>
#include <iostream>
int main(void)
{
srand(time(NULL));
unsigned int value = 0;
for (unsigned int iterations = 0;
iterations < 10;
++iterations)
{
// sizeof() counts bytes, so multiply by CHAR_BIT to get the bit count
unsigned int bit_position_to_change = rand() % (sizeof(unsigned int) * CHAR_BIT);
unsigned int bit_value = 1u << bit_position_to_change;
value = value ^ bit_value; // flip the bit.
std::cout << "Iteration: " << std::dec << iterations
<< ", value: 0x" << std::hex << value
<< "\n";
}
return EXIT_SUCCESS;
}
The exclusive-OR function, represented by operator ^, is good for flipping bits.
Another method is to replace bits:
unsigned int bit_pattern;
unsigned int bit_mask; // contains a 1 bit in each position to replace.
value = value & ~bit_mask; // Clear bits using the mask
value = value | bit_pattern; // Put new bit pattern in place.
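As a runnable sketch of that clear-and-replace pattern (the function name and example values are mine):

```cpp
// Replace the bits selected by bit_mask with bit_pattern.
// bit_pattern must only contain bits that lie inside the mask.
unsigned int replace_bits(unsigned int value,
                          unsigned int bit_mask,
                          unsigned int bit_pattern) {
    value = value & ~bit_mask;   // clear bits using the mask
    value = value | bit_pattern; // put the new bit pattern in place
    return value;
}
```

For instance, replace_bits(0xABCD1234, 0x0000FF00, 0x00005600) replaces bits 8-15 and yields 0xABCD5634.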
Sorry I solved my problem with more patience.
What I meant to do is this:
uint32_t initval;
uint32_t address(1023);
bitset<32> bits(address);
cout << bits.to_string() << endl;
uint32_t value(16383);
value<<=10;
bitset<32> bitsvalue(value);
cout << bitsvalue.to_string() << endl;
initval = address | value;
bitset<32> bitsinit(initval);
cout << bitsinit.to_string() << endl;
return 0;
Hey everyone, this may turn out to be a simple, stupid question, but it has been giving me headaches for a while now. I'm reading data from a Named Binary Tag file, and the code is working except when I try to read big-endian numbers. The code that gets an integer looks like this:
long NBTTypes::getInteger(istream &in, int num_bytes, bool isBigEndian)
{
long result = 0;
char buff[8];
//get bytes
readData(in, buff, num_bytes, isBigEndian);
//convert to integer
cout <<"Converting bytes to integer..." << endl;
result = buff[0];
cout <<"Result starts at " << result << endl;
for(int i = 1; i < num_bytes; ++i)
{
result = (result << 8) | buff[i];
cout <<"Result is now " << result << endl;
}
cout <<"Done." << endl;
return result;
}
And the readData function:
void NBTTypes::readData(istream &in, char *buffer, unsigned long num_bytes, bool BE)
{
char hold;
//get data
in.read(buffer, num_bytes);
if(BE)
{
//convert to little-endian
cout <<"Converting to a little-endian number..." << endl;
for(unsigned long i = 0; i < num_bytes / 2; ++i)
{
hold = buffer[i];
buffer[i] = buffer[num_bytes - i - 1];
buffer[num_bytes - i - 1] = hold;
}
cout <<"Done." << endl;
}
}
This code originally worked (gave correct positive values), but now for whatever reason the values I get are either over or underflowing. What am I missing?
Your byte order swapping is fine, however building the integer from the sequences of bytes is not.
First of all, you get the endianness wrong: the first byte you read in becomes the most significant byte, while it should be the other way around.
Then, when OR-ing in the characters from the array, be aware that they are promoted to int, which, for a negative signed char, sets a lot of additional bits unless you mask them out.
Finally, when long is wider than num_bytes, you need to sign-extend the bits.
This code works:
union {
long s; // Signed result
unsigned long u; // Use unsigned for safe bit-shifting
} result;
int i = num_bytes-1;
if (buff[i] & 0x80)
result.s = -1; // sign-extend
else
result.s = 0;
for (; i >= 0; --i)
result.u = (result.u << 8) | (0xff & buff[i]);
return result.s;
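Wrapped into a self-contained function (my own framing of the snippet above; the name FromLittleEndian is mine), it can be reused like this:

```cpp
// Builds a signed long from num_bytes little-endian bytes,
// sign-extending from the top byte's sign bit.
long FromLittleEndian(const char *buff, int num_bytes) {
    union {
        long s;          // signed result
        unsigned long u; // unsigned alias for safe bit-shifting
    } result;
    int i = num_bytes - 1;
    if (buff[i] & 0x80)
        result.s = -1;   // sign-extend: start from all one bits
    else
        result.s = 0;
    for (; i >= 0; --i)
        result.u = (result.u << 8) | (0xff & buff[i]);
    return result.s;
}
```

For example, the bytes {0x02, 0x01} (little-endian) come back as 258, and {0xFF, 0xFF} as -1.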
I'm working on a homework assignment for my C++ class. The question I am working on reads as follows:
Write a function that takes an unsigned short int (2 bytes) and swaps the bytes. For example, if the x = 258 ( 00000001 00000010 ) after the swap, x will be 513 ( 00000010 00000001 ).
Here is my code so far:
#include <iostream>
using namespace std;
unsigned short int ByteSwap(unsigned short int *x);
int main()
{
unsigned short int x = 258;
ByteSwap(&x);
cout << endl << x << endl;
system("pause");
return 0;
}
and
unsigned short int ByteSwap(unsigned short int *x)
{
long s;
long byte1[8], byte2[8];
for (int i = 0; i < 16; i++)
{
s = (*x >> i)%2;
if(i < 8)
{
byte1[i] = s;
cout << byte1[i];
}
if(i == 8)
cout << " ";
if(i >= 8)
{
byte2[i-8] = s;
cout << byte2[i];
}
}
//Here I need to swap the two bytes
return *x;
}
My code has two problems I am hoping you can help me solve.
For some reason both of my bytes are 01000000
I really am not sure how I would swap the bytes. My teacher's notes on bit manipulation are very broken and hard to follow, and do not make much sense to me.
Thank you very much in advance. I truly appreciate you helping me.
New in C++23:
The standard library now has a function that provides exactly this facility:
#include <iostream>
#include <bit>
int main() {
unsigned short x = 258;
x = std::byteswap(x);
std::cout << x << std::endl;
}
Original Answer:
I think you're overcomplicating it. If we assume a short consists of 2 bytes (16 bits), all you need to do is:
extract the high byte hibyte = (x & 0xff00) >> 8;
extract the low byte lobyte = (x & 0xff);
combine them in the reverse order x = lobyte << 8 | hibyte;
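Those three steps, combined into one small function (byteswap16 is my name for it):

```cpp
#include <cstdint>

// Swap the two bytes of a 16-bit value using the three steps above.
uint16_t byteswap16(uint16_t x) {
    uint16_t hibyte = (x & 0xff00) >> 8; // extract the high byte
    uint16_t lobyte = (x & 0xff);        // extract the low byte
    return lobyte << 8 | hibyte;         // combine them in reverse order
}
```

With the question's example, byteswap16(258) returns 513.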
It looks like you are trying to swap them a single bit at a time. That's a bit... crazy. What you need to do is isolate the 2 bytes and then just do some shifting. Let's break it down:
uint16_t x = 258;
uint16_t hi = (x & 0xff00); // isolate the upper byte with the AND operator
uint16_t lo = (x & 0xff); // isolate the lower byte with the AND operator
Now you just need to recombine them in the opposite order:
uint16_t y = (lo << 8); // shift the lower byte to the high position and assign it to y
y |= (hi >> 8); // OR in the upper half, into the low position
Of course this can be done in less steps. For example:
uint16_t y = (lo << 8) | (hi >> 8);
Or to swap without using any temporary variables:
uint16_t y = ((x & 0xff) << 8) | ((x & 0xff00) >> 8);
You're making hard work of that.
You only need to exchange the bytes: work out how to extract the two byte values, then how to re-assemble them the other way around.
(homework so no full answer given)
EDIT: Not sure why I bothered :) The usefulness of an answer to a homework question is measured by how much the OP (and maybe other readers) learn, which isn't maximized by giving the answer to the homework question directly...
Here is an unrolled example to demonstrate byte by byte:
unsigned int swap_bytes(unsigned int original_value)
{
unsigned int new_value = 0; // Start with a known value.
unsigned int byte; // Temporary variable.
// Copy the lowest order byte from the original to
// the new value:
byte = original_value & 0xFF; // Keep only the lowest byte from original value.
new_value = new_value * 0x100; // Shift one byte left to make room for a new byte.
new_value |= byte; // Put the byte, from original, into new value.
// For the next byte, shift the original value by one byte
// and repeat the process:
original_value = original_value >> 8; // 8 bits per byte.
byte = original_value & 0xFF; // Keep only the lowest byte from original value.
new_value = new_value * 0x100; // Shift one byte left to make room for a new byte.
new_value |= byte; // Put the byte, from original, into new value.
//...
return new_value;
}
Ugly implementation of Jerry's suggestion to treat the short as an array of two bytes:
#include <iostream>
typedef union mini
{
unsigned char b[2];
short s;
} micro;
int main()
{
micro x;
x.s = 258;
unsigned char tmp = x.b[0];
x.b[0] = x.b[1];
x.b[1] = tmp;
std::cout << x.s << std::endl;
}
Using library functions, the following code may be useful (in a non-homework context):
unsigned long swap_bytes_with_value_size(unsigned long value, unsigned int value_size) {
switch (value_size) {
case sizeof(char):
return value;
case sizeof(short):
return _byteswap_ushort(static_cast<unsigned short>(value));
case sizeof(int):
return _byteswap_ulong(value);
case sizeof(long long):
return static_cast<unsigned long>(_byteswap_uint64(value));
default:
printf("Invalid value size");
return 0;
}
}
The byte swapping functions are defined in stdlib.h at least when using the MinGW toolchain.
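Since the _byteswap_* intrinsics are compiler-specific (MSVC/MinGW), a portable shift-and-mask fallback might look like this (a sketch of mine, not part of the original answer):

```cpp
#include <cstdint>

// Portable byte-swap helpers using only shifts and masks (hypothetical names).
uint16_t bswap16(uint16_t v) {
    return (v >> 8) | (v << 8); // result truncated back to 16 bits on return
}

uint32_t bswap32(uint32_t v) {
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}
```

For example, bswap16(0x1234) gives 0x3412 and bswap32(0x12345678) gives 0x78563412. GCC and Clang offer equivalent __builtin_bswap16/32/64 intrinsics.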
#include <stdio.h>
int main()
{
unsigned short a = 258;
a = (a>>8)|((a&0xff)<<8);
printf("%d",a);
}
While you can do this with bit manipulation, you can also do without, if you prefer. Either way, you shouldn't need any loops though. To do it without bit manipulation, you'd view the short as an array of two chars, and swap the two chars, in roughly the same way as you would swap two items while (for example) sorting an array.
To do it with bit manipulation, the swapped version is basically the lower byte shifted left 8 bits OR'd with the upper byte shifted right 8 bits. You'll probably want to treat it as an unsigned type, though, to ensure the upper half doesn't get filled with one bits when you do the right shift.
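A sketch of the no-bit-manipulation variant described above (SwapViaChars is my name for it; memcpy sidesteps strict-aliasing concerns):

```cpp
#include <cstring>

// Swap the two bytes of a 16-bit unsigned short without bit operations
// (assumes sizeof(unsigned short) == 2).
unsigned short SwapViaChars(unsigned short x) {
    unsigned char b[2];
    std::memcpy(b, &x, 2);    // view the short as two bytes
    unsigned char tmp = b[0]; // classic two-element swap
    b[0] = b[1];
    b[1] = tmp;
    std::memcpy(&x, b, 2);
    return x;
}
```

This works regardless of the host's endianness, since whichever byte order is in memory gets physically reversed: SwapViaChars(258) returns 513.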
This should also work for you.
#include <iostream>
int main() {
unsigned int i = 0xCCFF;
std::cout << std::hex << i << std::endl;
i = ( ((i<<8) & 0xFFFF) | ((i >>8) & 0xFFFF)); // swaps the bytes
std::cout << std::hex << i << std::endl;
}
A bit old fashioned, but still a good bit of fun.
XOR swap: ( see How does XOR variable swapping work? )
#include <iostream>
#include <stdint.h>
int main()
{
uint16_t x = 0x1234;
uint8_t *a = reinterpret_cast<uint8_t*>(&x);
std::cout << std::hex << x << std::endl;
*(a+0) ^= *(a+1) ^= *(a+0) ^= *(a+1); // chaining like this is only well-defined since C++17; split into three statements for older standards
std::cout << std::hex << x << std::endl;
}
This is a problem:
byte2[i-8] = s;
cout << byte2[i];//<--should be i-8 as well
This is causing a buffer overrun.
However, that's not a great way to do it. Look into the bit shift operators << and >>.