Bit masking with hexadecimal in C++

I need to mask my binary output with hexadecimal variables. Do I need to convert the binary output to hexadecimal (or the hexadecimal variables to binary)? Or is there a way in C++ to mask them directly and store the result in a new variable?
Edit: The binary output is stored in a std::bitset variable.

The use of std::bitset wasn't mentioned in your question; please include that detail next time.
You need to create a bitmask for the hex value as well. Then you can simply & the two bitsets:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> value{ 0x03 };
    std::bitset<8> mask{ 0x01 };
    std::bitset<8> masked_value = value & mask;
    std::cout << value.to_string() << " & " << mask.to_string() << " = " << masked_value.to_string() << "\n";
}
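If you then need the masked result as a plain integer rather than a bitset, std::bitset::to_ulong() converts it back, and std::hex prints it in hexadecimal. A minimal sketch of that extra step (the variable names are just illustrative):
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> value{ 0x03 };
    std::bitset<8> mask{ 0x01 };

    // Convert the masked bitset back to an ordinary unsigned integer.
    unsigned long masked = (value & mask).to_ulong();

    std::cout << std::hex << "0x" << masked << "\n"; // prints 0x1
}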

Related

Arrays of enums packed into bit fields in MSVC++

Using MS Visual Studio 2022, I am trying to pack two items into a union of size 16 bits, but I am having problems with the correct syntax.
The first item is an unsigned short int, so no problems there. The other is an array of 5 items, each two bits long. So imagine:
enum States {unused, on, off};
// Should be able to store this in a 2 bit field
then I want
States myArray[5];
// Should be able to fit in 10 bits and
// be unioned with my unsigned short
Unfortunately I am completely failing to work out the syntax that would make my array fit into 16 bits. Any ideas?
You can't do that. An array is an array, not some packed bits.
What you can do is use manual bit manipulation:
#include <iostream>
#include <cstdint>
#include <bitset>
#include <climits>

enum status {
    on     = 0x03,
    off    = 0x01,
    unused = 0x00
};

constexpr std::uint8_t status_bit_width = 2;

// Stores status s in the 2-bit slot selected by id and returns the updated vector.
std::uint16_t encode(status s, std::uint8_t id, std::uint16_t status_vec) {
    if (id >= (CHAR_BIT * sizeof(std::uint16_t)) / status_bit_width) {
        std::cout << "illegal id" << std::endl;
        return status_vec;
    }
    std::uint8_t bit_value = s;
    status_vec |= bit_value << (id * status_bit_width);
    return status_vec;
}

int main(void) {
    std::uint16_t bits = 0;
    bits = encode(on, 1, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 2, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(unused, 3, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 4, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(off, 7, bits);
    std::cout << std::bitset<16>(bits) << std::endl;
    bits = encode(on, 8, bits); // out of range: prints "illegal id"
}
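Going the other way, a decode helper (not part of the original answer, just a sketch reusing the status enum and status_bit_width from the code above) can extract a 2-bit status again:
// Hypothetical companion to encode(): reads the 2-bit slot selected by id.
status decode(std::uint8_t id, std::uint16_t status_vec) {
    if (id >= (CHAR_BIT * sizeof(std::uint16_t)) / status_bit_width) {
        return unused; // out-of-range ids are reported as unused here
    }
    return static_cast<status>((status_vec >> (id * status_bit_width)) & 0x03);
}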

Inputting data to stringstream with hexadecimal representation

I am attempting to extract a hash-digest in hexadecimal via a stringstream, but I cannot get it to work when iterating over data.
Using std::hex I can do this easily with normal integer literals, like this:
#include <sstream>
#include <iostream>
std::stringstream my_stream;
my_stream << std::hex;
my_stream << 100;
std::cout << my_stream.str() << std::endl; // Prints "64"
However when I try to push in data from a digest it just interprets the data as characters and pushes them into the stringstream. Here is the function:
#include <sstream>
#include <sha.h> // Crypto++ library required

std::string hash_string(const std::string& message) {
    using namespace CryptoPP;
    std::stringstream buffer;
    byte digest[SHA256::DIGESTSIZE]; // 32 bytes or 256 bits
    static SHA256 local_hash;
    local_hash.CalculateDigest(digest,
                               reinterpret_cast<byte*>(const_cast<char*>(message.data())),
                               message.length());
    // PROBLEMATIC PART
    buffer << std::hex;
    for (size_t i = 0; i < SHA256::DIGESTSIZE; i++) {
        buffer << *(digest+i);
    }
    return buffer.str();
}
The type byte is just a typedef of unsigned char, so I do not see why this would not be inserted correctly. Printing the return value using std::cout gives the ASCII mess of normal character interpretation. Why does it work in the first case, and not in the second case?
Example:
std::string my_hash = hash_string("hello");
std::cout << my_hash << std::endl; // Prints: ",≥M║_░ú♫&Φ;*┼╣Γ₧←▬▲\▼ºB^s♦3bôïÿ$"
First, the std::hex format modifier applies to integers, not to characters. Since you are trying to print unsigned char, the format modifier is not applied. You can fix this by casting to int instead. In your first example, it works because the literal 100 is interpreted as an integer. If you replace 100 with e.g. static_cast<unsigned char>(100), you would no longer get the hexadecimal representation.
Second, std::hex is not enough, since you likely want to pad each value to a 2-digit hex representation (i.e. f should be printed as 0f). You can fix this by also applying the format modifiers std::setfill('0') and std::setw(2).
Applying these modifications, your code would then look like this:
#include <iomanip>
...
buffer << std::hex << std::setfill('0');
for (size_t i = 0; i < SHA256::DIGESTSIZE; i++) {
    buffer << std::setw(2) << static_cast<int>(*(digest+i));
}
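Note that std::setw only applies to the next insertion, which is why it is placed inside the loop above, while std::hex and std::setfill are sticky and only need to be set once.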

Appending bits in C/C++

I want to append two unsigned 32-bit integers into one 64-bit integer. I have tried this code, but it fails. However, it works for appending two 16-bit integers into one 32-bit integer.
Code:
char buffer[33];
char buffer2[33];
char buffer3[33];
/*
uint16 int1 = 6535;
uint16 int2 = 6532;
uint32 int3;
*/
uint32 int1 = 653545;
uint32 int2 = 562425;
uint64 int3;
int3 = int1;
int3 = (int3 << 32 /*(when I am doing 16 bit integers, this 32 turns into a 16)*/) | int2;
itoa(int1, buffer, 2);
itoa(int2, buffer2, 2);
itoa(int3, buffer3, 2);
std::cout << buffer << "|" << buffer2 << " = \n" << buffer3 << "\n";
Output when the 16-bit portion is enabled:
1100110000111|1100110000100 =
11001100001110001100110000100
Output when the 32-bit portion is enabled:
10011111100011101001|10001001010011111001 =
10001001010011111001
Why is it not working? Thanks
I see nothing wrong with this code. It works for me. If there's a bug, it's in the code that's not shown.
A version of the given code, using standardized type declarations and iostream manipulators instead of platform-specific library calls. The bit operations are identical to those in the question.
#include <iostream>
#include <iomanip>
#include <stdint.h>

int main()
{
    uint32_t int1 = 653545;
    uint32_t int2 = 562425;
    uint64_t int3;

    int3 = int1;
    int3 = (int3 << 32) | int2;

    std::cout << std::hex << std::setw(8) << std::setfill('0')
              << int1 << " "
              << std::setw(8) << std::setfill('0')
              << int2 << "="
              << std::setw(16) << std::setfill('0')
              << int3 << std::endl;
    return 0;
}
Resulting output:
0009f8e9 000894f9=0009f8e9000894f9
The bitwise operation looks correct to me. When working with bits, hexadecimal is more convenient. Any bug, if there is one, is in the code that was not shown in the question. As far as "appending bits in C++" goes, what you have in your code appears to be correct.
Try declaring buffer3 as buffer3[65]
Edit:
Sorry, but I don't understand what the complaint is about.
In fact the result is just as expected, and you can infer it from your own result for the 16-bit input.
itoa takes an int, which is 32 bits here, so the 64-bit value is truncated to its low 32 bits when it is passed in. Those low bits hold only the second integer (the first one was shifted into the high 32 bits, which are discarded), so the third line of output shows just int2's binary representation. The 16-bit version works because the combined value still fits in 32 bits.
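If the goal is just to see all 64 bits in binary, one way to sidestep the itoa truncation entirely is std::bitset, similar in spirit to the hex version in the other answer. A minimal sketch:
#include <bitset>
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t int1 = 653545;
    std::uint32_t int2 = 562425;

    // Widen before shifting so the high 32 bits are not lost.
    std::uint64_t int3 = (static_cast<std::uint64_t>(int1) << 32) | int2;

    std::cout << std::bitset<32>(int1) << "|" << std::bitset<32>(int2) << " =\n"
              << std::bitset<64>(int3) << "\n";
}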

Flip bits using XOR 0xffffffff or ~ in C++?

If I want to flip some bits, I was wondering which way is better. Should I flip them using XOR 0xffffffff or by using ~?
I'm afraid that there will be some cases where I might need to pad bits onto the end in one of these ways and not the other, which would make the other way safer to use. I'm wondering if there are times when it's better to use one over the other.
Here is some code that uses both on the same input value, and the output values are always the same.
#include <iostream>
#include <iomanip>

void flipBits(unsigned long value)
{
    const unsigned long ORIGINAL_VALUE = value;
    std::cout << "Original value:" << std::setw(19) << std::hex << value << std::endl;

    value ^= 0xffffffff;
    std::cout << "Value after XOR:" << std::setw(18) << std::hex << value << std::endl;

    value = ORIGINAL_VALUE;
    value = ~value;
    std::cout << "Value after bit negation: " << std::setw(8) << std::hex << value << std::endl << std::endl;
}

int main()
{
    flipBits(0x12345678);
    flipBits(0x11223344);
    flipBits(0xabcdef12);
    flipBits(15);
    flipBits(0xffffffff);
    flipBits(0x0);
    return 0;
}
Output:
Original value: 12345678
Value after XOR: edcba987
Value after bit negation: edcba987
Original value: 11223344
Value after XOR: eeddccbb
Value after bit negation: eeddccbb
Original value: abcdef12
Value after XOR: 543210ed
Value after bit negation: 543210ed
Original value: f
Value after XOR: fffffff0
Value after bit negation: fffffff0
Original value: ffffffff
Value after XOR: 0
Value after bit negation: 0
Original value: 0
Value after XOR: ffffffff
Value after bit negation: ffffffff
Use ~:
You won't be relying on any specific width of the type; for example, int is not 32 bits on all platforms (see the sketch after this list).
It removes the risk of accidentally typing one f too few or too many.
It makes the intent clearer.
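To illustrate the width concern, here is a small sketch assuming a platform where the operand is 64 bits wide (std::uint64_t stands in for such an unsigned long): XOR with 0xffffffff only flips the low 32 bits, while ~ flips all of them.
#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t value = 0x12345678;

    std::cout << std::hex
              << (value ^ 0xffffffff) << "\n"  // edcba987          (high 32 bits untouched)
              << (~value) << "\n";             // ffffffffedcba987  (all 64 bits flipped)
}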
As you're asking about C++ specifically, simply use std::bitset:
#include <iostream>
#include <iomanip>
#include <bitset>
#include <limits>

void flipBits(unsigned long value) {
    std::bitset<std::numeric_limits<unsigned long>::digits> bits(value);
    std::cout << "Original value : 0x" << std::hex << value;
    value = bits.flip().to_ulong();
    std::cout << ", Value after flip: 0x" << std::hex << value << std::endl;
}
As for your concern about just using the ~ operator on the unsigned long value and flipping more bits than actually wanted:
Since std::bitset<NumberOfBits> explicitly specifies the number of bits to operate on, it handles such cases correctly.

How to convert binary data to an integral value

Question
What is the best way to convert binary data to its integral representation?
Context
Let's imagine that we have a buffer containing binary data obtained from an external source such as a socket connection or a binary file. The data is organised in a well defined format, and we know that the first four octets represent a single unsigned 32-bit integer (which could be the size of the following data). What would be the most efficient way to convert those octets to a usable format (such as std::uint32_t)?
Example
Here is what I have tried so far:
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    std::array<char, 4> buffer = { 0x01, 0x02, 0x03, 0x04 };
    std::uint32_t n = 0;

    n |= static_cast<std::uint32_t>(buffer[0]);
    n |= static_cast<std::uint32_t>(buffer[1]) << 8;
    n |= static_cast<std::uint32_t>(buffer[2]) << 16;
    n |= static_cast<std::uint32_t>(buffer[3]) << 24;
    std::cout << "Bit shifting: " << n << "\n";

    n = 0;
    std::memcpy(&n, buffer.data(), buffer.size());
    std::cout << "std::memcpy(): " << n << "\n";

    n = 0;
    std::copy(buffer.begin(), buffer.end(), reinterpret_cast<char*>(&n));
    std::cout << "std::copy(): " << n << "\n";
}
On my system, the output of this program is
Bit shifting: 67305985
std::memcpy(): 67305985
std::copy(): 67305985
Are they all standard compliant, or do they rely on implementation-defined behaviour?
Which one is the most efficient?
Is there a better way to make that conversion?
You are essentially asking about endianness. While your program might work on one computer, it might not on another. If the "well defined format" is network byte order, there is a standard set of macros/functions (ntohl(), htonl(), and friends) for converting between network order and the native order of your specific machine.
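For instance, a minimal sketch using ntohl(), assuming a POSIX system (<arpa/inet.h>; on Windows the equivalent is declared in <winsock2.h>):
#include <arpa/inet.h> // ntohl(), POSIX
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    // Four octets as received from the wire, in network (big-endian) order.
    unsigned char buffer[4] = { 0x01, 0x02, 0x03, 0x04 };

    std::uint32_t n;
    std::memcpy(&n, buffer, sizeof n); // raw copy, still in network byte order
    n = ntohl(n);                      // convert to the host's native order

    std::cout << n << "\n"; // prints 16909060 (0x01020304) regardless of host endianness
}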