I have a 64-bit unsigned integer and for some reason I have to store it inside a string. What I am wondering is: why is the value inside the string the same even after using the byte-swapped integer?
For example:
#include <iostream>
#include <byteswap.h>
using namespace std;
int main()
{
    uint64_t foo = 98;
    uint64_t foo_reversed = bswap_64(foo);

    std::string out = "";
    out.append(reinterpret_cast<const char*>(&foo), sizeof(foo));

    std::string out_reversed = "";
    out_reversed.append(reinterpret_cast<const char*>(&foo_reversed), sizeof(foo_reversed));

    std::cout << "out: " << out << std::endl;
    std::cout << "out_reversed: " << out_reversed << std::endl;
    return 0;
}
The strings out and out_reversed have the exact same value, but I expected them to differ, since the underlying integers foo and foo_reversed are byte-swapped versions of each other.
What am I missing here? Pardon me if it is a trivial mistake, but I'm putting it out here on the chance that I'm missing some concept.
The output I see:
out: b
out_reversed: b
I was not expecting the above value for out_reversed
You can see the same thing with arrays of char:
int main()
{
    char foo[8] = { 98, 0, 0, 0, 0, 0, 0, 0 };
    char foo_reversed[8] = { 0, 0, 0, 0, 0, 0, 0, 98 };

    std::string out(foo, 8);
    std::string out_reversed(foo_reversed, 8);

    std::cout << "out: " << out << std::endl;
    std::cout << "out_reversed: " << out_reversed << std::endl;
    return 0;
}
The chars with value 0 aren't printable, so the terminal doesn't display them.
Here's an alternative way of printing the bytes.
The conversion I'm trying to do is not just reinterpreting the QByteArray's data as an int.
For example, assuming QByteArray a holds the three bytes a[0] = 36, a[1] = 23, and a[2] = 12, I want an int b whose decimal digits are the concatenation of those values, i.e. b = 362312.
To use the QByteArray data as an int, I assigned the QByteArray to a QString and converted that to a string variable.
When I tried to print the string variable, strange unknown data was output.
I intended to call toInt() after confirming that the string printed normally, but the string prints as strange characters, so I could not call toInt().
I ran a lot of tests and the code is messy. I didn't delete the commented-out lines, in order to show that I've tried various things.
if (QCanBus::instance()->plugins().contains(QStringLiteral("socketcan"))) {
    qWarning() << "plugin available";
}

QString errorString;
QCanBusDevice *device = QCanBus::instance()->createDevice(
    QStringLiteral("socketcan"), QStringLiteral("vcan0"), &errorString);
if (!device) {
    qWarning() << errorString;
} else {
    device->connectDevice();
    std::cout << "connected vcan0" << std::endl;
    device->connect(device, &QCanBusDevice::framesReceived, [this, device]() {
        QCanBusFrame frame = device->readFrame();
        QString testV = frame.toString();
        // int testI = testV.split(" ")[0].toInt();
        QString qvSpeed = frame.payload();
        // int a = frame.payload().length();
        std::string text = testV.toUtf8().constData();
        std::string vSpeed = qvSpeed.toLocal8Bit().constData();
        // At this point the vVal values are being updated in real time.
        // I want to pass the updated vVal to the qml gui in real time.
        // int vVal = static_cast<int>(frame.payload()[0]);
        // for(int i = 0; i < frame.payload().length(); ++i)
        //     std::cout << std::hex << static_cast<int>(frame.payload()[i]);
        // std::cout << std::endl;
        // if(vVal)
        //     int tSpeed = static_cast<int>(frame.payload()[0]);
        // std::stringstream stream;
        // stream <<
        testVal1 += 0;
        // if(frame.frameId() == 001)
        //     testVal2 = static_cast<int>(frame.payload()[0]);
        // testVal2 += 20;
        // duration += 200;
        // emit sendMessage(testVal1, testVal2);
        std::cout << vSpeed << std::endl;
        // if(frame.frameId() == 001)
        //     std::cout << testI << std::endl;
        // std::cout << "--------------" << std::hex << static_cast<int>(frame.payload()[0]) << "----------------" << std::endl;
    });
}
Finally: given QByteArray a[3] = {32, 34, 12}, I want to use this data as int b = 323412.
To do that, I thought I would convert it to a string and then to an integer, but even the string does not print normally.
I'm attaching the strange string that is currently printed below.
Since QByteArray stores its data as char * you can just cast its internal data, for example:
#include <QCoreApplication>
#include <iostream>
#include <iomanip>
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    uint32_t num = 1234;
    std::cout << "original number:" << num << std::endl;

    std::cout << "bytes: " << std::endl;
    QByteArray arr(reinterpret_cast<char *>(&num), sizeof(uint32_t));
    for (int i = 0; i < arr.size(); i++)
    {
        std::cout << " 0x" << std::hex << std::setfill('0') << std::setw(2) << (arr.at(i) & 0xFF) << std::endl;
    }

    uint32_t *num2 = reinterpret_cast<uint32_t *>(arr.data());
    std::cout << "casted number :" << std::dec << *num2 << std::endl;

    return a.exec();
}
But I would not recommend this C-style approach, since it is fraught with errors.
By the way, I've never seen CAN data converted to a QString. Usually it's just 8 bytes of data; you'd be better off casting it to a data struct instead, for example:
struct Data
{
    uint32_t value1;
    uint32_t value2;
} inData, outData;

inData.value1 = 1234;
inData.value2 = 5678;

QByteArray arr(reinterpret_cast<char *>(&inData), sizeof(Data));
outData = *reinterpret_cast<Data *>(arr.data());
//memcpy(&outData, arr.data(), static_cast<size_t>(arr.size())); // or this
I have tried to simplify the original code to a simple test example which replicates the issue I am having. I apologize for the simple question in advance; I am a beginner with C++.
So, moving on to the actual question: why do I get 0 as output? For the purposes of this example and for my understanding, the functions should not be modified, except for the numerical values in them if required (meaning I got something wrong).
Many thanks in advance.
static unsigned short buffer[5];

void settingMemory()
{
    memset(buffer, 0, sizeof(buffer));
}

void copingMemory(const unsigned short *pixels)
{
    memcpy(&buffer[5], pixels, 5*sizeof(unsigned short));
}

void printingMemory()
{
    unsigned short *test = buffer;
    std::cout << *test << std::endl;
    std::cout << *test++ << std::endl;
    std::cout << *test++ << std::endl;
    std::cout << *test++ << std::endl;
    std::cout << *test++ << std::endl;
    std::cout << *test++ << std::endl;
}

int main(int argc, char* argv[])
{
    settingMemory();

    unsigned short test[5];
    test[0] = 5;
    test[1] = 55;
    test[2] = 555;
    test[3] = 5555;
    test[4] = 55555;

    copingMemory(test);
    printingMemory();
}
My output is:
0
0
0
0
0
0
The line memcpy(&buffer[5], pixels, 5*sizeof(unsigned short)); copies to the start of the 6th element of buffer, i.e. the first element outside the buffer. Replace it with memcpy(&buffer[0], pixels, 5*sizeof(unsigned short)); so you copy to the 1st element instead.
I have a bitset in which I need to store a number of randomly generated integers (their bit representations, of course). The thing is that I am confused about how to do that.
For example, suppose I generate the unsigned integers 8, 15, 20, one at a time. How can I store the most recently generated integer in my existing bitset?
Say I start by generating 8 and store it in the bitset, then I generate 15 and store it in the bitset.
I don't know, or don't understand, how to store those values within the bitset.
Note: I know the size of the bitset in advance; it is based on the number of integers I am going to generate, which I also know. So, in the end, what I need is a bitset whose bits match the bits of all the generated integers.
I'll appreciate your help.
How can I store the recently generated integer in my existing bit set.
You can generate a temporary bitset from the integer and then assign values between the two bitsets.
Example program:
#include <iostream>
#include <bitset>
#include <cstdlib>
int main()
{
    const int size = sizeof(int)*8;
    std::bitset<2*size> res;

    std::bitset<size> res1(rand());
    std::bitset<size> res2(rand());

    for ( size_t i = 0; i < size; ++i )
    {
        res[i] = res1[i];
        res[size+i] = res2[i];
    }

    std::cout << "res1: " << res1 << std::endl;
    std::cout << "res2: " << res2 << std::endl;
    std::cout << "res: " << res << std::endl;
    return 0;
}
Output:
res1: 01101011100010110100010101100111
res2: 00110010011110110010001111000110
res: 0011001001111011001000111100011001101011100010110100010101100111
Update
A function to set the bitset values given an integer can be used to avoid the cost of creating temporary bitsets.
#include <iostream>
#include <bitset>
#include <cstdlib>
#include <climits>
const int size = sizeof(int)*8;

void setBitsetValue(std::bitset<2*size>& res,
                    int num,
                    size_t bitsetIndex,
                    size_t numIndex)
{
    if ( numIndex < size )
    {
        res[bitsetIndex] = (num >> numIndex) & 0x1;
        setBitsetValue(res, num, bitsetIndex+1, numIndex+1);
    }
}

int main()
{
    std::bitset<2*size> res;

    int num1 = rand()%INT_MAX;
    int num2 = rand()%INT_MAX;

    std::bitset<size> res1(num1);
    std::bitset<size> res2(num2);

    std::cout << "res1: " << res1 << std::endl;
    std::cout << "res2: " << res2 << std::endl;

    setBitsetValue(res, num1, 0, 0);
    setBitsetValue(res, num2, size, 0);
    std::cout << "res: " << res << std::endl;

    return 0;
}
I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take so long but I'm curious as to if there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>
int main() {
    int a = -58, b = a>>3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, std::format is based on. {fmt} also provides the print function that makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret it as a char array first; then you can print the contents of that array as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin like std::hex or std::dec, but it's not hard to output a number in binary yourself: output the left-most bit by masking all the others, left-shift, and repeat for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
Similar to what has already been posted, this just uses bit-shift and mask to get each bit; being a template, it is usable for any type. (CHAR_BIT from <climits> gives the number of bits in a byte, so there is no need to hard-code 8.)
#include <iostream>
#include <climits>

template<typename T>
void printBin(const T& t) {
    size_t nBytes = sizeof(T);
    const char* rawPtr = reinterpret_cast<const char*>(&t);
    for (size_t byte = 0; byte < nBytes; byte++) {
        for (size_t bit = 0; bit < CHAR_BIT; bit++) {
            std::cout << ((rawPtr[byte] >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main(void) {
    for (int i = 0; i < 50; i++) {
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function (needs <bitset> and <sstream>):
#include <bitset>
#include <sstream>
#include <string>

template<typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * 8>(x);
    return ss.str();
}
Usage:
int main() {
    uint16_t x = 8;
    std::cout << toBinaryString(x); // prints 0000000000001000
}
This works with all kinds of integers.
#include <iostream>
#include <cmath> // in order to use pow() function
using namespace std;
string show_binary(unsigned int u, int num_of_bits);
int main()
{
    cout << show_binary(128, 8) << endl;  // should print 10000000
    cout << show_binary(128, 5) << endl;  // should print 00000
    cout << show_binary(128, 10) << endl; // should print 0010000000
    return 0;
}
string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    u %= (unsigned int) pow(2, num_of_bits); // keep only the low num_of_bits bits
    // t starts at the value of the highest requested bit and steps down through the powers of 2
    for (int t = (int) pow(2, num_of_bits - 1); t > 0; t /= 2) {
        if (u >= t) { // check if the current bit is set in u
            u -= t;
            a += "1";
        } else {
            a += "0";
        }
    }
    return a; // returns the bit string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T)*CHAR_BIT> _bs;
};

template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
With older C++ versions, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;

template<typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T)*8;
    for (int i = n-1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and is fairly intuitive. It also avoids outputting leading zeros or relying on <bitset>
unsigned r = 58; // the number to print (example value)
std::string s;
do {
    s = std::to_string(r & 1) + s; // prepend the lowest bit
} while (r >>= 1);
std::cout << s; // prints 111010 for 58
You should note however that this solution will increase your runtime, so if you are competing for optimization or not competing at all you should use one of the other solutions on this page.
Here is a way to get at the bit pattern of a number by type-punning (note that this cast is technically undefined behavior in C++; std::memcpy, or std::bit_cast in C++20, is the well-defined alternative):
unsigned int i = *(unsigned int*) &x;
Is this what you're looking for?
std::cout << std::hex << val << std::endl;
I have this array: BYTE set[6] = { 0xA8, 0x12, 0x84, 0x03, 0x00, 0x00 }; and I need to insert the value int Value = 1200; into the last 4 bytes. Practically, I need to convert the int to its byte representation and then write it inside the array.
Is this possible?
I already have the BitConverter::GetBytes function, but that's not enough.
Thank you,
To answer the original question: sure you can, as long as sizeof(int) == 4 and sizeof(BYTE) == 1.
But I'm not sure what you mean by "converting int to hex". If you want a hex string representation, you'll be much better off using one of the standard methods; for example, on the last line below I use std::hex to print numbers as hex.
Here is a solution to what you've been asking for, and a little more (live example: http://codepad.org/rsmzngUL):
#include <iostream>
using namespace std;
int main() {
    const int value = 1200;
    unsigned char set[] = { 0xA8,0x12,0x84,0x03,0x00,0x00 };

    for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
        cout << static_cast<int>(*c) << endl;
    }

    cout << endl << "Putting value into array:" << endl;
    *reinterpret_cast<int*>(&set[2]) = value;
    for (const unsigned char* c = set; c != set + sizeof(set); ++c) {
        cout << static_cast<int>(*c) << endl;
    }

    cout << endl << "Printing int's bytes one by one: " << endl;
    for (int byteNumber = 0; byteNumber != sizeof(int); ++byteNumber) {
        const unsigned char oneByte = reinterpret_cast<const unsigned char*>(&value)[byteNumber];
        cout << static_cast<int>(oneByte) << endl;
    }

    cout << endl << "Printing value as hex: " << hex << value << std::endl;
}
UPD: From comments to your question:
1. If you just need to get the separate digits of the number into separate bytes, that's a different story.
2. Little vs. big endianness matters as well; I did not account for that in my answer.
Did you mean this?
#include <stdio.h>

#define BYTE unsigned char

int main ( void )
{
    /* one extra byte so sprintf's terminating '\0' stays in bounds */
    BYTE set[7] = { 0xA8, 0x12, 0x84, 0x03, 0x00, 0x00, 0x00 } ;
    sprintf ( (char *) &set[2] , "%d" , 1200 ) ;
    printf ( "\n%c%c%c%c", set[2], set[3], set[4], set[5] ) ;
    return 0 ;
}
output :
1200