Qt: Display byte from QByteArray - C++

This might be a stupid question, but I can't seem to find out how to display a byte from a QByteArray as "01011000", for example.

That's because such a function is outside the scope of QByteArray, which is a simple byte container. Instead, you need to get the specific byte (as a char) and print its individual bits. For instance, try this (magic):
char myByte = myByteArray.at(0);
for (int i = 7; i >= 0; --i) {
    std::cout << ((myByte >> i) & 1);
}
Assuming that your machine has 8-bit bytes (which is not as bold a statement as it would have been 20 years ago).
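Alternatively, Qt itself can do the formatting; a minimal sketch (again assuming an 8-bit byte, and #include <QDebug> for the printout):
QString bits = QString::number(static_cast<quint8>(myByteArray.at(0)), 2)
                   .rightJustified(8, '0');
qDebug() << bits;  // e.g. "01011000"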

Related

Convert specific bits of a QByteArray to int, float, double, etc.

I have a QByteArray; inside it are multiple values of different data types that I want to extract. The difficulty is that the values have a defined length of x bits and a defined start position (also in bits).
E.g.: an int8 is stored inside the QByteArray from bit no. 4 (of the first byte) to bit no. 12 (inside the second byte).
On the Qt wiki site I found a method to disassemble a QByteArray into a bit array: https://wiki.qt.io/Working_with_Raw_Data
So I'm cutting my bits out of the byte array like this:
QByteArray MyCutter::cutMessage(qint32 bitStart, qint32 bitLength)
{
    qDebug() << mBuffer;
    QBitArray bits(mBufferLen * 8);
    for (quint32 i = 0; i < mBufferLen; ++i)
    {
        for (quint32 b = 0; b < 8; b++)
        {
            bits.setBit(i * 8 + b, mBuffer.at(i) & (1 << (7 - b)));
        }
    }
    qDebug() << bits;
    QByteArray bytes;
    // round up to the next n*8 length of a byte: (length + y) = x*8
    qint32 bitLengthWithZeros = (bitLength + (8 - 1)) & ~(8 - 1);
    bytes.resize(bitLengthWithZeros / 8);
    bytes.fill(0);
    for (qint32 b = bitStart, c = 0; b < (bitStart + bitLength); b++, c++)
    {
        bytes[c / 8] = (bytes.at(c / 8) | ((bits.testBit(b) ? 1 : 0) << (7 - b % 8)));
    }
    qDebug() << bytes;
    return bytes.data();
}
This works fine so far - I can cut my QByteArray down to any other.
The problem is converting the values into int/float/double, and more specifically into signed values.
To convert, I've tried two things:
QByteArray::toHex().toLong(nullptr, 16) ... toLong/toLongLong, etc. This works, but only ever returns the UNSIGNED value of the QByteArray. If I cut mBuffer with MyCutter::cutMessage, as in the example above, from the 4th bit to the 12th (which is also 0xFF), I get 255 as a signed int! And that's wrong?
On the other side, I've tried converting with QDataStream:
QDataStream stream(mBuffer);
stream.setByteOrder(QDataStream::LittleEndian);
qint64 result;
stream >> result;
qDebug() << QString::number(result,16);
qDebug() << QString::number(result);
mBuffer holds the raw data. If I put "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF" inside mBuffer, the printed value is -1, which is correct.
QByteArray h = cutMessage(0,8);
qDebug() << h.toHex().toLongLong(nullptr, 16);
QDataStream stream2(h);
stream2.setByteOrder(QDataStream::LittleEndian);
qint64 result2;
stream2 >> result2;
qDebug() << QString::number(result2,16);
qDebug() << QString::number(result2);
Converting the cut message with the code block above always returns "0".
So without cutting, the interpretation of the whole QByteArray is correct, but if I cut something off, I always get the unsigned value, or "0".
Somehow I'm losing some information during the transformation into QBitArray and back.
Hopefully my explanations are understandable ;)
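One thing worth noting: toHex().toLongLong() can only ever give the raw unsigned bit pattern, because the cut value no longer carries its original width. A value that occupies only bitLength bits has to be sign-extended by hand before it can be read as signed; a minimal sketch (the helper and its names are illustrative, not from the code above):
qint64 signExtend(quint64 value, int bitLength)
{
    // If the sign bit (the top bit of the bitLength-wide field) is set,
    // fill every bit above it with ones.
    if (bitLength < 64 && (value & (Q_UINT64_C(1) << (bitLength - 1))))
        value |= ~((Q_UINT64_C(1) << bitLength) - 1);
    return static_cast<qint64>(value);
}
// e.g. signExtend(0xFF, 8) == -1, while 0xFF taken as-is is 255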

Compare uint32_t with a loaded char[] from file C++

I have a binary file whose whole contents I load into an unsigned char[], and a variable const uint32_t LITTLE_ENDIAN_ID = 0x49696949;
I need to compare first four characters from loaded char[] with given uint32_t.
Is that possible somehow?
If buff is your unsigned char[] buffer, you can do:
memcmp((unsigned char*)&LITTLE_ENDIAN_ID, buff, 4) == 0
memcmp is declared in string.h (cstring in C++).
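For context, a minimal self-contained version of that check (the file name is a placeholder):
#include <cstdint>
#include <cstring>
#include <fstream>

int main()
{
    const uint32_t LITTLE_ENDIAN_ID = 0x49696949;
    unsigned char buff[4] = {0};

    std::ifstream in("data.bin", std::ios::binary);  // placeholder file name
    in.read(reinterpret_cast<char*>(buff), 4);

    // Compares the first four file bytes against the in-memory
    // representation of LITTLE_ENDIAN_ID on this machine.
    if (std::memcmp(&LITTLE_ENDIAN_ID, buff, 4) == 0) {
        // IDs match
    }
}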
Yes, it's absolutely possible, but your question is underspecified. What you want to do is take the first 4 characters of your character array and convert them into a uint32_t; the obvious question is which character corresponds to which byte of the 32-bit int. That is essentially asking whether the bytes are stored in little-endian or big-endian order. Though now that I look at your LITTLE_ENDIAN_ID, I realize it doesn't matter here - its byte sequence is (oddly) the same forwards and backwards.
Anyhow, what you want is either:
unsigned char text[] = ...;
uint32_t x = (uint32_t(text[0]) << 24) | (uint32_t(text[1]) << 16) |
             (uint32_t(text[2]) << 8)  |  uint32_t(text[3]);
if (x == LITTLE_ENDIAN_ID)
    // do something
Or the same thing, but with:
uint32_t x = (uint32_t(text[3]) << 24) | (uint32_t(text[2]) << 16) |
             (uint32_t(text[1]) << 8)  |  uint32_t(text[0]);
Alternatively we could do something a little more unusual like
union Converter {
    uint32_t int_value;
    unsigned char characters[4];
};

unsigned char text[] = ...;
Converter x;
for (int i = 0; i < 4; i++)
    x.characters[i] = text[i];
if (x.int_value == LITTLE_ENDIAN_ID)
    // do something
This is probably closer to what you want if you are actually looking to test the endianness of the current system.
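One caveat: reading a union member other than the one last written is type punning, which C++ formally leaves undefined (C is more permissive). A memcpy-based sketch of the same idea is well-defined in both languages:
#include <cstdint>
#include <cstring>

uint32_t load_u32(const unsigned char *p)
{
    uint32_t value;
    std::memcpy(&value, p, sizeof value);  // native byte order, like the union
    return value;
}
// usage: if (load_u32(text) == LITTLE_ENDIAN_ID) ...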

Need to change a string to binary, then to ASCII

I have some data coming in from a sensor. The data is in the range of a signed int, 16 bits or so. I need to send the data out via Bluetooth.
Problem:
The data is -1564, let's say. The Bluetooth transmits -, 1, 5, 6, then 4. This is inefficient. I can process the data on the PC later; I just need the transmission frequency to go up.
My Idea/ Solution:
Have it convert to binary, then to ASCII for output. I can convert the ASCII later in processing. I have the binary part (found on StackOverflow) here:
inline void printbincharpad(char c)
{
    for (int i = 7; i >= 0; --i)
    {
        putchar( (c & (1 << i)) ? '1' : '0' );
    }
}
This outputs in binary very well. But getting the Bluetooth to transmit, say, 24 spits out 1, 1, 0, 0, then 0. That is in fact slower than just sending 2, then 4.
Say I have 65062 coming out of the sensor, 5 characters to transmit. That is 1111111000100110 in binary, 16 characters. In ASCII it's þ& (yes, the character set here is small, I know, but it's unique), just 2 bytes! In hex it's FE26, 4 bytes. A savings of 3 bytes vs. decimal, 14 vs. binary, and 2 vs. hex. Obviously, I want ASCII sent out here.
My Question:
So, how do I convert to ASCII if given a binary input?
I want to send that, the ASCII
Hedging:
Yes, I code in MATLAB more than C++. This is for a microcontroller. The baud rate is 115200. No, I don't know how the above code works, and I don't know where putchar's documentation is found. If you know of a library that I need to run this, please tell me, as I do not know.
Thank you for any and all help or advice, I do appreciate it.
EDIT: In response to some of the comments: it's two 16-bit registers I am reading from, so data loss is impossible.
putchar writes to the standard output, which is usually the console.
You may take a look at the other output functions in the cstdio (or stdio.h) library.
Anyways, using putchar(), here's one way to achieve what you're asking for:
void print_bytes (int n)
{
    char *p = (char *) &n;
    for (size_t i = 0; i < sizeof (n); ++i) {
        putchar (p[i]);
    }
}
If you know for certain that you only want 16 bits from the integer, you can simplify it like this:
void print_bytes (int n)
{
    char b = n & 0xff;
    char a = (n >> 8) & 0xff;
    putchar (a);
    putchar (b);
}
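On the PC side the value can then be rebuilt from the two received bytes; a small sketch (assuming the high byte arrives first, as print_bytes sends it):
#include <stdint.h>

// Rebuild the signed 16-bit value from two received bytes,
// high byte first, matching print_bytes above.
int16_t read16(uint8_t high, uint8_t low)
{
    return (int16_t)(((uint16_t)high << 8) | (uint16_t)low);
}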
Looks like when you say ASCII, you mean base 256. You can search for solutions for converting from base 10 to base 256.
Here is a C program that converts a string containing 65062 (5 characters) into a string of 2 characters:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    char* inputString = "65062";
    int input;
    char* tmpString;
    char* outString;
    int Counter;

    input = atoi(inputString);
    outString = malloc(sizeof(input) + 1);
    tmpString = (char*)&input;  /* view the int's bytes as chars */
    for (Counter = 0; Counter < sizeof(input); Counter++) {
        outString[Counter] = tmpString[Counter];
    }
    outString[sizeof(input)] = '\0';
    printf("outString = %s\n", outString);
    free(outString);
    return 0;
}

Stringstream write() problem while "overwriting"

Currently, I have a stringstream called Data. I am seeking to the beginning of the stringstream by using:
Data.seekp(0, std::ios::beg);
Then, I try writing 2 integers to the first 8 bytes of the stringstream (previously, the first 8 bytes were set to 0):
Data.write(reinterpret_cast<char*>(&dataLength),sizeof(int));
Data.write(reinterpret_cast<char*>(&dataFlags),sizeof(int));
Using the Visual C++ debugger with a breakpoint set, I can see that dataLength is equal to 12 and dataFlags is equal to 0, so it should be writing 12 and 0 respectively.
Writing the 2 integers seemed to have no effect. I then print my stringstream data using the following code:
char* b = const_cast<char*>(Data.str().c_str());
for (int i = 0; i < dataLength; i++)
{
    printf("%02X ", (unsigned char)b[i]);
}
I can see that the first 8 bytes of my data are still 0's even though I just overwrote the first 12 bytes with two integers (where the first integer != 0).
Why isn't the data in my stringstream being overwritten properly?
char* b = const_cast<char*>(Data.str().c_str());
Data.str() is a temporary, which is destroyed at the end of this statement; the value of that temporary's c_str() can only be used while the temporary is alive (and only while you haven't modified it; the invalidation rules for std::string are complex). You can never use b without undefined behavior.
std::string b = Data.str();
for (std::size_t i = 0; i < b.size(); i++) {
    printf("%02X ", (unsigned char) b[i]);
}
I assume you really want to write the string "12" into the stringstream. You can't convert the int 12 to a char* by merely casting the int to a char*. That is, I believe this part of your code might be incorrect:
reinterpret_cast<char*>(&dataLength)
If dataLength is really an int, this is not the correct way to turn it into a char*. Perhaps this:
Data << dataLength << dataFlags;
I hope I haven't totally misunderstood what you're trying to achieve.
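For reference, a minimal self-contained sketch of the binary round trip, with the stream opened for both reading and writing and the buffer copied out before printing (the values are the ones from the question):
#include <cstdio>
#include <sstream>
#include <string>

int main()
{
    std::stringstream Data(std::string(8, '\0'),
                           std::ios::in | std::ios::out | std::ios::binary);
    int dataLength = 12, dataFlags = 0;

    Data.seekp(0, std::ios::beg);
    Data.write(reinterpret_cast<char*>(&dataLength), sizeof(int));
    Data.write(reinterpret_cast<char*>(&dataFlags), sizeof(int));

    std::string b = Data.str();  // a copy, so no dangling pointer
    for (std::size_t i = 0; i < b.size(); i++)
        printf("%02X ", (unsigned char) b[i]);
}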

Bitwise operators and converting an int to 2 bytes and back again

My background is PHP, so entering the world of low-level stuff like chars being bytes, bytes being bits, bits being binary values, and so on is taking some time to get the hang of.
What I am trying to do here is send some values from an Arduino board to openFrameworks (both are C++).
What this script currently does (and it works well for one sensor, I might add) when asked for the data to be sent is:
int value_01 = analogRead(0); // which outputs between 0-1024
unsigned char val1;
unsigned char val2;
//some Complicated bitshift operation
val1 = value_01 &0xFF;
val2 = (value_01 >> 8) &0xFF;
//send both bytes
Serial.print(val1, BYTE);
Serial.print(val2, BYTE);
Apparently this is the most reliable way of getting the data across.
So now that it is sent via the serial port, the bytes are added to a char string and converted back by:
int num = ( (unsigned char)bytesReadString[1] << 8 | (unsigned char)bytesReadString[0] );
So to recap, I'm trying to get 4 sensors' worth of data (which I am assuming will be 8 of those Serial.prints?) and to have int num_01 - num_04... at the end of it all.
I'm assuming this (as with most things) might be quite easy for someone with experience in these concepts.
Write a function to abstract sending the data (I've gotten rid of your temporary variables because they don't add much value):
void send16(int value)
{
    // send both bytes
    Serial.print(value & 0xFF, BYTE);
    Serial.print((value >> 8) & 0xFF, BYTE);
}
Now you can easily send any data you want:
send16(analogRead(0));
send16(analogRead(1));
...
Just send them one after the other.
Note that the serial driver lets you send one byte (8 bits) at a time. A value between 0 and 1023 inclusive (which looks like what you're getting) fits in 10 bits. So 1 byte is not enough. 2 bytes, i.e. 16 bits, are enough (there is some extra space, but unless transfer speed is an issue, you don't need to worry about this wasted space).
So, the first two bytes can carry the data for your first sensor. The next two bytes carry the data for the second sensor, the next two bytes for the third sensor, and the last two bytes for the last sensor.
I suggest you use the function that R Samuel Klatchko suggested on the sending side, and hopefully you can work out what you need to do on the receiving side.
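For instance, a sketch of the receiving side (the framing is an assumption - eight bytes per cycle, low byte first, matching send16 above):
// Decode four 16-bit sensor values from an 8-byte buffer,
// low byte first, matching send16 above.
void decode4(const unsigned char *buf, int num[4])
{
    for (int s = 0; s < 4; s++)
        num[s] = buf[2 * s] | (buf[2 * s + 1] << 8);
}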
int num = ( (unsigned char)bytesReadString[1] << 8 |
            (unsigned char)bytesReadString[0] );
Careful with that code, though. It is tempting to reason that an 8-bit unsigned char loses its high bits when shifted:
11111111 << 3 == 11111000
11111111 << 8 == 00000000
In C and C++, however, the operands of << are promoted to int before the shift, so (unsigned char)x << 8 keeps the bits after all. Even so, relying on implicit promotions hides the intent; it is clearer to spell the conversions out:
typedef unsigned uint;
typedef unsigned char uchar;
uint num = (static_cast<uint>(static_cast<uchar>(bytesReadString[1])) << 8) |
           static_cast<uint>(static_cast<uchar>(bytesReadString[0]));
You might get the same result from:
typedef unsigned short ushort;
uint num = *reinterpret_cast<ushort *>(bytesReadString);
If the byte ordering is OK, that is. It should work on little-endian machines (x86 or x64), but not on big-endian ones (PPC, SPARC, Alpha, etc.).
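Bear in mind that the reinterpret_cast version also assumes suitable alignment and technically violates strict aliasing; a memcpy-based sketch sidesteps both while still depending on the machine's byte order:
#include <cstring>

unsigned short num;
std::memcpy(&num, bytesReadString, sizeof num);  // byte-order dependent, like the cast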
To generalise the "Send" code a bit:
void SendBuff(const void *pBuff, size_t nBytes)
{
    const char *p = reinterpret_cast<const char *>(pBuff);
    for (size_t i = 0; i < nBytes; i++)
        Serial.print(p[i], BYTE);
}
template <typename T>
void Send(const T &t)
{
    SendBuff(&t, sizeof(T));
}
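Usage would then look like this (keeping in mind that Send transmits sizeof(int) bytes per reading, in the board's native byte order, so the receiver has to agree on both):
Send(analogRead(0));   // sends sizeof(int) bytes for sensor 1
Send(analogRead(1));   // ... and so on for the other sensors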