Reading more than 2048 bytes from QLocalSocket - c++

I have a problem reading more than 2048 bytes from a QLocalSocket.
This is my server-side code:
clientConnection->flush(); // <-- clientConnection is a QLocalSocket
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_0);
out << (quint16)message.size() << message; // <--- message is a QString
qint64 c = clientConnection->write(block);
clientConnection->waitForBytesWritten();
if(c == -1)
    qDebug() << "ERROR:" << clientConnection->errorString();
clientConnection->flush();
And this is how I read the data in my client:
QDataStream in(sock); // <--- sock is a QLocalSocket
in.setVersion(QDataStream::Qt_5_0);
while(sock->bytesAvailable() < (int)sizeof(quint16)){
    sock->waitForReadyRead();
}
in >> bytes_to_read; // <--- quint16
while(sock->bytesAvailable() < (int)bytes_to_read){
    sock->waitForReadyRead();
}
in >> received_message;
The client code is connected to the readyRead signal and it's disconnected after the first call to the slot.
Why am I only able to read 2048 bytes?
==EDIT==
After peppe's reply I updated my code. Here is how it looks now:
server side code:
clientConnection->flush();
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_0);
out << (quint16)0;
out << message;
out.device()->seek(0);
out << (quint16)(block.size() - sizeof(quint16));
qDebug() << "Bytes client should read" << (quint16)(block.size() - sizeof(quint16));
qint64 c = clientConnection->write(block);
clientConnection->waitForBytesWritten();
client side code:
QDataStream in(sock);
in.setVersion(QDataStream::Qt_5_0);
while(sock->bytesAvailable() < (int)sizeof(quint16)){
    sock->waitForReadyRead();
}
quint16 btr;
in >> btr;
qDebug() << "Need to read" << btr << "and we have" << sock->bytesAvailable() << "in sock";
while(sock->bytesAvailable() < btr){
    sock->waitForReadyRead();
}
in >> received_message;
qDebug() << received_message;
I'm still not able to read more data.

out.setVersion(QDataStream::Qt_5_0);
out << (quint16)message.size() << message; // <--- message is a QString
This is wrong. The serialized length of "message" will be message.size() * 2 + 4 bytes, as QString prepends its own length as a quint32, and each QString character is actually a UTF-16 code unit, so it requires 2 bytes. Expecting only message.size() bytes on the reader side will cause QDataStream to short-read, which is undefined behaviour.
Please do check the size of "block" after those lines -- it'll be 2 + 4 + 2 * message.size() bytes. So you need to fix the math. You can safely assume it won't change, as the format of serialization of Qt datatypes is known and documented.
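A quick way to convince yourself of the math is to check block.size() after serializing just the QString (a small sketch; message is the QString from the question):
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_5_0);
out << message; // QString serializes as a quint32 byte count followed by UTF-16 data
// For a non-null string this always holds:
Q_ASSERT(block.size() == int(sizeof(quint32)) + 2 * message.size());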
I do recognize the "design pattern" of prepending the length, though. It probably comes from the fortune network example shipped with Qt. The notable difference there is that the example doesn't prepend the length of the string in UTF-16 code units (which is pointless, as it's not how it's going to be serialized) -- it prepends the length of the serialized QString. Look at what it does:
out << (quint16)0;
out << fortunes.at(qrand() % fortunes.size());
out.device()->seek(0);
out << (quint16)(block.size() - sizeof(quint16));
First it reserves some space in the output, by writing a 0. Then it serializes a QString. Then it backtracks and overwrites the 0 with the length of the serialized QString -- which at this point is exactly block.size() minus the prepended integer stating the length (and we know that the serialized length of a quint16 is sizeof(quint16)).
To repeat myself, there are actually two reasons why that example was coded that way, and they're somewhat related:
QDataStream has no means to recover from short reads: all the data it needs to successfully decode an object must be available when you use the operator>> to deserialize the object. Therefore, you cannot use it before being sure that all data was received. Which brings us to:
TCP has no built-in mechanism for separating data into "records". You can't just send some bytes followed by a "record marker" which tells the receiver that he has received all the data pertinent to a record. What TCP provides is a raw stream of bytes. At most, you can (half-)close the connection to signal the other peer that the transmission is over.
1+2 imply that you must use some other mechanism to know (on the receiver side) if you already have all the data you need or you must wait for some more. For instance, you can introduce in-band markers like \r\n (like IRC or - up to a certain degree - HTTP do).
The solution in the fortune example is prepending to the "actual" data (the serialized QString with the fortune message) the length, in bytes, of that data; then it sends the length (as a 16 bit integer) followed by the data itself.
The receiver first reads the length; then it reads up that many bytes, then it knows it can decode the fortune. If there's not enough data available (both for the length - i.e. you received less than 2 bytes - and the payload itself) the client simply does nothing and waits for more.
Note that:
the design ain't new: it's what almost all protocols do. In the "standard" TCP/IP stack, TCP, IP, Ethernet and so on all have a field in their "headers" which specifies the length of the payload (or of the whole "record");
the transmission of the "length" uses a 16-bit unsigned integer sent in a specific byte order: it's not memcpy()d into the buffer, but QDataStream is used on it to both store it and read it back. Although it may seem trivial, this actually completes the definition of the protocol you're using.
if QDataStream had been able to recover from short reads (for instance by throwing an exception and leaving the data in the device), you would not have needed to send the length of the payload, since QDataStream already sends the length of the string (as a 32-bit unsigned big-endian integer) followed by the UTF-16 chars.
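For reference, a minimal sketch of a readyRead slot following this pattern could look like the one below. It mirrors the fortune client; the quint16 member block_size (initialized to 0) and the slot name readMessage are assumptions, while sock and received_message are the names from the question:
void Client::readMessage() {
    QDataStream in(sock);
    in.setVersion(QDataStream::Qt_5_0);

    if (block_size == 0) {
        // Wait until the 2-byte length prefix is fully available.
        if (sock->bytesAvailable() < (int)sizeof(quint16))
            return;
        in >> block_size;
    }

    // Wait until the whole serialized QString has arrived.
    if (sock->bytesAvailable() < block_size)
        return;

    in >> received_message;
    block_size = 0; // ready for the next message
    qDebug() << received_message;
}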

Why do we use char* as a buffer, why not a string in boost::asio?

Recently I have been reading a book about network programming with boost::asio, and from what I have understood, buffer is like any other memory space in address space of the program and we assign that to the socket so that we can do I/O operations.
The first thing I don't understand is, why do we need a separate thing called "buffer"? Why not just write the content in a string, and then when we receive put it in the string?
The second thing I don't understand is, why is char* or char[] used as a buffer? Why not int[], which can store the ASCII values of everything that comes through? It's just memory, after all. I feel like I'm missing something here, please help me out.
Thirdly, why can't C++ std::string be used as a buffer? Every time, they have to be converted to C strings.
I think both of the other answers, by arguing against string or int[], miss the general point:
Boost Doesn't Make That Choice For You
In other words
You Are Free To Use All Of These To Your Taste
Demo Live On Coliru:
#include <boost/asio.hpp>
#include <iostream>
template <typename Buffer>
size_t test_request(Buffer response) {
    using boost::asio::ip::tcp;
    boost::asio::io_context io;

    tcp::socket s(io);
    s.connect({ boost::asio::ip::address_v4{{1,1,1,1}}, 80 });

    write(s, boost::asio::buffer("GET / HTTP/1.1\r\n"
                                 "Host: 1.1.1.1\r\n"
                                 "Referer: stoackoverflow.com\r\n"
                                 "\r\n"));

    boost::system::error_code ec;
    auto bytes = read(s, response, ec);

    std::cerr << "test_request: " << ec.message() << " at " << bytes << " bytes\n";
    return bytes;
}

#include <iomanip>
int main() {
    std::string s;
    std::vector<char> vec(4096);
    int ints[1024];

    {
        auto n = test_request(boost::asio::buffer(vec));
        vec.resize(n);
    }

    // or use the ints[]
    test_request(boost::asio::buffer(ints));

    // use a dynamic buffer (that grows):
    test_request(boost::asio::dynamic_buffer(s));

    auto report = [](std::string_view sv) {
        std::cout << sv.length() << " bytes\n"
                  << " first: " << std::quoted(sv.substr(0, sv.find_first_of("\r\n"))) << "\n"
                  << " last: " << std::quoted(sv.substr(sv.find_last_of("\r\n", sv.size()-3)+1)) << "\n";
    };

    std::cout << "String response: "; report(s);
    std::cout << "Vector response: "; report({vec.data(), vec.size()});
}
Prints
test_request: End of file at 909 bytes
test_request: End of file at 909 bytes
test_request: End of file at 909 bytes
String response: 909 bytes
first: "HTTP/1.1 301 Moved Permanently"
last: "</html>
"
Vector response: 909 bytes
first: "HTTP/1.1 301 Moved Permanently"
last: "</html>
"
Summary
The whole point is not opinionism on text encodings or whatnot¹.
The whole point is
Don't Pay For What You Don't Use (extra conversions take allocations and just cost performance)
Non-Intrusive Framework (the framework should not dictate what vocabulary types you must use)
¹ (std::string is suitable for, say UTF8 or ASCII7 or indeed binary data - it will handle NUL chars just fine).
why not just write the content in a string, and then when we receive it, put it in the string?
Because Boost ASIO is a library for binary IO, not for textual IO. And std::string is for representing text. Technically, you could use a std::string as a buffer for binary data, but doing so would be unconventional and confusing.
why not int[]
Because the narrow character types are special in the C++ language.
Generally, objects of one type cannot be observed as objects of another type. For example, if you have a short object and a long long object and want to send those over the network, you cannot observe those objects as being (arrays of) int objects, because they aren't int objects. But every object can be "observed" (through reinterpretation) as an array of narrow character objects. That is the unique feature of char, unsigned char and std::byte, which is why they are used as buffers for serialisation. They also have a size of exactly one byte, which is the fundamental storage unit in the C++ memory model.
which can store ASCII value
Which is largely irrelevant in binary IO, since ASCII is a text encoding. It would also be quite wasteful to use 16 bits (at minimum; 32 bits on most systems) to represent ASCII, which is a 7-bit encoding.
What you're missing is the fact that the content of a buffer might not actually represent a string; it's just a block of memory that could represent anything. That should also explain why std::string shouldn't be used as a buffer.
The reason why char is used as a type is that it (usually) has a size of one byte, so the buffer is actually just an array of bytes, and having char as the type allows easy per-byte manipulation of that memory (like, for example, copying a block of memory at a specific byte offset into/out of the buffer, etc).
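As a generic illustration of that last point (not tied to any code above), placing fields at fixed byte offsets is straightforward with a char buffer:
#include <cstdint>
#include <cstring>

void pack_example() {
    char buffer[64] = {};
    std::uint32_t id = 42;
    std::uint16_t port = 8080;

    // Copy fields to fixed byte offsets (host byte order here; a real wire
    // format would also pin down endianness).
    std::memcpy(buffer + 0, &id,   sizeof(id));
    std::memcpy(buffer + 4, &port, sizeof(port));

    // Read a field back out from its offset.
    std::uint16_t port_out;
    std::memcpy(&port_out, buffer + 4, sizeof(port_out));
}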

Qt, client - server relationship

I am a newcomer to network and internet programming, so I apologize if this is a naive question. I do not understand whether there are other ways to send data from a client socket to the server's, except putting the data into a stream with QIODevice::write(QByteArray&). If that is the only way, how should the server recognize what data has actually been sent to it? For example, we may have a QString message as the usual input data, but sometimes also a QString naming the intended receiver of future data. It is possible to describe all the variants, but the slot connected to the readyRead() signal then seems to become enormous.
Finally, is there a way to direct data to specific server functions?
Qt Solutions has a library for building Qt servers easily:
Qt Solutions
And the JSON format is a convenient way to communicate.
You need to define a common data format on both sides (client and server). Before you send a data packet, write the size of the packet into its first four bytes. On the server side, check the size of the data received from the client against those first four bytes, and deserialize the data the same way you serialized it on the client side. I have used this method for a long time and have had no problems with it so far. Here is some sample code:
Client Side:
QBuffer buffer;
buffer.open(QIODevice::ReadWrite);
QDataStream in(&buffer);
in.setVersion(QDataStream::Qt_5_2);
in << int(0);                   // placeholder for the packet size
in << int(3);                   // an int: may be your data type or a command
in << double(4);                // double data
in << QString("asdsdffdggfh");
in << QVariant("");
in << ....                      // any data you can serialize which QDataStream accepts
in.device()->seek(0);           // seek to the packet's first byte
in << buffer.data().size();     // and write the packet size
QByteArray array = buffer.data();
this->socket->write(array);
this->socket->waitForBytesWritten();
Server Side:
QDataStream in(socket);
// define expectedByte outside of this scope, e.g. as a member, initialized to -1:
int expectedByte = -1;

if( expectedByte == -1 && socket->bytesAvailable() >= (int)sizeof(int) )
{
    in >> expectedByte;
}
if( expectedByte == -1 || socket->bytesAvailable() < expectedByte - (int)sizeof(int) ){
    return;
}
// if we reach this point, the packet has been received completely
int commandOrDataType;
in >> commandOrDataType;
double anyDoubleValue;
in >> anyDoubleValue;
QString anyStringValue;
in >> anyStringValue;
QVariant anyVariant;
in >> anyVariant;
// and whatever ...
// do something with the data above
// you must reset expectedByte = -1;
// while expectedByte != -1 no new packet is processed here even though data may keep
// filling the socket buffer, so confirm this at the beginning of this function
expectedByte = -1;
Hope this is helpful! :)

Sending a flexible Amount of Data over Network by using Asio (Boost)

I got a client and a server application which will send each other data by using the Asio (Standalone) library. Both applications consist of two (logical) parts:
A high level part: dealing with complex objects e.g. users, permissions,...
A low level part: sending data over network between client and server
Let's assume the complex objects are already serialized using Protocol Buffers and the low-level part of the application receives the data as std::string from the high-level part. I would like to use this function from Protocol Buffers for this job:
bool SerializeToString(string* output) const;: serializes the message
and stores the bytes in the given string. Note that the bytes are
binary, not text; we only use the string class as a convenient
container.
And say I transfer this data with async_write on the client side:
size_t dataLength = strlen(data);
//writes a certain number of bytes of data to a stream.
asio::async_write(mSocket,
asio::buffer(data, dataLength),
std::bind(&Client::writeCallback, this,
std::placeholders::_1,
std::placeholders::_2));
How can I read this data on the server side? I don't know how much data I will have to read. Therefore this will not work (length is unknown):
asio::async_read(mSocket,
asio::buffer(mResponse, length),
std::bind(&Server::readCallback, this,
std::placeholders::_1,
std::placeholders::_2));
What is the best way to solve this problem? I could think of two solutions:
Append a 'special' character at the end of data and read until I reach this 'end of data signal'. The problem is, what if this character appears in data somehow? I don't know how Protocol Buffers serializes my data.
Send a binary string with size_of_data + data instead of data. But I don't know how to serialize the size in a platform-independent way, add it to the binary data and extract it again.
Edit: Maybe I could use this:
uint64_t length = strlen(data);
uint64_t nwlength = htonl(length);
uint8_t len[8];
len[0] = nwlength >> 56;
len[1] = nwlength >> 48;
len[2] = nwlength >> 40;
len[3] = nwlength >> 32;
len[4] = nwlength >> 24;
len[5] = nwlength >> 16;
len[6] = nwlength >> 8;
len[7] = nwlength >> 0;
std::string test(len);
mRequest = data;
mRequest.insert(0, test);
and send mRequest to the server? Any traps or caveats with this code?
How could I read the length on the server side and the content afterwards?
Maybe like this:
void Server::readHeader(){
    asio::async_read(mSocket,
        asio::buffer(header, HEADER_LENGTH),
        std::bind(&Server::readHeaderCallback, this,
                  std::placeholders::_1,
                  std::placeholders::_2),
        asio::transfer_exactly(HEADER_LENGTH));
}

void Server::readHeaderCallback(const asio::error_code& error,
                                size_t bytes_transferred){
    if(!error && decodeHeader(header, mResponseLength)){
        // reading header finished, now read the content
        readContent();
    }
    else{
        if(error) std::cout << "Read failed: " << error.message() << "\n";
        else std::cout << "decodeHeader failed \n";
    }
}

void Server::readContent(){
    asio::async_read(mSocket,
        asio::buffer(mResponse, mResponseLength),
        std::bind(&Server::readContentCallback, this,
                  std::placeholders::_1,
                  std::placeholders::_2),
        asio::transfer_exactly(mResponseLength));
}

void Server::readContentCallback(const asio::error_code& error,
                                 size_t bytes_transferred){
    if (!error){
        // handle content
    }
    else{
        // #todo remove this cout
        std::cout << "Read failed: " << error.message() << "\n";
    }
}
Please note that I try to use transfer_exactly. Will this work?
When sending variable length messages over a stream-based protocol, there are generally three solutions to indicate message boundaries:
Use a delimiter to specify message boundaries. The async_read_until() operations provide a convenient way to read variable length delimited messages. When using a delimiter, one needs to consider the potential of a delimiter collision, where the delimiter appears within the contents of a message, but does not indicate a boundary. There are various techniques to handle delimiter collisions, such as escape characters or escape sequences.
Use a fixed-length header with a variable-length body protocol. The header will provide meta-information about the message, such as the length of the body. The official Asio chat example demonstrates one way to handle fixed-length header and variable-length body protocols.
If binary data is being sent (as with a binary length header), then one will also need to consider byte-ordering. The htons()/htonl() and ntohs()/ntohl() functions can help with this. For example, consider a protocol that defines the field as two bytes in network byte order (big-endian) and a client that reads the field as a uint16_t: if the value 10 is sent, and a little-endian machine reads it without converting from network order to host order, then the client will read the value as 2560. The Asio chat example avoids handling endianness by encoding the body length as text instead of in a binary form.
Use the connection's end-of-file to indicate the end of a message. While this makes sending and receiving messages easy, it limits the sender to only one message per connection. To send an additional message, one would need to establish another connection.
A few observations about the code:
The Protocol Buffers' SerializeToString() function serializes a message to a binary form. One should avoid using text-based functions, such as strlen(), on the serialized string. For instance, strlen() may incorrectly determine the length, as it will treat the first byte with a value of 0 as the terminating null byte, even if that byte is part of the encoded value.
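In other words, something along these lines avoids the problem (message stands for whatever Protocol Buffers message type is in use):
std::string payload;
message.SerializeToString(&payload);      // binary data; may contain 0x00 bytes
std::size_t length = payload.size();      // correct length of the serialized message
// std::size_t length = strlen(payload.c_str());  // wrong: stops at the first 0 byte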
When providing an explicitly sized buffer to an operation via asio::buffer(buffer, n), the default completion condition of transfer_all will function the same as transfer_exactly(n). As such, the duplicate use of variables can be removed:
asio::async_read(mSocket,
asio::buffer(header, HEADER_LENGTH),
std::bind(&Server::readHeaderCallback, this,
std::placeholders::_1,
std::placeholders::_2));
The htons()/htonl() family of functions supports uint16_t and uint32_t, not uint64_t.
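If a 32-bit length field is large enough for the protocol, one option is to encode the length byte by byte, which avoids both the uint64_t issue and any dependence on host byte order (a sketch; data is the serialized string from the question):
std::uint32_t length = static_cast<std::uint32_t>(data.size());
std::uint8_t header[4];
header[0] = (length >> 24) & 0xFF;  // most significant byte first (network byte order)
header[1] = (length >> 16) & 0xFF;
header[2] = (length >>  8) & 0xFF;
header[3] =  length        & 0xFF;

// Receiver side: reassemble the value regardless of the local endianness.
std::uint32_t decoded = (std::uint32_t(header[0]) << 24) |
                        (std::uint32_t(header[1]) << 16) |
                        (std::uint32_t(header[2]) <<  8) |
                         std::uint32_t(header[3]);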
Asio supports scatter/gather operations, allowing a receive operation to scatter-read into multiple buffers, while transmit operations can gather-write from multiple buffers. As such, one does not necessarily need to have both the fixed-length header and the message body contained within a single buffer.
std::string body_buffer;
body.SerializeToString(&body_buffer);
std::string header_buffer = encode_header(body_buffer.size());
// Use "gather-write" to send both the header and data in a
// single write operation.
std::vector<boost::asio::const_buffer> buffers;
buffers.push_back(boost::asio::buffer(header_buffer));
buffers.push_back(boost::asio::buffer(body_buffer));
boost::asio::write(socket_, buffers);
The client must call
socket.shutdown(asio::ip::tcp::socket::shutdown_both);
socket.close();
On the server side, read until EOF is detected:
std::string receive_complete_message(tcp::socket& sock)
{
    std::string json_msg;
    asio::error_code error;
    char buf[255];

    while (1)
    {
        // read some data, up to the buffer size
        size_t len = sock.read_some(asio::buffer(buf), error);

        // append the received chunk to the total message
        std::string str(buf, len);
        json_msg += str;

        if (error == asio::error::eof)
        {
            // EOF received, the connection was closed by the client
            break;
        }
        else if (error)
        {
            throw asio::system_error(error);
        }
    }
    return json_msg;
}

Retrieve correct data with two consecutive calls to boost::asio::read

I am currently implementing a network protocol with Boost Asio. The domain classes already exist and I am able to
write packets to a std::ostream
and read packets from a std::istream.
A Network Packet contains a Network Packet Header. The header starts with the Packet Length field, which has a size of two bytes (std::uint16_t).
I am using TCP/IPv4 as the transport layer, therefore I try to implement the following:
Read the length of the packet to know its total length. This means reading exactly two bytes from the socket.
Read the rest of the packet. This means reading exactly kActualPacketLength - sizeof(PacketLengthFieldType) bytes from the socket.
Concat both read binary data.
Therefore I need at least two calls to boost::asio::read (I am starting synchronously!).
I am able to read a packet with one call to boost::asio::read if I hard-code the expected length:
Packet const ReadPacketFromSocket() {
    boost::asio::streambuf stream_buffer;
    boost::asio::streambuf::mutable_buffers_type buffer{
        stream_buffer.prepare(Packet::KRecommendedMaximumSize)};

    std::size_t const kBytesTransferred{boost::asio::read(
        this->socket_,
        buffer,
        // TODO: Remove hard-coded value.
        boost::asio::transfer_exactly(21))};

    stream_buffer.commit(kBytesTransferred);

    std::istream input_stream(&stream_buffer);
    PacketReader const kPacketReader{MessageReader::GetInstance()};
    return kPacketReader.Read(input_stream);
}
This reads the complete packet data at once and returns a Packet instance. This works, so the concept is working.
So far so good. Now my problem:
If I make two consecutive calls to boost::asio::read with the same boost::asio::streambuf I can't get it to work.
Here is the code:
Packet const ReadPacketFromSocket() {
    std::uint16_t constexpr kPacketLengthFieldSize{2};

    boost::asio::streambuf stream_buffer;
    boost::asio::streambuf::mutable_buffers_type buffer{
        stream_buffer.prepare(Packet::KRecommendedMaximumSize)};

    std::size_t const kBytesTransferred{boost::asio::read(
        // The stream from which the data is to be read.
        this->socket_,
        // One or more buffers into which the data will be read.
        buffer,
        // The function object to be called to determine whether the read
        // operation is complete.
        boost::asio::transfer_exactly(kPacketLengthFieldSize))};

    // The received data is "committed" (moved) from the output sequence to the
    // input sequence.
    stream_buffer.commit(kBytesTransferred);

    BOOST_LOG_TRIVIAL(debug) << "bytes transferred: " << kBytesTransferred;
    BOOST_LOG_TRIVIAL(debug) << "size of stream_buffer: " << stream_buffer.size();

    std::uint16_t packet_size;
    // This does seem to modify the streambuf!
    std::istream istream(&stream_buffer);
    istream.read(reinterpret_cast<char *>(&packet_size), sizeof(packet_size));

    BOOST_LOG_TRIVIAL(debug) << "size of stream_buffer: " << stream_buffer.size();
    BOOST_LOG_TRIVIAL(debug) << "data of stream_buffer: " << std::to_string(packet_size);

    std::size_t const kBytesTransferred2{
        boost::asio::read(
            this->socket_,
            buffer,
            boost::asio::transfer_exactly(packet_size - kPacketLengthFieldSize))};

    stream_buffer.commit(kBytesTransferred2);

    BOOST_LOG_TRIVIAL(debug) << "bytes transferred: " << kBytesTransferred2;
    BOOST_LOG_TRIVIAL(debug) << "size of stream_buffer: " << stream_buffer.size();

    // Create an input stream with the data from the stream buffer.
    std::istream input_stream(&stream_buffer);
    PacketReader const kPacketReader{MessageReader::GetInstance()};
    return kPacketReader.Read(input_stream);
}
I have the following problems:
Reading the packet length from the boost::asio::streambuf after the first socket read seems to remove the data from the boost::asio::streambuf.
If I use two distinct boost::asio::streambuf instances I do not know how to "concat" / "append" them.
At the end of the day I need a std::istream with the correct data obtained from the socket.
Can someone please guide me in the right direction? I've been trying to make this work for several hours now...
Maybe this approach isn't the best, so I am open to suggestions to improve my design.
Thanks!
I believe the behaviour is by design.
To concatenate the buffers, you can use BufferSequences (using make_buffers) together with buffer iterators, or you can stream the second into the first:
boost::asio::streambuf a, b;
std::ostream as(&a);
as << &b;
Now you can throw away b, as its pending data has been appended to a.
See it Live on Coliru
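Applied to the two-read case from the question, a minimal sketch could look like this (it keeps the question's raw, host-byte-order length field; buffer_copy only copies the bytes out of the streambuf without consuming them):
boost::asio::streambuf header_buf, body_buf;
boost::asio::read(this->socket_, header_buf,
                  boost::asio::transfer_exactly(sizeof(std::uint16_t)));

// Copy the length field out of header_buf without consuming it.
std::uint16_t packet_size = 0;
boost::asio::buffer_copy(boost::asio::buffer(&packet_size, sizeof(packet_size)),
                         header_buf.data());

boost::asio::read(this->socket_, body_buf,
                  boost::asio::transfer_exactly(packet_size - sizeof(std::uint16_t)));

// Append body_buf's pending data to header_buf and parse from a single stream.
std::ostream(&header_buf) << &body_buf;
std::istream input_stream(&header_buf);
// input_stream now contains the length prefix followed by the rest of the packet.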
Before I forget, I want to summarize my current solution, which doesn't use a boost::asio::streambuf, since it seems to be impossible to read from it without modifying it. Instead I use a std::vector<std::uint8_t> (ByteVector) as the data holder for the buffers.
The following source code contains my current solution:
Packet const ReadPacketFromSocket() {
    ByteVector const kPacketLengthData{this->ReadPacketLengthFromSocket()};

    PacketHeader::PacketLengthType kPacketLength{
        static_cast<PacketHeader::PacketLengthType>(
            (kPacketLengthData[1] << 8) | kPacketLengthData[0])};

    ByteVector rest_packet_data(Packet::KRecommendedMaximumSize);
    boost::asio::read(
        this->socket_,
        boost::asio::buffer(rest_packet_data),
        boost::asio::transfer_exactly(
            kPacketLength - sizeof(PacketHeader::PacketLengthType)));

    ByteVector data{
        VectorUtils::GetInstance().Concatenate(
            kPacketLengthData,
            rest_packet_data)};

    // Create an input stream from the vector.
    std::stringstream input_stream;
    input_stream.rdbuf()->pubsetbuf(
        reinterpret_cast<char *>(&data[0]), data.size());

    PacketReader const kPacketReader{MessageReader::GetInstance()};
    return kPacketReader.Read(input_stream);
}

ByteVector ReadPacketLengthFromSocket() {
    ByteVector data_holder(sizeof(PacketHeader::PacketLengthType));
    boost::asio::read(
        this->socket_,
        boost::asio::buffer(data_holder),
        boost::asio::transfer_exactly(sizeof(PacketHeader::PacketLengthType)));
    return data_holder;
}
This works like a charm, I have successfully exchanged packets with messages from my domain model between two processes using this approach.
But: This solution feels wrong, since I have to do lots of conversions. Maybe someone else can provide me with a cleaner approach? What do you think about my solution?

Qt send full byte on serial port

I have this simple code that use QtSerialPort:
char foo[] = {130,50,'\0'};
serial->write(foo);
The result on my serial line is {2, 50}. I think the biggest number I can send is 128 (char goes from -128 to 127). How can I send numbers from 0 to 255? I tried to use unsigned char, but the write() method doesn't work with it. The same problem also appears with QByteArray.
Thank you all.
The QIODevice interface uses plain char for writing, and whether plain char is signed or unsigned is compiler-dependent. See the documentation for details.
qint64 QIODevice::write(const char * data, qint64 maxSize)
Writes at most maxSize bytes of data from data to the device. Returns the number of bytes that were actually written, or -1 if an error occurred.
However, you should not be too concerned, as long as you handle the data properly on the other end. You can still send values greater than 127 as signed chars; they will simply come across as negative values, for instance 0xFF will be -1.
If you apply the same logic in reverse on the receiving end, there should be no problem.
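For example, a minimal illustration of recovering the unsigned value on the receiving side:
char received = -1;                                           // a 0xFF byte read from the port
unsigned char value = static_cast<unsigned char>(received);  // value is 255 again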
However, this does not seem to be your issue, because you do not get the corresponding negative value for 130; instead it gets chopped to 7 bits. Make sure your connection uses 8 data bits.
You can set that explicitly after opening the port with this code:
QSerialPort serialPort;
QString serialPortName = "foo";
serialPort.setPortName(serialPortName);

QTextStream standardOutput(stdout); // used for error reporting below

if (!serialPort.open(QIODevice::WriteOnly)) {
    standardOutput << QObject::tr("Failed to open port %1, error: %2").arg(serialPortName).arg(serialPort.errorString()) << endl;
    return 1;
}
if (!serialPort.setDataBits(QSerialPort::Data8)) {
    standardOutput << QObject::tr("Failed to set 8 data bits for port %1, error: %2").arg(serialPortName).arg(serialPort.errorString()) << endl;
    return 1;
}

// Other setup code here

char foo[] = {130, 50, '\0'};
serialPort.write(foo);
Make sure you've set the serial port to send 8 bits, using QSerialPort::DataBits
The fact that 130 comes through as 2 implies that the most significant bit is being truncated.
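Once the port is set to 8 data bits, one straightforward way to send arbitrary 0-255 values is to build a QByteArray and cast each value to char (a small sketch using the question's serial pointer; the cast only reinterprets the bit pattern, it does not lose data):
QByteArray bytes;
bytes.append(static_cast<char>(130));  // stored as the raw byte 0x82
bytes.append(static_cast<char>(50));
serial->write(bytes);                  // writes exactly two bytes; no '\0' terminator needed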