Qt socket read losing bytes - C++

I'm trying to work with length-preceded TCP messages using Qt. I have the following method:
QByteArray con::read()
{
    QByteArray s;
    s = _pSocket->read(4);
    if (s.length() == 4) {
        int size = char_to_int32(s);
        s = _pSocket->read(size);
    }
    return s;
}
Well, it does not work. It looks like I lose all the data after reading the first 4 bytes: the first read works fine, but read(size) returns nothing. Is there a way to solve this?
The char_to_int32 is:
int char_to_int32(QByteArray s)
{
    int size = 0;
    size |= (s.at(0) << 24);
    size |= (s.at(1) << 16);
    size |= (s.at(2) << 8);
    size |= (s.at(3));
    return size;
}
EDIT :
The sending function (plain C):
int send(int connfd, const unsigned char* message, unsigned int size) {
    int c;
    unsigned char* bytes = (unsigned char*) malloc(4 + size);
    int32_to_char(size, bytes); // converts message size to 4 bytes
    memcpy(bytes + 4, message, size);
    c = write(connfd, bytes, 4 + size);
    free(bytes);
    if (c <= 0)
        return -1;
    else
        return 0;
}
By the way, when I call _pSocket->readAll(), the entire packet is read, including the 4-byte size and the message itself.
EDIT :
void int32_to_char(uint32_t in, char* bytes) {
    bytes[0] = (in >> 24) & 0xFF;
    bytes[1] = (in >> 16) & 0xFF;
    bytes[2] = (in >> 8) & 0xFF;
    bytes[3] = in & 0xFF;
    return;
}

As you are using the QByteArray QIODevice::read(qint64 maxSize) function, you may not be detecting errors correctly:
This function has no way of reporting errors; returning an empty QByteArray() can mean either that no data was currently available for reading, or that an error occurred.
Some things to try:
Use the qint64 QIODevice::read(char* data, qint64 maxSize) overload, which reports errors:
If an error occurs ... this function returns -1.
Call QIODevice::errorString and QAbstractSocket::error to find out what is going wrong.
For bonus points, listen to the QAbstractSocket::error error signal.
If this is a new protocol you are creating, try using QDataStream for serialization; it automatically handles length prefixes and is platform independent (see the sketch after this list). Your char_to_int32 will break if you mix platforms with different endianness, and may break between different OSs or compilers, as int is not guaranteed to be 32 bits (it is only defined as at least 16 bits).
If you can't use QDataStream, at least use the htons, ntohs ... functions.
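For reference, here is a minimal sketch (not the poster's code) of how QDataStream could carry the length prefix for you. It assumes Qt 5.7 or later for read transactions, and that readMessage() is called from a readyRead() handler; the function names are made up for illustration:
#include <QTcpSocket>
#include <QDataStream>
#include <QByteArray>

void sendMessage(QTcpSocket *socket, const QByteArray &payload)
{
    QDataStream out(socket);
    out.setVersion(QDataStream::Qt_5_0);
    out << payload;               // writes a quint32 length followed by the bytes
}

QByteArray readMessage(QTcpSocket *socket)
{
    QDataStream in(socket);
    in.setVersion(QDataStream::Qt_5_0);
    QByteArray payload;
    in.startTransaction();        // buffers until a complete message has arrived
    in >> payload;
    if (!in.commitTransaction())
        return QByteArray();      // incomplete: Qt keeps the partial data for the next readyRead()
    return payload;
}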
Edit
Here is some example code showing hton/ntoh usage. Note that uint32_t and not int is used as it's guaranteed to be 32 bits. I've also used memcpy rather than pointer casts in the encode/decode to prevent aliasing and alignment problems (I've just done a cast in the test function for brevity).
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>

void encode(uint32_t in, char* out)
{
    /* Host to Network long (32 bits) */
    const uint32_t t = htonl(in);
    memcpy(out, &t, sizeof(t));
}

uint32_t decode(char* in)
{
    uint32_t t;
    memcpy(&t, in, sizeof(t));
    /* Network to Host long (32 bits) */
    return ntohl(t);
}

void test(uint32_t v)
{
    char buffer[4];
    printf("Host Input: %08x\n", v);
    encode(v, buffer);
    printf("Network: %08x\n", *((uint32_t*)buffer));
    printf("Host Output: %08x\n\n", decode(buffer));
}

int main(int argc, char** argv)
{
    test(0);
    test(1);
    test(0x55);
    test(0x55000000);
    return 0;
}
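On a little-endian host, this test program would print something like the following (the "Network" lines show the raw buffer reinterpreted as a host integer, so they appear byte-swapped):
Host Input: 00000000
Network: 00000000
Host Output: 00000000

Host Input: 00000001
Network: 01000000
Host Output: 00000001

Host Input: 00000055
Network: 55000000
Host Output: 00000055

Host Input: 55000000
Network: 00000055
Host Output: 55000000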

Related

Specific Parameter Input

I am fairly new to C++, so forgive me for being all over the place with my code, but here is what's going on: I am creating a dynamic link library that will handle the decompression of my game's assets. I am very familiar with lossless binary compression, but here's my problem: I need an argument that can be either "Type A" or "Type B" and nothing else. I am using Visual Studio, so I would like the autocomplete hint to tell me I can use either "A" or "B" as the argument. How would I do this?
//People were telling me to add code for visual so here
static __declspec(dllexport) char* compress(char* buffer, "8bit Int" | "16bit Int" | "32bit Int", int Value)
{
    char* bytes;
    //Enter code to convert integer to bytes
    strcat_s(bytes, sizeof(bytes) + sizeof(buffer), buffer);
    return buffer;
}
Like this?
enum class Integer
{
    UNKNOWN = 0,
    Bit8 = 1,
    Bit16 = 2,
    Bit32 = 3,
};

static __declspec(dllexport) char* compress(
    char* buffer, Integer intType, int Value)
{
    char* bytes;
    switch (intType)
    {
    case Integer::Bit8:
        // 8-bits processing.
        break;
    case Integer::Bit16:
        // 16-bits processing.
        break;
    case Integer::Bit32:
        // 32-bits processing.
        break;
    }
    //Enter code to convert integer to bytes
    strcat_s(bytes, sizeof(bytes) + sizeof(buffer), buffer);
    return buffer;
}
Then you call it this way:
compress(buf, Integer::Bit8, 42);
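As a hedged sketch of what the "convert integer to bytes" placeholder could look like with the enum above (the writeValue name and its out/value parameters are made up for illustration; bytes are written little-endian):
#include <cstddef>
#include <cstdint>

enum class Integer { UNKNOWN = 0, Bit8 = 1, Bit16 = 2, Bit32 = 3 };

// Writes 'value' into 'out' and returns the number of bytes written.
static std::size_t writeValue(unsigned char *out, Integer intType, std::uint32_t value)
{
    switch (intType)
    {
    case Integer::Bit8:
        out[0] = value & 0xFF;
        return 1;
    case Integer::Bit16:
        out[0] = value & 0xFF;
        out[1] = (value >> 8) & 0xFF;
        return 2;
    case Integer::Bit32:
        out[0] = value & 0xFF;
        out[1] = (value >> 8) & 0xFF;
        out[2] = (value >> 16) & 0xFF;
        out[3] = (value >> 24) & 0xFF;
        return 4;
    default:
        return 0;
    }
}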
Does this look appropriate?
__declspec(dllexport) enum intType {
    _8bit, _16bit, _32bit
};
class COMPRESS
{
public:
    char* CreateBuffer(int Size)
    {
        char* buffer = new char[Size];
        return buffer;
    }
    char* BufferWrite(char* Buffer, intType Type, int Value)
    {
        char* bytes;
        switch (Type)
        {
        _8bit:
            {
                bytes = (char*)Value;
            }
        _16bit:
            {
                bytes[0] = Value & 0xff;
                bytes[1] = (Value >> 8) & 0xff;
            }
        _32bit:
            {
                bytes[0] = Value & 0xff;
                bytes[1] = (Value >> 8) & 0xff;
                bytes[2] = (Value >> 16) & 0xff;
                bytes[3] = (Value >> 24) & 0xff;
            }
        }
        strcat_s(Buffer, sizeof(bytes) + sizeof(Buffer), bytes);
        return Buffer;
    }

C++ sending a packet over socket

I'm trying to communicate with a server program I installed. The server sends and receives all data in the form of constructed packets that follow the setup of:
int int int string nullbyte
like this:
little endian signed int -> The size of the ID (4 bytes) + size of Type (4 bytes) + size of Body (minimum 1 for null terminator) + null byte for a minimum of 10 as the value;
little endian signed int -> id
little endian signed int -> packet type
Null-terminated ASCII string -> body
null byte
I've managed to read the packets just fine, but when I try to send the packet with the password, the server completely ignores it, which means the packet is wrong somehow. I construct the packet like this:
void Packet::build(){
    /*
     * Create unsigned char vector to store
     * the data while we build the byte array
     * and create a pointer so the byte array can
     * be modified by the required functions.
     */
    std::vector<unsigned char> packet(m_size);
    unsigned char *ptr = packet.data();
    /*
     * Convert each of the three integers as well
     * as the string into bytes which will be stored
     * back into the memory that ptr points to.
     *
     * Packet data follows format:
     * int32 -> Size of ID + Server Data + body (minimum of 10).
     * int32 -> ID of request. Used to match to response.
     * int32 -> Server Data type. Identifies type of request.
     * String -> Minimum of 1 byte for null terminator.
     * String -> Null terminator.
     */
    storeInt32Le(ptr, m_sizeInPacket);
    storeInt32Le(ptr, m_requestID);
    storeInt32Le(ptr, m_serverData);
    storeStringNt(ptr, m_body);
    /*
     * Store the vector in member variable m_cmdBytes;
     */
    m_cmdBytes = packet;
}
storeInt32Le:
void Packet::storeInt32Le(unsigned char* &buffer, int32_t value) {
    /*
     * Copy the integer to a byte array using
     * bitwise AND with mask to ensure the right
     * bits are copied to each segment then
     * increment the pointer by 4 for the next
     * iteration.
     */
    buffer[0] = value & 0xFF;
    buffer[1] = (value >> 8) & 0xFF;
    buffer[2] = (value >> 16) & 0xFF;
    buffer[3] = (value >> 24) & 0xFF;
    buffer += 4;
}
storeStringNt:
void Packet::storeStringNt(unsigned char* &buffer, const string &s) {
    /*
     * Get the size of the string to be copied
     * then do a memcpy of string char array to
     * the buffer.
     */
    size_t size = s.size() + 1;
    memcpy(buffer, s.c_str(), size);
    buffer += size;
}
And finally, I send it with:
bool Connection::sendCmd(Packet packet) {
    unsigned char *pBytes = packet.bytes().data();
    size_t size = packet.size();
    while (size > 0){
        int val = send(m_socket, pBytes, size, 0);
        if (val <= 0) {
            return false;
        }
        pBytes += val;
        size -= val;
    }
    return true;
}
Packet::bytes() just returns m_cmdBytes
As stated in comments, you are:
making assumptions about the byte size and endianness of your compiler's int data type. Since your protocol requires a very specific byte size and endianness for its integers, you need to force your code to follow those requirements when preparing the packet.
using memcpy() incorrectly. You have the source and destination buffers reversed.
not ensuring that send() is actually sending the full packet correctly.
Try something more like this:
void store_uint32_le(unsigned char* &buffer, uint32_t value)
{
    buffer[0] = value & 0xFF;
    buffer[1] = (value >> 8) & 0xFF;
    buffer[2] = (value >> 16) & 0xFF;
    buffer[3] = (value >> 24) & 0xFF;
    buffer += 4;
}

void store_string_nt(unsigned char* &buffer, const std::string &s)
{
    size_t size = s.size() + 1;
    memcpy(buffer, s.c_str(), size);
    buffer += size;
}

...

std::vector<unsigned char> packet(13 + m_body.size());
unsigned char *ptr = packet.data(); // or = &packet[0] prior to C++11

store_uint32_le(ptr, packet.size() - 4);
store_uint32_le(ptr, m_requestID);
store_uint32_le(ptr, m_serverData);
store_string_nt(ptr, m_body);

...

unsigned char *pBytes = packet.data(); // or = &packet[0] prior to C++11
size_t size = packet.size();
while (size > 0)
{
    int val = send(m_socket, pBytes, size, 0);
    if (val < 0)
    {
        // if your socket is non-blocking, enable this...
        /*
        #ifdef _WINDOWS // check your compiler for specific defines...
        if (WSAGetLastError() == WSAEWOULDBLOCK)
        #else
        if ((errno == EAGAIN) || (errno == EWOULDBLOCK) || (errno == EINTR))
        #endif
            continue;
        */
        return false;
    }
    if (val == 0)
        return false;
    pBytes += val;
    size -= val;
}
return true;
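For symmetry, here is a hedged sketch of the receiving side's counterpart (not part of the original answer): reading a little-endian uint32 back out of a received byte buffer. It assumes <cstdint> is available for uint32_t:
uint32_t load_uint32_le(const unsigned char* &buffer)
{
    uint32_t value = (uint32_t)buffer[0]
                   | ((uint32_t)buffer[1] << 8)
                   | ((uint32_t)buffer[2] << 16)
                   | ((uint32_t)buffer[3] << 24);
    buffer += 4;
    return value;
}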

Atmega8A uart spi eeprom

Everyone, I want to write and store my string in an SPI EEPROM, then read it back from the SPI EEPROM and display it in a terminal through the UART. I already followed the steps in [1]: http://ww1.microchip.com/downloads/en/DeviceDoc/21822E.pdf . But it seems that it can only display one letter. I don't know whether the other letters are saved in the SPI EEPROM or not. I hope someone can help me.
I am using:
chip: ATmega8A
software: AVR Studio 5
terminal: Bray terminal.
#include <avr/io.h>
#include <util/delay.h>

void serial_init(void)
{
    UBRRH = 0x00;
    UBRRL = 95;
    UCSRB = (1 << RXEN) | (1 << TXEN) | (1<<RXCIE);
    UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0)|(1 << UCSZ1);
}

void SPI_MasterInit(void)
{
    DDRB = 0b00101100;
    DDR_SPI = (1<<DD_MOSI)|(1<<DD_SCK)|(1<<DD_SS);
    SPCR = 0b01010000;
    SPSR = 0b00000001;
}

char spi_transfer(volatile char data)
{
    SPDR = data;
    while(!(SPSR & (1<<SPIF)));
    {
    }
    return SPDR;
}

void SPI_MasterTransmit(unsigned long data)
{
    unsigned long address;
    DDR_SPI &= ~(1<<DD_SS);     //ss goes low
    spi_transfer(WREN);         //enable write operation
    DDR_SPI |= (1<<DD_SS);      //ss goes high
    _delay_ms(10);
    DDR_SPI &= ~(1<<DD_SS);     //ss goes low
    spi_transfer(WRITE);        // write data to memory
    spi_transfer(address>>8);   // address MSB first
    spi_transfer(address);
    spi_transfer(data);         // send lsb
    DDR_SPI |= (1<<DD_SS);      //ss goes high
}

int resetEEPROM()
{
    DDR_SPI &= ~(1<<DD_SS);     // Select EEPROM
    spi_transfer(WREN);         // Send WRITE_ENABLE command
    DDR_SPI |= (1<<DD_SS);      // Release EEPROM
    DDR_SPI &= ~(1<<DD_SS);     // Select EEPROM again after WREN cmd
    spi_transfer(WRDI);         // send CHIP_ERASE command
    DDR_SPI |= (1<<DD_SS);      // Release EEPROM
    return 0;
} // END eraseEEPROM()

unsigned long SPI_MasterReceive(unsigned long address) // receive data / read address
{
    unsigned long data;
    DDR_SPI &= ~(1<<DD_SS);     //ss goes low
    spi_transfer(READ);         //enable write operation
    spi_transfer(address>>8);   // address MSB first
    spi_transfer(address);
    data = spi_transfer(0xff);
    DDR_SPI |= (1<<DD_SS);      //goes high
    return data;
}

int main(void)
{
    long int data;
    unsigned long address;
    serial_init();
    SPI_MasterInit();
    resetEEPROM();
    data = Usart_Receive();
    while (1)
    {
        if (Usart_Receive() == '.')
        {
            USART_Print("\r\nStore\r\n");
            SPI_MasterTransmit(data);   //store in spi eeprom
        }
        if (Usart_Receive() == '>')
        {
            USART_Print("\nout \r\n");
            data = SPI_MasterReceive(address);   //read data from the memory
            Usart_Transmit(data);
        }
    }
    return 0;
}
There is a way to write more than one byte to the EEPROM at once, but your code does not do that. Instead, you are writing one byte per write operation, and always at the same address. You are overwriting any previous bytes with each new one.
If you want to store more than one byte, you need to change the address as you write, or change the way you are writing so that you write more than one byte at a time. (Note that you can only write multiple bytes in one operation if they are in the same page of EEPROM memory.)
Perhaps a circular buffer?
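As a minimal, untested sketch of the "change the address as you write" idea, reusing spi_transfer() and the WREN/WRITE/DD_SS definitions from the question's code (note that real code would normally toggle the SS pin via the PORT register rather than the DDR register); the function names, the 5 ms write delay, and the assumption that the bytes stay within one EEPROM page are mine:
#include <stdint.h>
#include <util/delay.h>

/* Write one byte to the given EEPROM address (hypothetical helper). */
void SPI_WriteByte(uint16_t address, uint8_t data)
{
    DDR_SPI &= ~(1 << DD_SS);     /* select EEPROM                    */
    spi_transfer(WREN);           /* set the write-enable latch       */
    DDR_SPI |= (1 << DD_SS);      /* deselect                         */
    DDR_SPI &= ~(1 << DD_SS);     /* select again for the write       */
    spi_transfer(WRITE);          /* WRITE command                    */
    spi_transfer(address >> 8);   /* address MSB first                */
    spi_transfer(address & 0xFF); /* address LSB                      */
    spi_transfer(data);           /* the data byte                    */
    DDR_SPI |= (1 << DD_SS);      /* deselect: internal write begins  */
    _delay_ms(5);                 /* wait out the write cycle         */
}

/* Store a string one byte per address, starting at 'start'. */
void SPI_WriteString(uint16_t start, const char *s)
{
    uint16_t addr = start;
    while (*s)
        SPI_WriteByte(addr++, (uint8_t)*s++);
}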
Here is my circular buffer code, based on this: http://www.rn-wissen.de/index.php/UART_mit_avr-gcc
#include <avr/io.h>
#include <fifo.h>

#define FIFOBUF_SIZE 128
uint8_t fifobuf[FIFOBUF_SIZE];
fifo_t fifo;

ISR (USART_RXC_vect)
{
    _inline_fifo_put(&fifo, UDR);
}

void serial_init(void)
{
    cli();
    UBRRH = 0x00;
    UBRRL = 95;
    UCSRB = (1 << RXCIE) | (1 << RXEN) | (1 << TXEN);
    UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0);
    sei();
}

void fifo_init (fifo_t *f, uint8_t * buffer, const uint8_t size)
{
    f->count = 0;
    f->pread = f->pwrite = buffer;
    f->read2end = f->write2end = f->size = size;
}

static inline int Usart_Transmit (const uint8_t c)
{
    PORTD = 0b00000100;   //RTS Enable
    while ((UCSRA & (1 << UDRE)) == 0) {};
    UDR = c;
    PORTD = 0b00000000;   //RTS Disable
    return 1;
}

int main(void)
{
    unsigned long data;
    unsigned long address;
    fifo_init(&fifo, fifobuf, FIFOBUF_SIZE);
    serial_init();
    while (1)
    {
        SPI_MasterInit();
        resetEEPROM();
        SPI_MasterTransmit(Usart_Receive());
        _delay_ms(100);
        if (fifo.count > 0) //; fifo.count >8 ; fifo.count
        {
            Usart_Transmit(_inline_fifo_get(&fifo));
        }
        data = SPI_MasterReceive(address);   //read data from the memory
        _delay_ms(100);
        Usart_Transmit(data);
    }
    return 0;
}
It outputs all of the letters, but not in the right order. For example, I get "bfabeaabbfcabf" when I only typed "abcdef".
Also, can you show me how to set the address in the SPI EEPROM, e.g. a link or an example of handling SPI EEPROM addresses? I have been searching the internet for about 2 months and there are only a few examples of how to handle SPI EEPROM addresses; mostly I have found material about the ATmega's internal EEPROM, and none of it has given me a good result. Thanks in advance for your time. :)

Read "varint" from linux sockets

I need to read VarInts from Linux sockets in C/C++. Any library, idea, or something?
I tried reading and casting char to bool[8] to read a VarInt, without success...
Also, this is for compatibility with the new Minecraft 1.7.2 communication protocol, so documentation of the protocol may also help.
Let me explain my project: I'm writing Minecraft server software to run on my VPS (because Java is too slow...) and I got stuck with the protocol. One thread waits for connections, and when it gets a new connection it creates a new Client object and starts the Client thread, which starts communicating with the client.
I think there is no need to show code. In case I'm wrong, tell me and I'll edit with some code.
First off, note that varints are sent as actual bytes, not strings of the characters 1 and 0.
For an unsigned varint, I believe the following will decode it for you, assuming you've got the varint data in a buffer pointed to by data. This example function returns the number of bytes decoded in the reference argument int decoded_bytes.
uint64_t decode_unsigned_varint( const uint8_t *const data, int &decoded_bytes )
{
    int i = 0;
    uint64_t decoded_value = 0;
    int shift_amount = 0;
    do
    {
        decoded_value |= (uint64_t)(data[i] & 0x7F) << shift_amount;
        shift_amount += 7;
    } while ( (data[i++] & 0x80) != 0 );
    decoded_bytes = i;
    return decoded_value;
}
To decode a signed varint, you can use this second function that calls the first:
int64_t decode_signed_varint( const uint8_t *const data, int &decoded_bytes )
{
    uint64_t unsigned_value = decode_unsigned_varint(data, decoded_bytes);
    return (int64_t)( unsigned_value & 1 ? ~(unsigned_value >> 1)
                                         : (unsigned_value >> 1) );
}
I believe both of these functions are correct. I did some basic testing with the code below to verify a couple datapoints from the Google page. The output is correct.
#include <stdint.h>
#include <iostream>

uint64_t decode_unsigned_varint( const uint8_t *const data, int &decoded_bytes )
{
    int i = 0;
    uint64_t decoded_value = 0;
    int shift_amount = 0;
    do
    {
        decoded_value |= (uint64_t)(data[i] & 0x7F) << shift_amount;
        shift_amount += 7;
    } while ( (data[i++] & 0x80) != 0 );
    decoded_bytes = i;
    return decoded_value;
}

int64_t decode_signed_varint( const uint8_t *const data, int &decoded_bytes )
{
    uint64_t unsigned_value = decode_unsigned_varint(data, decoded_bytes);
    return (int64_t)( unsigned_value & 1 ? ~(unsigned_value >> 1)
                                         : (unsigned_value >> 1) );
}

uint8_t ex_p300[] = { 0xAC, 0x02 };
uint8_t ex_n1 [] = { 0x01 };

using namespace std;

int main()
{
    int decoded_bytes_p300;
    uint64_t p300;
    p300 = decode_unsigned_varint( ex_p300, decoded_bytes_p300 );

    int decoded_bytes_n1;
    int64_t n1;
    n1 = decode_signed_varint( ex_n1, decoded_bytes_n1 );

    cout << "p300 = " << p300
         << " decoded_bytes_p300 = " << decoded_bytes_p300 << endl;
    cout << "n1 = " << n1
         << " decoded_bytes_n1 = " << decoded_bytes_n1 << endl;
    return 0;
}
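For reference, running this test program should print:
p300 = 300 decoded_bytes_p300 = 2
n1 = -1 decoded_bytes_n1 = 1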
To encode varints, you could use the following functions. Note that the output buffer (uint8_t *const buffer) should have room for at least 10 bytes, as the largest varint is 10 bytes long.
#include <stdint.h>
// Encode an unsigned 64-bit varint. Returns number of encoded bytes.
// 'buffer' must have room for up to 10 bytes.
int encode_unsigned_varint(uint8_t *const buffer, uint64_t value)
{
    int encoded = 0;
    do
    {
        uint8_t next_byte = value & 0x7F;
        value >>= 7;
        if (value)
            next_byte |= 0x80;
        buffer[encoded++] = next_byte;
    } while (value);
    return encoded;
}

// Encode a signed 64-bit varint. Works by first zig-zag transforming
// signed value into an unsigned value, and then reusing the unsigned
// encoder. 'buffer' must have room for up to 10 bytes.
int encode_signed_varint(uint8_t *const buffer, int64_t value)
{
    uint64_t uvalue;
    uvalue = uint64_t( value < 0 ? ~(value << 1) : (value << 1) );
    return encode_unsigned_varint( buffer, uvalue );
}
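A tiny usage sketch for the encoder (the buf and n names are mine); encoding 300 should produce the same 0xAC 0x02 bytes used in the decoder test above:
uint8_t buf[10];                           // worst case for a 64-bit varint
int n = encode_unsigned_varint(buf, 300);  // buf[0] == 0xAC, buf[1] == 0x02, n == 2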

Passing int via winsocks send

How is it possible to send an int (without using third-party libraries) via the Windows Sockets send() function? It requires a (const char *) as its parameter.
My attempt was to send the int like this:
unsigned char * serialize_int(unsigned char *buffer, int value)
{
    /* Write big-endian int value into buffer; assumes 32-bit int and 8-bit char. */
    buffer[0] = value >> 24;
    buffer[1] = value >> 16;
    buffer[2] = value >> 8;
    buffer[3] = value;
    return buffer + 4;
}
but send() wants a (const char *). I'm stuck...
Ah, this is easily fixed. The compiler wants a char* (ignore const for now), but you're passing an unsigned char*, and the only real difference is how the compiler interprets each byte when manipulated. Therefore you can easily cast the pointer from unsigned char* to char*. This way:
(char*)serialize_int(...)
Alternatively, convert the int to network byte order and send its bytes directly:
const int networkOrder = htonl(value);
const int result = send(socket, reinterpret_cast<const char *>(&networkOrder), sizeof(networkOrder), 0);
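Putting the two approaches together, a minimal sketch of sending the serialized buffer (the sock and value names are assumed, and error handling is reduced to a single check):
unsigned char buffer[4];
serialize_int(buffer, value);   // fill buffer with the big-endian bytes
int sent = send(sock, reinterpret_cast<const char *>(buffer), sizeof(buffer), 0);
if (sent != static_cast<int>(sizeof(buffer))) {
    // handle SOCKET_ERROR or a short send here
}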