ESP8266 getting garbage data from char array - c++

I'm honestly at a loss here. I'm trying to store an SSID and password, which the user sends through a POST request, in the flash EEPROM section. To do that I convert the data from the POST request to a char array and write it byte by byte to EEPROM. The SSID runs without any problems, but the password always ends up as junk data before it even gets to EEPROM.
Here is the bit of code in question:
// Receive data from the HTTP server
void changeConfig(String parameter, String value){
int memoffset = 0;
if(parameter == "ssid")
memoffset = 0;
else if(parameter == "pass")
memoffset = 32;
else
return;
#ifdef DEBUG
Serial.println("Updating Data");
Serial.print("Param: ");
Serial.println(parameter);
Serial.print("Value: ");
Serial.println(value);
#endif
EEPROM.begin(64);
char _data[sizeof(value)];
value.toCharArray(_data, sizeof(value));
for(int i = memoffset; i < memoffset + sizeof(value); i++)
{
#ifdef DEBUG
Serial.print("addr ");
Serial.print(i);
Serial.print(" data ");
Serial.println(_data[i]);
#endif
EEPROM.write(i,_data[i]);
}
EEPROM.end();
}
And the Serial monitor output:
Post parameter: ssid, Value: NetworkName
Updating Data
Param: ssid
Value: NetworkName
addr 0 data N
addr 1 data e
addr 2 data t
addr 3 data w
addr 4 data o
addr 5 data r
addr 6 data k
addr 7 data N
addr 8 data a
addr 9 data m
addr 10 data e
addr 11 data ␀
Post parameter: pass, Value: Networkpass
Updating Data
Param: pass
Value: Networkpass
addr 32 data |
addr 33 data (
addr 34 data �
addr 35 data ?
addr 36 data L
addr 37 data ␛
addr 38 data �
addr 39 data ?
addr 40 data ␁
addr 41 data ␀
addr 42 data ␀
addr 43 data ␀
As you can see, when the name of the POST parameter is ssid, it works alright. With pass, on the other hand, the char array is just filled with gibberish. Any insight would be helpful. I'm using PlatformIO in the Arduino environment, on a generic ESP-01 with 1M of flash.
Thanks in advance.

You have two problems with your code.
First, you are using sizeof incorrectly. sizeof returns the size of the String object itself, not the length of the text it contains. sizeof is not the right tool for that; instead you should use the API String offers for reading the size of the contained string, namely its length() method.
The next problem is your usage of offsets. The following code snippet is all wrong:
char _data[sizeof(value)];
value.toCharArray(_data, sizeof(value));
for(int i = memoffset; i < memoffset + sizeof(value); i++)
{
...
EEPROM.write(i,_data[i]);
Your i starts at an offset of 32, so you are trying to access the element with index 32 in your _data array. But _data stores characters starting from index 0, and since the length of the array is actually 12 (sizeof of a String object is 12 on this platform), accessing the element with index 32 goes beyond its bounds, where you naturally find garbage (in C++ parlance, this is called undefined behavior).
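In code terms, the fix described above might look like the following sketch (untested, using the Arduino String::length() API): index _data from zero, and apply the offset only on the EEPROM side.

```cpp
// Sketch of a corrected copy loop (untested):
// _data is indexed from 0; memoffset is applied only to the EEPROM address.
unsigned int len = value.length() + 1;   // +1 for the '\0' terminator
char _data[len];
value.toCharArray(_data, len);
for (unsigned int i = 0; i < len; i++) {
    EEPROM.write(memoffset + i, _data[i]);
}
```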
Last but not least, C++ is an extremely complicated language which can't be learned by trial and error. Instead, you need to study it methodically, preferably using one of the good C++ books. A list of those can be found here: The Definitive C++ Book Guide and List

You're using sizeof() incorrectly.
sizeof() tells you the size of the object, at compile time.
Try this experiment - run this code:
#include <Arduino.h>
void setup() {
String x("");
String y("abc");
String z("abcdef");
Serial.begin(115200);
delay(1000);
Serial.println(sizeof(x));
Serial.println(sizeof(y));
Serial.println(sizeof(z));
}
void loop() {
}
On my ESP8266 this outputs:
12
12
12
That's because it takes 12 bytes in this development environment to represent a String object (it might be different on a different CPU and compiler). The String class allocates its storage dynamically, so sizeof can tell you nothing about how long the contained text is; it only gives the compile-time size of the object.
For the String class, you should use its length() method. Your lines:
char _data[sizeof(value)];
value.toCharArray(_data, sizeof(value));
for(int i = memoffset; i < memoffset + sizeof(value); i++)
should be written as
char _data[value.length()];
value.toCharArray(_data, value.length());
for(int i = memoffset; i < memoffset + value.length(); i++)
For more information see the documentation on the String class.
You'll likely still have issues with string terminators. C and C++ terminate char array strings with the null character '\0', adding an extra byte to the length of the strings. So your code should more likely be:
void changeConfig(String parameter, String value){
int memoffset = 0;
if(parameter == "ssid")
memoffset = 0;
else if(parameter == "pass")
memoffset = 33;
else
return;
#ifdef DEBUG
Serial.println("Updating Data");
Serial.print("Param: ");
Serial.println(parameter);
Serial.print("Value: ");
Serial.println(value);
#endif
EEPROM.begin(66);
char _data[value.length() + 1];
value.toCharArray(_data, value.length() + 1);
// index _data from 0; apply memoffset only to the EEPROM address
for(unsigned int i = 0; i < value.length() + 1; i++)
{
#ifdef DEBUG
Serial.print("addr ");
Serial.print(memoffset + i);
Serial.print(" data ");
Serial.println(_data[i]);
#endif
EEPROM.write(memoffset + i, _data[i]);
}
EEPROM.end();
}
to allow the string terminators to work correctly for 32-character SSIDs and passwords. But the fundamental issue breaking your code is the incorrect use of sizeof.

Related

8b10b encoder with byte stream output (bits carry): faster bitwise algorithm?

I have written an 8b10b encoder that generates a stream of bytes intended to be sent to a serial transmitter, which sends the bytes as-is, LSb first.
What I'm doing here is basically lay down groups of 10 bits (encoded from the input stream of bytes) on groups of 8, so a varying number of bits get carried over from one output byte to the next - kind of like in music/rhythm.
The program has been successfully tested, but it is about 4-5x too slow for my application. I think the cost comes from the fact that every bit has to be looked up in an array. My gut tells me this could be made faster with some sort of rolling mask, but I can't yet see how to do that, even by swapping the 3D array of booleans for a 2D array of integers.
Any pointer or other idea?
Here is the code. Please ignore most of the macros and some of the code related to deciding which byte is to be written as this is application-specific.
Header:
#ifndef TX_BYTESTREAM_GEN_H_INCLUDED
#define TX_BYTESTREAM_GEN_H_INCLUDED
#include <stdint.h> //for standard portable types such as uint16_t
#define MAX_USB_TRANSFER_SIZE 1016 //Bytes, size of the max payload in a USB transaction. Determined using FT4222_GetMaxTransferSize()
#define MAX_USB_PACKET_SIZE 62 //Bytes, max size of the payload of a single USB packet
#define MANDATORY_TX_PACKET_BLOCK 5 //Bytes, constant - equal to the minimum number of bytes of TX packet necessary to exactly transfer blocks of 10 bits of encoded data (LCM of 8 and 10)
#define SYNC_CHARS_MAX_INTERVAL 172 //Target number of payload bytes between sync chars. Max is 188 before desynchronisation
#define ROUND_UP(N, S) ((((N) + (S) - 1) / (S)) * (S)) //Macro to round up the integer N to the largest multiple of the integer S
#define ROUND_DOWN(N,S) ((N / S) * S) //Same rounding down
#define N_SYNC_CHAR_PAIRS_IN_PCKT(pcktSz) (ROUND_UP((pcktSz*1000/(SYNC_CHARS_MAX_INTERVAL+2)),1000)/1000) //Number of sync (K28.5) character/byte pairs in a given packet
#define TX_PAYLOAD_SIZE(pcktSz) ((pcktSz*4/5)-2*N_SYNC_CHAR_PAIRS_IN_PCKT(pcktSz)) //Size in bytes of the payload data before encoding in a single TX packet
#define MAX_TX_PACKET_SIZE (ROUND_DOWN((MAX_USB_TRANSFER_SIZE-MAX_USB_PACKET_SIZE),(MAX_USB_PACKET_SIZE*MANDATORY_TX_PACKET_BLOCK))) //Maximum size in bytes of a TX packet
#define DEFAULT_TX_PACKET_SIZE (MAX_TX_PACKET_SIZE-MAX_USB_PACKET_SIZE*MANDATORY_TX_PACKET_BLOCK) //Default size in bytes of a TX packet with some margin
#define MAX_TX_PAYLOAD_SIZE (TX_PAYLOAD_SIZE(MAX_TX_PACKET_SIZE)) //Maximum size in bytes of the payload in a TX packet
#define DEFAULT_TX_PAYLOAD_SIZE (TX_PAYLOAD_SIZE(DEFAULT_TX_PACKET_SIZE))//Default size in bytes of the payload in a TX packet with some margin
//See string descriptors below for definitions. Error codes are individual bits so can be combined.
enum ErrCode
{
NO_ERR = 0,
INVALID_DIN_SIZE = 1,
INVALID_DOUT_SIZE = 2,
NULL_DIN_PTR = 4,
NULL_DOUT_PTR = 8
};
char const * const ERR_CODE_DESC[] = {
"No error",
"Invalid size of input data",
"Invalid size of output buffer",
"Input data pointer is NULL",
"Output buffer pointer is NULL"
};
/** #brief Generates the bytestream to the transmitter by encoding the incoming data using 8b10b encoding
and inserting K28.5 synchronisation characters to maintain the synchronisation with the demodulator (LVDS passthrough mode)
#arg din is a pointer to an allocated array of bytes which contains the data to encode
#arg dinSize is the size of din in bytes. This size must be equal to TX_PAYLOAD_SIZE(doutSize)
#arg dout is a pointer to an allocated array of bytes which is intended to contain the output bytestream to the transmitter
#arg doutSize is the size of dout in bytes. This size must meet the conditions at the top of this function's implementation. Use DEFAULT_TX_PACKET_SIZE if in doubt.
#return error code (c.f. ErrCode) **/
int TX_gen_bytestream(uint8_t *din, uint16_t dinSize, uint8_t *dout, uint16_t doutSize);
#endif // TX_BYTESTREAM_GEN_H_INCLUDED
Source file:
#include "TX_bytestream_gen.h"
#include <cstddef> //NULL
#define N_BYTE_VALUES (256+1) //256 possible data values + 1 special character (only accessible to this module)
#define N_ENCODED_BITS 10 //Number of bits corresponding to the 8b10b encoding of a byte
//Map the current running disparity, the desired value to encode to the array of encoded bits for 8b10b encoding.
//The Last value is the K28.5 sync character, only accessible to this module
//Notation = MSb to LSb
bool const encodedBits[2][N_BYTE_VALUES][N_ENCODED_BITS] =
{
//Long table (see appendix)
};
//New value of the running disparity after encoding with the specified previous running disparity and requested byte value (c.f. above)
bool const encodingDisparity[2][N_BYTE_VALUES] =
{
//Long table (see appendix)
};
int TX_gen_bytestream(uint8_t *din, uint16_t dinSize, uint8_t *dout, uint16_t doutSize)
{
static bool RDp = false; //Running disparity is initially negative
int ret = 0;
//If the output buffer size is not a multiple of the mandatory payload block or of the USB packet size, or if it cannot be held in a single USB transaction
//return an invalid output buffer size error
if(doutSize == 0 || (doutSize % MANDATORY_TX_PACKET_BLOCK) || (doutSize % MAX_USB_PACKET_SIZE) || (doutSize > MAX_TX_PACKET_SIZE)) //Temp
ret |= INVALID_DOUT_SIZE;
//If the input data size is not consistent with the output buffer size, return the appropriate error code
if(dinSize == 0 || dinSize != TX_PAYLOAD_SIZE(doutSize))
ret |= INVALID_DIN_SIZE;
if(din == NULL)
ret |= NULL_DIN_PTR;
if(dout == NULL)
ret |= NULL_DOUT_PTR;
//If everything checks out, carry on
if(ret == NO_ERR)
{
uint16_t iByteIn = 0; //Index of the byte of input data currently being processed
uint16_t iByteOut = 0; //Index of the output byte currently being written to
uint8_t iBitOut = 0; //Starts with LSb
int16_t nBytesUntilSync = 0; //Countdown of bytes until a sync marker needs to be sent. Cyclic.
//For all output bytes to generate
while(iByteOut < doutSize)
{
bool sync = false; //Initially this byte is not considered a sync byte (in which case the next byte of data will be processed)
//If the maximum interval between sync characters has been reached, mark the two next bytes as sync bytes and reset the counter
if(nBytesUntilSync <= 0)
{
sync = true;
if(nBytesUntilSync == -1) //After the second SYNC is written, the counter is reset
{
nBytesUntilSync = SYNC_CHARS_MAX_INTERVAL;
}
}
//Append bit by bit the encoded data of the byte to write to the output bitstream (carried over from byte to byte) - LSb first
//The byte to write is either the last byte of the encodedBits map (the sync character K28.5) if sync is set, or the next byte of
//input data if it isn't
uint16_t const byteToWrite = (sync?(N_BYTE_VALUES-1):din[iByteIn]);
for(int8_t iEncodedBit = N_ENCODED_BITS-1 ; iEncodedBit >= 0 ; --iEncodedBit, iBitOut++)
{
//If the current output byte is complete, reset the bit index and select the next one
if(iBitOut >= 8)
{
iByteOut++;
iBitOut = 0;
}
//Effectively sets the iBitOut'th bit of the iByteOut'th byte out to the encoded value of the byte to write
bool bitToWrite = encodedBits[RDp][byteToWrite][iEncodedBit]; //Temp
dout[iByteOut] ^= (-bitToWrite ^ dout[iByteOut]) & (1 << iBitOut);
}
//The running disparity is also updated as per the standard (to achieve DC balance)
RDp = encodingDisparity[RDp][byteToWrite]; //Update the running disparity
//If sync was not set, this means a byte of the input data has been processed, in which case take the next one in
//Also decrement the synchronisation counter
if(!sync) {
iByteIn++;
}
//In any case, decrease the synchronisation counter. Even sync characters decrease it (c.f. top of while loop)
nBytesUntilSync--;
}
}
return ret;
}
Testbench:
#include <iostream>
#include "TX_bytestream_gen.h"
#define PACKET_DURATION 0.000992 //In seconds, time of continuous data stream corresponding to one packet (5MHz output, default packet size)
#define TIME_TO_SIMULATE 10 //In seconds
#define PACKET_SIZE DEFAULT_TX_PACKET_SIZE
#define PAYLOAD_SIZE DEFAULT_TX_PAYLOAD_SIZE
#define N_ITERATIONS (TIME_TO_SIMULATE/PACKET_DURATION)
#include <chrono>
using namespace std;
//Testbench: measure the time taken to simulate TIME_TO_SIMULATE seconds of continuous encoding
int main()
{
uint8_t toEncode[PAYLOAD_SIZE] = {100}; //Dummy data, doesn't matter
uint8_t out[PACKET_SIZE] = {0};
std::chrono::time_point<std::chrono::system_clock> start, end;
start = std::chrono::system_clock::now();
for(unsigned int i = 0 ; i < N_ITERATIONS ; i++)
{
TX_gen_bytestream(toEncode, PAYLOAD_SIZE, out, PACKET_SIZE);
}
end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
std::cout << "Task execution time: " << elapsed_seconds.count()/TIME_TO_SIMULATE*100 << "% (for " << TIME_TO_SIMULATE << "s simulated)\n";
return 0;
}
Appendix: lookup tables. I don't have enough characters to paste it here, but it looks like so:
bool const encodedBits[2][N_BYTE_VALUES][N_ENCODED_BITS] =
{
//Running disparity = RD-
{
{1,0,0,1,1,1,0,1,0,0},
//...
},
//Running disparity = RD+
{
{0,1,1,0,0,0,1,0,1,1},
//...
}
};
bool const encodingDisparity[2][N_BYTE_VALUES] =
{
//Previous running disparity was RD-
{
0,
//...
},
//Previous running disparity was RD+
{
1,
//...
}
};
This will be a lot faster if you do everything a byte at a time instead of a bit at a time.
First change the way you store your lookup tables. You should have something like:
// conversion from (RD, byte) to (RD, 10-bit code)
// in each word, the lower 10 bits are the code,
// and bit 10 (the 11th bit) is the new RD
// The first 256 values are for RD -1, the next
// for RD 1
static const uint16_t BYTE_TO_CODE[512] = {
...
};
Then you need to change your encoding loop to write a byte at a time. You can use a uint16_t to hold the leftover bits carried from one output byte to the next.
Something like this (I didn't figure out your sync byte logic, but presumably you can put that in the input or output byte loop):
// returns next isRD1
bool TX_gen_bytestream(uint8_t *dest, const uint8_t *src, size_t src_len, bool isRD1)
{
// bits generated, but not yet written, LSB first
uint16_t bits = 0;
// number of bits in bits
unsigned numbits = 0;
// current RD, either 0 or 256
uint16_t rd = isRD1 ? 256 : 0;
for (const uint8_t *end = src + src_len; src < end; ++src) {
// lookup code and next rd
uint16_t code = BYTE_TO_CODE[rd + *src];
// new rd from code bit 10
rd = (code>>2) & 256;
// store bits
bits |= (code & (uint16_t)0x03FF) << numbits;
numbits+=10;
// write out any complete bytes
while(numbits >= 8) {
*dest++ = (uint8_t)bits;
bits >>=8;
numbits-=8;
}
}
// If src_len isn't divisible by 4, then we have some extra bits
if (numbits) {
*dest = (uint8_t)bits;
}
return !!rd;
}

stm32f746 flash read after write returns null

I am saving settings to the flash memory and reading them back again. Two of the values always come back empty. However, the data IS written to flash, since after a reset the values read are the new saved values and not empty.
I started experiencing this problem after I did some code-refactoring after taking the code over from another company.
Saving and reading the settings back works when you do the following (the old, inefficient way):
save setting 0 - read setting 0
save setting 1 - read setting 1
...
save setting 13 - read setting 13
This is EXTREMELY inefficient and slow, since the same page with all the settings is read from flash, the whole block erased, the new setting put into the read buffer, and then the whole block (with only 1 changed setting) written back to flash. And this happens for all 14 settings!! But it works ...
unsigned char Save_One_Setting(unsigned char Setting_Number, unsigned char* value, unsigned char length)
{
/* Program the user Flash area word by word
(area defined by FLASH_USER_START_ADDR and FLASH_USER_END_ADDR) ***********/
unsigned int a;
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
buf[a++] = *(__IO uint32_t *)Address;
Address = Address + 4;
}
memset(&buf[Setting_Number * 60], 0, 60); // Clear setting value
memcpy(&buf[Setting_Number * 60], &value[0], length); // Set setting value
Erase_User_Flash_Memory();
HAL_FLASH_Unlock();
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, Address, buf[a++]) == HAL_OK)
{
Address = Address + 4;
}
else
{
/* Error occurred while writing data in Flash memory.
User can add here some code to deal with this error */
while (1)
{
/* Make LED1 blink (100ms on, 2s off) to indicate error in Write operation */
BSP_LED_On(LED1);
HAL_Delay(100);
BSP_LED_Off(LED1);
HAL_Delay(2000);
}
}
}
/* Lock the Flash to disable the flash control register access (recommended
to protect the FLASH memory against possible unwanted operation) *********/
HAL_FLASH_Lock();
}
I changed this so that, after reading the settings from flash into a buffer, I update all the changed settings in the buffer, then erase the flash block and write the buffer back to flash. Downside: my first and fourth values always come back as NULL after saving this buffer to flash.
However, after a system reset the correct values are read from flash.
unsigned char Save_Settings(Save_Settings_struct* newSettings)
{
/* Program the user Flash area word by word
(area defined by FLASH_USER_START_ADDR and FLASH_USER_END_ADDR) ***********/
unsigned int a;
unsigned char readBack[60];
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
buf[a++] = *(__IO uint32_t *)Address;
Address = Address + 4;
}
a = 0;
while (a < S_MAXSETTING)
{
if (newSettings[a].settingNumber < S_MAXSETTING)
{
memset(&buf[a * 60], 0, 60); // Clear setting value
memcpy(&buf[a * 60], newSettings[a].settingValue, newSettings[a].settingLength); // Set setting value
}
++a;
}
Erase_User_Flash_Memory();
HAL_FLASH_Unlock();
Address = FLASH_USER_START_ADDR;
a = 0;
while (Address < FLASH_USER_END_ADDR)
{
if (HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, Address, buf[a++]) == HAL_OK)
{
Address = Address + 4;
}
else
{
/* Error occurred while writing data in Flash memory.
User can add here some code to deal with this error */
while (1)
{
/* Make LED1 blink (100ms on, 2s off) to indicate error in Write operation */
BSP_LED_On(LED1);
HAL_Delay(100);
BSP_LED_Off(LED1);
HAL_Delay(2000);
}
}
}
/* Lock the Flash to disable the flash control register access (recommended
to protect the FLASH memory against possible unwanted operation) *********/
HAL_FLASH_Lock();
}
I started playing around with cleaning and invalidating the data cache. At least the two values are not NULL anymore; however, they are still the old values. All other values are the new, saved values. Do a reset, and all values are correct.
Has anybody ever had a similar problem? Or maybe an idea of what I can try to get rid of it?

Arduino accessing array produces unexpected results

I am trying to read the length of some strings in flash memory on my Arduino UNO. The array string_table is giving me problems: if I index it with something the compiler optimises to a constant, I get the expected value. If I access it with something that is a variable at run time, I get a different answer each time.
I don't think this is specific to Arduino, as I don't seem to be calling any Arduino-specific functionality.
Code:
#include <avr/pgmspace.h>
// Entries stored in flash memory
const char entry_0[] PROGMEM = "12345";
const char entry_1[] PROGMEM = "abc";
const char entry_2[] PROGMEM = "octagons";
const char entry_3[] PROGMEM = "fiver";
// Pointers to flash memory
const char* const string_table[] PROGMEM = {entry_0, entry_1, entry_2, entry_3};
void setup() {
Serial.begin(115200);
randomSeed(analogRead(0));
int r = random(4);
Serial.print("random key (r) : ");
Serial.println(r);
Serial.print("Wrong size for key = ");
Serial.print(r);
Serial.print(" : ");
Serial.println(strlen_P(string_table[r]));
int i = 1;
Serial.print("Correct size for key = ");
Serial.print(i);
Serial.print(" : ");
Serial.println(strlen_P(string_table[i]));
Serial.println("=====");
Serial.println("Expected Sizes: ");
Serial.print("0 is: ");
Serial.println(strlen_P(string_table[0]));
Serial.print("1 is: ");
Serial.println(strlen_P(string_table[1]));
Serial.print("2 is: ");
Serial.println(strlen_P(string_table[2]));
Serial.print("3 is: ");
Serial.println(strlen_P(string_table[3]));
Serial.println("++++++");
Serial.println("Wrong Sizes: ");
for (i = 0; i < 4; i++) {
Serial.print(i);
Serial.print(" is: ");
Serial.println(strlen_P(string_table[i]));
}
Serial.println("------");
delay(500);
}
void loop() {
// put your main code here, to run repeatedly:
}
Results:
random key (r) : 1
Wrong size for key = 1 : 16203
Correct size for key = 1 : 3
=====
Expected Sizes:
0 is: 5
1 is: 3
2 is: 8
3 is: 5
++++++
Wrong Sizes:
0 is: 0
1 is: 11083
2 is: 3
3 is: 3
------
From avr-libc - source code:
strlen_P() is implemented as an inline function in the avr/pgmspace.h
header file, which will check if the length of the string is a
constant and known at compile time. If it is not known at compile
time, the macro will issue a call to __strlen_P() which will then
calculate the length of the string as normal.
That could explain it (although the call to __strlen_P() should have fixed that).
Maybe use the following instead:
strlen_P((char*)pgm_read_word(&(string_table[i])));
Or maybe try using strlen_PF().
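The underlying problem is that string_table[i] with a run-time index reads the pointer from RAM, while the pointers actually live in flash; pgm_read_word fetches the pointer from flash first. A sketch of the corrected loop (untested, AVR-specific):

```cpp
for (i = 0; i < 4; i++) {
  // Fetch the 16-bit pointer itself from PROGMEM before using it;
  // indexing string_table directly would read unrelated RAM instead.
  const char *p = (const char *)pgm_read_word(&string_table[i]);
  Serial.print(i);
  Serial.print(" is: ");
  Serial.println(strlen_P(p));
}
```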

How to copy Buffer bytes block in Poco C++?

Hi, I am trying to write a TCP connection in Poco. The client sends a packet with these fields:
packetSize : int
date : int
ID : int
so the first 4 bytes contain the packet size. On the receive side I have this code:
int packetSize = 0;
char *bufferHeader = new char[4];
// receive 4 bytes that contains packetsize here
socket().receiveBytes(bufferHeader, sizeof(bufferHeader), MSG_WAITALL);
Poco::MemoryInputStream *inStreamHeader = new Poco::MemoryInputStream(bufferHeader, sizeof(bufferHeader));
Poco::BinaryReader *BinaryReaderHeader = new Poco::BinaryReader(*inStreamHeader);
(*BinaryReaderHeader) >> packetSize; // now we have the full packet size
Now I am trying to store all remaining incoming bytes into one array for future binary reading :
int ID = 0;
int date = 0;
int availableBytes = 0;
int readedBytes = 0;
char *body = new char[packetSize - 4];
do
{
char *bytes = new char[packetSize - 4];
availableBytes = socket().receiveBytes(bytes, sizeof(bytes), MSG_WAITALL);
if (availableBytes == 0)
break;
memcpy(body + readedBytes, bytes, availableBytes);
readedBytes += availableBytes;
} while (availableBytes > 0);
Poco::MemoryInputStream *inStream = new Poco::MemoryInputStream(body, sizeof(body));
Poco::BinaryReader *BinaryReader = new Poco::BinaryReader(*inStream);
(*BinaryReader) >> date;
(*BinaryReader) >> ID;
cout << "date :" << date << endl;
cout << "ID :" << ID << endl;
The problem is that the byte block body is not storing the remaining bytes; it always contains only the first 4 bytes (date). So in the output the date is correct but the ID is not as expected. I tried streaming it without the block copy, manually receiving each field without a loop, and it was fine and had the expected data. But when I try to store the incoming bytes into one array and then pass that array to a MemoryInputStream to read it, only the first block is correct.
I really need to store all incoming bytes into one array and then read that whole array. How should I change my code?
Thanks a lot.
I see two errors in your code. Or, more precisely, one error you make twice.
You confuse sizeof of a char[] with sizeof of a char *; the first is the number of characters in the array, the second is the size of the pointer itself: typically 4 or 8 bytes, depending on the memory model.
So, when you write
availableBytes = socket().receiveBytes(bytes, sizeof(bytes), MSG_WAITALL);
you are asking for 4 (I suppose) bytes. This is not serious, as you continue to ask for more bytes until the message is finished.
The real problem is the following instruction
Poco::MemoryInputStream *inStream = new Poco::MemoryInputStream(body, sizeof(body));
where you transfer only sizeof(char *) bytes into inStream.
You should substitute sizeof(body) and sizeof(bytes) with packetSize - 4.
Edit: I've seen another error. In this instruction
char *bytes = new char[packetSize - 4];
you allocate packetSize - 4 chars. This memory is never deleted, and it is allocated anew on each pass of the do ... while() loop.
You can allocate bytes outside of the loop (together with body).
Edit 2016.03.17
Proposed solution (caution: not tested):
size_t toRead = packetSize - 4U;
size_t totRead = 0U;
size_t nowRead;
char * body = new char[toRead];
do
{
nowRead = socket().receiveBytes(body + totRead, toRead - totRead,
MSG_WAITALL);
if ( 0 == nowRead )
throw std::runtime_error("shutdown from receiveBytes()");
totRead += nowRead;
} while ( totRead < toRead );
Poco::MemoryInputStream *inStream = new Poco::MemoryInputStream(body,
toRead);
// Note: MemoryInputStream does not copy the buffer, so only
// delete[] body after you have finished reading from inStream.

C++ What's the proper way to receive data over tcp until I get as many as I want [duplicate]

I am trying to send a message over a socket in C++. I have read many related questions on Stack Overflow but still couldn't figure out how it works. Let's say I am sending the characters (M,a,r,t,i,n) to a localhost server. People suggest using 4 bytes as the length (i.e. 32 bits, so that it can handle a message up to 4 GB in length).
I did the same thing on my client side, but I still don't know how the server side can tell whether I want to receive only the first 3 bytes (M,a,r) or the last 3 bytes (t,i,n) of my data.
I am posting my code. Please help me mainly with the server side; I would be thankful if you can write a few lines relevant to the code.
Client side code
std::vector<char> userbuffer(20);
std::cout<<"\nclient:"<<std::endl;
char* p = userbuffer.data();
*p = 'M';
++p; *p = 'a';
++p; *p = 'r';
++p; *p = 't';
++p; *p = 'i';
++p; *p = 'n';
size_t length = strlen(userbuffer.data());
uint32_t nlength = htonl(length);
//line containg message length information
int header_info = send(socketFD, (char*)&nlength, 4, 0);
// Data bytes send to the server
int bytes_sent = send(socketFD, userbuffer.data(), length, 0);
if(bytes_sent == SOCKET_ERROR){ //some errror handling}
Server Side Code
char receivebuffer[MAX_DATA] = { '\0' };
int bytesReceivedFromClientMsg = 1;
int length_bytes = 0;
uint32_t length, nlength;
//code to check length if we have received whole data length
while(length_bytes < 4){
int read = recv(clientSocket, ((char*)&nlength)+length_bytes, (4-length_bytes), 0);
if (read == -1) { //error handling}
length_bytes += read;}
// Most painfull section to understand.
// I implemented this code from some ideas on internet
//but still cant find how its extracting length and what i am reading :(
while(bytesReceivedFromClientMsg > 0){
int msgheader = recv(clientSocket,(char*)&nlength,6, 0);
length = ntohl(nlength);//leng value here is in severel thousand size
char *receivebuffer = new char(length+1);
bytesReceivedFromClientMsg = recv(clientSocket, receivebuffer, msgheader, 0);
receivebuffer[length] = 0 ;
std::cout<<"msg header is :"<<msgheader<<std::endl;
std::cout<<"msg data is :"<<bytesReceivedFromClientMsg<<std::endl;
if(bytesReceivedFromClientMsg == SOCKET_ERROR){//some error handling}
You need a design for your network protocol. There are text-like protocols, such as SMTP. In a text-like protocol you read characters until you find a termination character, like the newline.
With a message-based protocol you have better chances of high performance. You define a header (your code uses one but never defines it). In the header you put information about the length, and probably the type, of the next message. Then you send the header in front of the message body. The body is "Martin" in your example.
The receiver has a state, "header received". While the header has not yet been completely received (or nothing has been received at all), it uses the size of the header as the chunk size and receives that many bytes into the header variable. Once the header is complete, the receiver sets the chunk size to the size given in the header and receives that many bytes into the payload buffer. When that completes, the "header received" state becomes false again.
int receive(socket sock, char * buffer, int chunk_size)
{
int offset = 0;
while (chunk_size > 0)
{
// add select() here when you have a non-blocking socket.
int n = recv(sock, buffer + offset, chunk_size, 0);
// TODO: error handling
offset += n;
chunk_size -= n;
}
// return amount of received bytes
return offset;
}
void do_receive(void)
{
struct {
int size;
// other message information
} header;
while (true)
{
receive(sock, (char *)&header, sizeof(header));
receive(sock, buffer, header.size);
process_message(buffer, header.size);
}
}
The code above is a sketch rather than a complete program, but it shows the idea.