I am currently working on a project that uses the network, and I have to send this struct:
struct Header
{
uint32_t magic;
uint32_t checksum;
uint32_t timestamp;
uint16_t commandId;
uint16_t dataSize;
};
struct Packet
{
struct Header header;
char data[128];
};
I'm trying to send the struct Packet from one socket to another using TCP. I've tried to send my struct like this:
send(socket, &my_struct, sizeof(my_struct), 0);
but it is not working, so I've tried to serialize my struct into a char*:
unsigned char *Serialization::serialize_uint32(unsigned char *buffer, uint32_t arg)
{
buffer[3] = (arg >> 24);
buffer[2] = (arg >> 16);
buffer[1] = (arg >> 8);
buffer[0] = (arg);
return (buffer + sizeof(uint32_t));
}
unsigned char *Serialization::serialize_uint16(unsigned char *buffer, uint16_t arg)
{
buffer[1] = (arg >> 8);
buffer[0] = (arg);
return (buffer + sizeof(uint16_t));
}
unsigned char *Serialization::deserialize_uint32(unsigned char *buffer, uint32_t *arg)
{
memcpy((char*)arg, buffer, sizeof(uint32_t));
return (buffer + sizeof(uint32_t));
}
unsigned char *Serialization::deserialize_uint16(unsigned char *buffer, uint16_t *arg)
{
memcpy((char*)arg, buffer, sizeof(uint16_t));
return (buffer + sizeof(uint16_t));
}
Even when the client simply sends a struct Header, the data is corrupt when I read it server side.
Why is the data corrupt?
Client sending loop
TcpSocket tcp;
Packet p;
std::stringstream ss;
int cpt = 0;
int ret = 0;
char *serialized;
tcp.connectSocket("127.0.0.1", 4242);
while (getchar())
{
ss.str("");
ss.clear();
ss << cpt++;
p.header.magic = 0;
p.header.checksum = 1;
p.header.timestamp = 2;
p.header.commandId = 3;
p.header.dataSize = ss.str().length();
memset(p.data, 0, 128);
memcpy(p.data, ss.str().c_str(), ss.str().length());
serialized = new char[sizeof(Header) + ss.str().length()];
bzero(serialized, sizeof(Header) + ss.str().length());
Serialization::serialize_packet(serialized, p);
hexDump("serialized", serialized+1, sizeof(Header) + ss.str().length());
ret = tcp.write(serialized+1, sizeof(Header) + ss.str().length());
}
Server recv loop (function called by select()):
buff = new char[bav];
socket->read(buff, bav);
hexdump("buff", buff, bav);
socket->read() :
int TcpSocket::read(char *buff, int len)
{
int ret;
ret = recv(this->_socket, buff, len, 0);
return (ret);
}
When I run these programs:
./server
[Server] new connexion :: [5]
recv returns : 17
buff serialized:
0000 00 00 00 00 14 00 00 00 1c 00 00 00 1a 00 00 00 ................
0010 1b
./client
serialized data:
0000 00 00 00 00 00 00 01 00 00 00 02 00 03 00 01 30 ...............0
0010 00
send returns : 17
So, this is wrong, and it will definitely cause errors.
buff = new char[bav];
socket->read(buff, bav);
hexdump("buff", buff, bav);
socket->read() :
int TcpSocket::read(char *buff, int len)
{
return recv(this->_socket, buff, len, 0);
}
The return value from recv() must not be ignored.
From man 2 recv:
RETURN VALUES
These calls return the number of bytes received, or -1 if an error
occurred.
For TCP sockets, the return value 0 means the peer has closed its half
side of the connection.
So, how many bytes did you receive? It's impossible to tell if you discard the result from recv(). Maybe recv() failed; you'd never find out if you don't check the return value. Maybe it only filled part of your buffer. You have to check the return code from recv(). This is the number one error people make when writing programs that use TCP.
You will need to alter your code to handle the following cases:
The recv() call may completely fill your buffer.
The recv() call may partially fill your buffer.
The recv() call may return 0, indicating that the sender has shut down the connection.
The recv() call may indicate EINTR because it was interrupted by a signal.
The recv() call may indicate ECONNRESET because the sender has closed the connection suddenly or has disappeared.
The recv() call may encounter some other error.
Remember: when using TCP, just because you send() 16 bytes doesn't mean that the other peer will recv() 16 bytes — it may be broken up into chunks. TCP is a stream protocol. Unlike UDP, adjacent chunks of data can be arbitrarily joined or split.
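In practice that means looping until you have exactly the number of bytes you expect. A minimal sketch, assuming a POSIX socket; readFull is a hypothetical helper name, not part of any API:
#include <sys/types.h>
#include <sys/socket.h>
#include <cerrno>

// Loop until exactly `len` bytes have arrived.
// Returns len on success, 0 if the peer closed the connection, -1 on error.
ssize_t readFull(int fd, char *buff, size_t len)
{
    size_t total = 0;
    while (total < len)
    {
        ssize_t ret = recv(fd, buff + total, len - total, 0);
        if (ret == -1)
        {
            if (errno == EINTR)     // interrupted by a signal: just retry
                continue;
            return -1;              // real error (e.g. ECONNRESET)
        }
        if (ret == 0)               // peer shut down its side
            return 0;
        total += ret;               // partial read: keep going
    }
    return (ssize_t)total;
}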
You need to mask only the low 8 bits each time:
buffer[3] = (arg >> 24) & 0xff;
buffer[2] = (arg >> 16) & 0xff;
buffer[1] = (arg >> 8) & 0xff;
buffer[0] = (arg) & 0xff;
Do the same when you deserialize
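Note that the serializers above write the low byte first (little-endian), while the memcpy-based deserializers just copy in host order, so the two only agree on little-endian machines. Rebuilding the value from individual bytes keeps both sides consistent; a sketch:
unsigned char *Serialization::deserialize_uint32(unsigned char *buffer, uint32_t *arg)
{
    // Mirror the serializer: byte 0 is the least significant byte.
    *arg = (uint32_t)buffer[0]
         | ((uint32_t)buffer[1] << 8)
         | ((uint32_t)buffer[2] << 16)
         | ((uint32_t)buffer[3] << 24);
    return (buffer + sizeof(uint32_t));
}
unsigned char *Serialization::deserialize_uint16(unsigned char *buffer, uint16_t *arg)
{
    *arg = (uint16_t)(buffer[0] | (buffer[1] << 8));
    return (buffer + sizeof(uint16_t));
}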
If I were you, I would not reinvent the wheel. There are a lot of well-documented and tested libraries/protocols out there for exactly this purpose. A short list off the top of my head:
boost serialization
Google Protocol Buffers
JSON
XML
CORBA
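As a taste, here is roughly what the Header from the question could look like with boost serialization. A minimal sketch, assuming Boost is installed; a text archive is used because Boost's binary archives are not portable across platforms:
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <sstream>
#include <string>
#include <cstdint>

struct Header
{
    uint32_t magic, checksum, timestamp;
    uint16_t commandId, dataSize;

    template <class Archive>
    void serialize(Archive &ar, const unsigned int /*version*/)
    {
        ar & magic & checksum & timestamp & commandId & dataSize;
    }
};

// Serialize to a string you can pass to send(); the peer does the
// reverse with a text_iarchive.
std::string toWire(const Header &h)
{
    std::ostringstream os;
    boost::archive::text_oarchive oa(os);
    oa << h;
    return os.str();
}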
Related
I am reading from a binary file.
char bigbuf[5000];
while (read(fd, bigbuf, 2) != 0) {
uint16_t packetSize = htons(*(uint16_t *)bigbuf);
read(fd, bigbuf + 2, packetSize - 2);
myParser.onUDPPacket(bigbuf, packetSize);
}
The packets written in the binary file are 40 bytes each, but inside the onUDPPacket function I receive a packet of 61 bytes on the first call, then a 60-byte packet on the second call. Now I have to write onUDPPacket so that on the first call it processes 40 bytes of the 61 received, and appends the remaining 21 bytes to the start of the next 60-byte packet. How do I do this append thing?
void Parser::onUDPPacket(const char *buf, size_t len)
{
}
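One common approach (not from the original post) is to give the parser a small carry-over buffer: consume whole 40-byte packets from the incoming chunk, and stash whatever is left for the next call. A sketch, with illustrative names:
#include <cstring>
#include <cstddef>

class Parser
{
    static const size_t kPacketSize = 40;  // logical packet size from the question
    char   carry_[40];                     // leftover bytes from the previous call
    size_t carryLen_;

public:
    Parser() : carryLen_(0) {}

    void onUDPPacket(const char *buf, size_t len)
    {
        while (len > 0)
        {
            // Top up the carry buffer until it holds one whole packet.
            size_t need = kPacketSize - carryLen_;
            size_t take = (len < need) ? len : need;
            memcpy(carry_ + carryLen_, buf, take);
            carryLen_ += take;
            buf += take;
            len -= take;

            if (carryLen_ == kPacketSize)
            {
                processPacket(carry_, kPacketSize);  // hypothetical handler
                carryLen_ = 0;
            }
        }
    }

    void processPacket(const char *pkt, size_t n);   // parse one 40-byte packet
};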
I am trying to perform AES decryption using the Crypto++ library. I have an encrypted file whose first 8 bytes are the file length, the subsequent 16 bytes are the initialization vector, and the remaining data is the data of interest. I also have a string representation of my key, which I hash using SHA256.
I get the following error when trying to perform AES decryption:
StreamTransformationFilter: invalid PKCS #7 block padding found
I am using the following C++ code:
std::string keyStr = "my_key";
std::string infilePath = "my/file/path";
CryptoPP::SHA256 hash;
unsigned char digest[CryptoPP::SHA256::DIGESTSIZE];
hash.CalculateDigest( digest, reinterpret_cast<const unsigned char*>(&keyStr[0]), keyStr.length() );
auto key = CryptoPP::SecByteBlock(digest, CryptoPP::SHA256::DIGESTSIZE);
std::ifstream fin(infilePath, std::ifstream::binary);
// First 8 bytes is the file size
std::vector<char> fileSizeVec(8);
fin.read(fileSizeVec.data(), fileSizeVec.size());
// Read the next 16 bytes to get the initialization vector
std::vector<char> ivBuffer(16);
fin.read(ivBuffer.data(), ivBuffer.size());
CryptoPP::SecByteBlock iv(reinterpret_cast<const unsigned char*>(ivBuffer.data()), ivBuffer.size());
// Create a CBC decryptor
CryptoPP::CBC_Mode<CryptoPP::AES>::Decryption decryption;
decryption.SetKeyWithIV(key, sizeof(key), iv);
CryptoPP::StreamTransformationFilter decryptor(decryption);
std::vector<char> buffer(CHUNK_SIZE, 0);
while(fin.read(buffer.data(), buffer.size())) {
CryptoPP::SecByteBlock tmp(reinterpret_cast<const unsigned char*>(buffer.data()), buffer.size());
decryptor.Put(tmp, tmp.size());
}
decryptor.MessageEnd();
size_t retSize = decryptor.MaxRetrievable();
std::vector<char> decryptedBuff;
decryptedBuff.resize(retSize);
decryptor.Get(reinterpret_cast<CryptoPP::byte*>(decryptedBuff.data()), decryptedBuff.size());
I am not sure what is giving me the error. I am working off the following Python code. When I run the Python code with the same input file, it successfully decrypts the file.
def decrypt_file(in_filename, out_filename=None):
key = hashlib.sha256(PASSWORD).digest()
"""loads and returns the embedded model"""
chunksize = 24 * 1024
if not out_filename:
out_filename = os.path.splitext(in_filename)[0]
with open(in_filename, 'rb') as infile:
# get the initial 8 bytes with file size
tmp = infile.read(8)
iv = infile.read(16)
decryptor = AES.new(key, AES.MODE_CBC, iv)
string = b''
# with open(out_filename, 'wb') as outfile:
while True:
chunk = infile.read(chunksize)
if len(chunk) == 0:
break
string += decryptor.decrypt(chunk)
return string
In addition to solving the error, I would also love some general C++ coding feedback on how I can improve.
Thanks in advance!
Edit:
It looks like I wasn't reading the input file all the way to the end (as the length of the last chunk is smaller than CHUNK_SIZE). The following code now reads the entire file, however I still get the same issue. I have also confirmed that the IV and key match exactly that produced from the python code.
// Get the length of the file in bytes
fin.seekg (0, fin.end);
size_t fileLen = fin.tellg();
fin.seekg (0, fin.beg);
std::vector<char> buffer(CHUNK_SIZE, 0);
size_t readSize = CHUNK_SIZE;
while(fin.read(buffer.data(), readSize)) {
CryptoPP::SecByteBlock tmp(reinterpret_cast<const unsigned char*>(buffer.data()), CHUNK_SIZE);
decryptor.Put(tmp, tmp.size());
std::fill(buffer.begin(), buffer.end(), 0);
size_t bytesRemaining = fileLen - fin.tellg();
readSize = CHUNK_SIZE < bytesRemaining ? CHUNK_SIZE : bytesRemaining;
if (!readSize)
break;
}
Note that I have tried this line as both CryptoPP::SecByteBlock tmp(reinterpret_cast<const unsigned char*>(buffer.data()), CHUNK_SIZE);
and CryptoPP::SecByteBlock tmp(reinterpret_cast<const unsigned char*>(buffer.data()), readSize); (Using CHUNK_SIZE pads with 0)
I have an encrypted file whose first 8 bytes are the file length, the subsequent 16 bytes are the initialization vector, and the remaining data is the data of interest...
I think I'll just cut to the chase and show you an easier way to do things with the Crypto++ library. The key and iv are hard-coded to simplify the code. The derivation is not needed for the example. By the way, if Python has it, you should consider using HKDF for derivation of the AES key and iv. HKDF has provable security properties.
Crypto++ handles the chunking for you. You don't need to explicitly perform it; see Pumping Data on the Crypto++ wiki.
I believe the Python code has a potential padding oracle due to the use of CBC mode without a MAC. You might consider adding a MAC or using an Authenticated Encryption mode of operation (a GCM sketch follows the example below).
#include "cryptlib.h"
#include "filters.h"
#include "osrng.h"
#include "modes.h"
#include "files.h"
#include "aes.h"
#include "hex.h"
#include <string>
#include <iostream>
const std::string infilePath = "test.dat";
int main(int argc, char* argv[])
{
using namespace CryptoPP;
const byte key[16] = {
1,2,3,4, 1,2,3,4, 1,2,3,4, 1,2,3,4
};
const byte iv[16] = {
8,7,6,5, 8,7,6,5, 8,7,6,5, 8,7,6,5
};
const byte data[] = // 70 characters
"Now is the time for all good men to come to the aide of their country.";
HexEncoder encoder(new FileSink(std::cout));
std::string message;
// Show parameters
{
std::cout << "Key: ";
StringSource(key, 16, true, new Redirector(encoder));
std::cout << std::endl;
std::cout << "IV: ";
StringSource(iv, 16, true, new Redirector(encoder));
std::cout << std::endl;
std::cout << "Data: ";
StringSource(data, 70, true, new Redirector(encoder));
std::cout << std::endl;
}
// Write sample data
{
FileSink outFile(infilePath.c_str());
word64 length = 8+16+70;
outFile.PutWord64(length, BIG_ENDIAN_ORDER);
outFile.Put(iv, 16);
CBC_Mode<AES>::Encryption enc;
enc.SetKeyWithIV(key, 16, iv, 16);
StringSource(data, 70, true, new StreamTransformationFilter(enc, new Redirector(outFile)));
}
// Read sample data
{
FileSource inFile(infilePath.c_str(), true /*pumpAll*/);
word64 read, l;
read = inFile.GetWord64(l, BIG_ENDIAN_ORDER);
if (read != 8)
throw std::runtime_error("Failed to read length");
SecByteBlock v(16);
read = inFile.Get(v, 16);
if (read != 16)
throw std::runtime_error("Failed to read iv");
CBC_Mode<AES>::Decryption dec;
dec.SetKeyWithIV(key, 16, v, 16);
SecByteBlock d(l-8-16);
StreamTransformationFilter f(dec, new ArraySink(d, d.size()));
inFile.CopyTo(f);
f.MessageEnd();
std::cout << "Key: ";
StringSource(key, 16, true, new Redirector(encoder));
std::cout << std::endl;
std::cout << "IV: ";
StringSource(v, 16, true, new Redirector(encoder));
std::cout << std::endl;
std::cout << "Data: ";
StringSource(d, d.size(), true, new Redirector(encoder));
std::cout << std::endl;
message.assign(reinterpret_cast<const char*>(d.data()), d.size());
}
std::cout << "Message: ";
std::cout << message << std::endl;
return 0;
}
Running the program results in:
$ g++ test.cxx ./libcryptopp.a -o test.exe
$ ./test.exe
Key: 01020304010203040102030401020304
IV: 08070605080706050807060508070605
Data: 4E6F77206973207468652074696D6520666F7220616C6C20676F6F64206D656E20746F2063
6F6D6520746F207468652061696465206F6620746865697220636F756E7472792E
Key: 01020304010203040102030401020304
IV: 08070605080706050807060508070605
Data: 4E6F77206973207468652074696D6520666F7220616C6C20676F6F64206D656E20746F2063
6F6D6520746F207468652061696465206F6620746865697220636F756E7472792E
Message: Now is the time for all good men to come to the aide of their country.
Prior to this Stack Overflow question, the Crypto++ library did not provide PutWord64 and GetWord64. Interop with libraries like Python is important to the project, so they were added at Commit 6d69043403a9 and Commit 8260dd1e81c3. They will be part of the Crypto++ 8.3 release.
If you are working with Crypto++ 8.2 or below, you can perform the 64-bit read with the following code.
word64 length;
word32 h, l;
inFile.GetWord32(h, BIG_ENDIAN_ORDER);
inFile.GetWord32(l, BIG_ENDIAN_ORDER);
length = ((word64)h << 32) | l;
Here is the data file used for this example.
$ hexdump -C test.dat
00000000 00 00 00 00 00 00 00 5e 08 07 06 05 08 07 06 05 |.......^........|
00000010 08 07 06 05 08 07 06 05 b0 82 79 ee a6 d8 8a 0e |..........y.....|
00000020 a6 b3 a4 7e 63 bd 9a bc 0e e4 b6 be 3e eb 36 64 |...~c.......>.6d|
00000030 72 cd ba 91 8d e0 d3 c5 cd 64 ae c0 51 de a7 c9 |r........d..Q...|
00000040 1e a8 81 6d c0 d5 42 2a 17 5a 19 62 1e 9c ab fd |...m..B*.Z.b....|
00000050 21 3d b0 8f e2 b3 7a d4 08 8d ec 00 e0 1e 5e 78 |!=....z.......^x|
00000060 56 6d f5 3e 8c 5f fe 54 |Vm.>._.T|
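On the authenticated-encryption suggestion above: a minimal GCM sketch with Crypto++, with the key and iv hard-coded as before. This replaces CBC plus padding entirely, so padding errors cannot occur and tampering is detected:
#include "cryptlib.h"
#include "filters.h"
#include "gcm.h"
#include "aes.h"
#include <string>

int main()
{
    using namespace CryptoPP;

    const byte key[16] = { 1,2,3,4, 1,2,3,4, 1,2,3,4, 1,2,3,4 };
    const byte iv[12]  = { 8,7,6,5, 8,7,6,5, 8,7,6,5 };  // 96-bit IV is conventional for GCM

    std::string plain = "Now is the time...", cipher, recovered;

    GCM<AES>::Encryption enc;
    enc.SetKeyWithIV(key, sizeof(key), iv, sizeof(iv));
    // The filter appends a 16-byte authentication tag to the ciphertext.
    StringSource(plain, true,
        new AuthenticatedEncryptionFilter(enc, new StringSink(cipher)));

    GCM<AES>::Decryption dec;
    dec.SetKeyWithIV(key, sizeof(key), iv, sizeof(iv));
    // Throws HashVerificationFilter::HashVerificationFailed on a bad tag.
    StringSource(cipher, true,
        new AuthenticatedDecryptionFilter(dec, new StringSink(recovered)));

    return 0;
}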
Looks like the issue had to do with padding. I instead switched to using a StringSource, which only worked once I specified CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme::ZEROS_PADDING as an argument to the StreamTransformationFilter.
Here is the working code for anyone that is interested:
void Crypto::decryptFileAES(CryptoPP::SecByteBlock key, std::string infilePath) {
std::ifstream fin(infilePath, std::ifstream::binary);
// Get the length of the file in bytes
fin.seekg (0, fin.end);
size_t fileLen = fin.tellg();
fin.seekg (0, fin.beg);
// First 8 bytes is the file size
std::vector<char> fileSizeVec(8);
fin.read(fileSizeVec.data(), fileSizeVec.size());
// Read the first 16 bytes to get the initialization vector
std::vector<char> ivBuffer(16);
fin.read(ivBuffer.data(), ivBuffer.size());
CryptoPP::SecByteBlock iv(reinterpret_cast<const unsigned char*>(ivBuffer.data()), ivBuffer.size());
// Create a CBC decryptor
CryptoPP::CBC_Mode<CryptoPP::AES>::Decryption decryption;
decryption.SetKeyWithIV(key, key.size(), iv); // key.size(), not sizeof(key): sizeof gives the size of the SecByteBlock object, not the key length
size_t bytesRemaining = fileLen - fin.tellg();
std::vector<char> buffer(bytesRemaining);
if(!fin.read(buffer.data(), bytesRemaining)) {
throw std::runtime_error("Unable to read file");
}
std::string decryptedText;
CryptoPP::StringSource ss(reinterpret_cast<const unsigned char*>(buffer.data()), buffer.size(), true,
new CryptoPP::StreamTransformationFilter(decryption,
new CryptoPP::StringSink(decryptedText), CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme::ZEROS_PADDING));
std::cout << decryptedText << std::endl;
}
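One caveat with ZEROS_PADDING: if the plaintext itself can end in zero bytes, trailing padding is indistinguishable from data. Since the file's first 8 bytes carry a length, the decrypted text could be trimmed to it. A sketch only; it would sit just before the final std::cout, and it assumes the length is stored big-endian and counts plaintext bytes only (both worth verifying against whatever produced the file):
#include <cstdint>

// Interpret the 8-byte header as a big-endian length (an assumption)
// and trim the zero padding off the decrypted text.
uint64_t originalLen = 0;
for (size_t i = 0; i < fileSizeVec.size(); ++i)
    originalLen = (originalLen << 8) | static_cast<unsigned char>(fileSizeVec[i]);
if (decryptedText.size() > originalLen)
    decryptedText.resize(static_cast<size_t>(originalLen));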
Can anyone tell me how to send hexadecimal values stored in an array unchanged to the client?
Whenever I send a char array of hexadecimal values to the client via the Boost server, it converts them to ASCII/junk (can't decide which it is).
For example, I am trying to send
"24 bb ff 0f 02 08 01 e0 01 e0 02 08 0f 2d 0f 00 23 61"
in a char array via the Boost asio server.
Edit:
Client is receiving
"32 34 62 62 66 66 30 66 30 32 30 38 30 31 65 30 30 31 65 30 30 32 30 38 30 66 32 64 30 66 30 30 32 33 36 31"
This is the piece of code I am using:
char Sendingdata_[512];
string finalHex = "24bbff0f020801e001e002080f2d0f002361";
strcpy(Sendingdata_, finalHex.c_str());
boost::asio::async_write(socket_, boost::asio::buffer(Sendingdata_,bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error));
Should I use different buffers or some other way to send hexadecimal values?
If the code is attempting to send more than 37 bytes, then it will be sending uninitialized memory. If it is attempting to send more than 512 bytes, then it is reading beyond the end of the buffer. In either case, memory trash patterns may be sent.
The Sendingdata_ buffer is 512 bytes, but only 37 of those bytes have been initialized.
char Sendingdata_[512]; // 512 uninitialized values.
std::string finalHex = string-literal; // 36 ASCII characters + null termination.
strcpy(Sendingdata_, finalHex.c_str()); // 37 characters copied
boost::asio::async_write(..., boost::asio::buffer(Sendingdata_, bytes_transferred), ...);
The finalHex string is being provided a string literal. For example, assigning a string the string literal "2400bb" will store the '2', '4', '0', '0', 'b', and 'b' ASCII characters.
std::string ascii = "2400bb";
assert(ascii.length() == 6);
assert('2' == ascii[0]);
assert('4' == ascii[1]);
assert('0' == ascii[2]);
assert('0' == ascii[3]);
assert('b' == ascii[4]);
assert('b' == ascii[5]);
Consider using a vector, providing the numeric value in hex notation:
std::vector<unsigned char> hex = { 0x24, 0x00, 0xbb };
assert(hex.size() == 3);
assert(0x24 == hex[0]);
assert(0x00 == hex[1]);
assert(0xbb == hex[2]);
Alternatively, one could use std::string by providing the \x escape sequence to indicate that the subsequent value is hex. However, one may need to perform explicit casting when interpreting the values, and use constructors that handle the null character within the string:
std::string hex("\x24\x00\xbb", 3);
// alternatively: std::string hex{ 0x24, 0x00, static_cast<char>(0xbb) };
assert(hex.size() == 3);
assert(0x24 == static_cast<unsigned char>(hex[0]));
assert(0x00 == static_cast<unsigned char>(hex[1]));
assert(0xbb == static_cast<unsigned char>(hex[2]));
Here is an example demonstrating the differences and Asio buffer usage:
#include <cassert>
#include <functional>
#include <iostream>
#include <string>
#include <vector>
#include <boost/asio.hpp>
int main()
{
// String literal.
std::string ascii = "2400bb";
assert(ascii.length() == 6);
assert('2' == ascii[0]);
assert('4' == ascii[1]);
assert('0' == ascii[2]);
assert('0' == ascii[3]);
assert('b' == ascii[4]);
assert('b' == ascii[5]);
// Verify asio buffers.
auto ascii_buffer = boost::asio::buffer(ascii);
assert(ascii.length() == boost::asio::buffer_size(ascii_buffer));
assert(std::equal(
boost::asio::buffers_begin(ascii_buffer),
boost::asio::buffers_end(ascii_buffer),
std::begin(ascii)));
// Hex values.
std::vector<unsigned char> hex = { 0x24, 0x00, 0xbb };
// alternatively: unsigned char hex[] = { 0x24, 0x00, 0xbb };
assert(hex.size() == 3);
assert(0x24 == hex[0]);
assert(0x00 == hex[1]);
assert(0xbb == hex[2]);
// Verify asio buffers.
auto hex_buffer = boost::asio::buffer(hex);
assert(hex.size() == boost::asio::buffer_size(hex_buffer));
assert(std::equal(
boost::asio::buffers_begin(hex_buffer),
boost::asio::buffers_end(hex_buffer),
std::begin(hex),
std::equal_to<unsigned char>()));
// String with hex. As 0x00 is in the string, the string(char*) constructor
// cannot be used.
std::string hex2("\x24\x00\xbb", 3);
// alternatively: std::string hex2{ 0x24, 0x00, static_cast<char>(0xbb) };
assert(hex2.size() == 3);
assert(0x24 == static_cast<unsigned char>(hex2[0]));
assert(0x00 == static_cast<unsigned char>(hex2[1]));
assert(0xbb == static_cast<unsigned char>(hex2[2]));
}
Because you're sending a well-known memory trash pattern (often used: 0xDEADBEEF, 0xBAADF00D, etc.), I'd assume you're reading past the end of a buffer, or perhaps you're dereferencing a stale pointer.
One of the common errors I see people make with ASIO is this:
void foo() {
std::string packet = "hello world";
boost::asio::async_write(socket_, asio::buffer(packet), my_callback);
}
The problem is using a stack-local buffer with an asynchronous call: async_write returns immediately, and then foo returns, so packet is probably gone before the asynchronous operation accesses it.
This is one of the reasons that could lead to you reading the trash pattern instead of your buffer contents, if you're running a debug heap library.
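The usual fix is to give the buffer a lifetime that outlives the asynchronous operation, e.g. by owning it through a shared_ptr that the completion handler captures. A sketch:
#include <memory>
#include <string>
#include <boost/asio.hpp>

void foo(boost::asio::ip::tcp::socket &socket_)
{
    // The shared_ptr keeps the buffer alive until the handler runs.
    auto packet = std::make_shared<std::string>("hello world");
    boost::asio::async_write(socket_, boost::asio::buffer(*packet),
        [packet](const boost::system::error_code &ec, std::size_t /*bytes*/)
        {
            // `packet` is released here, after the write has completed.
        });
}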
I was trying to send some length-prefixed data to the server. I have tried the code and solutions posted by other people on Stack Overflow, and I am still looking at how things actually work with TCP. I don't know much about network programming, though I am aware of the theoretical concepts, so I am writing down what I have tried so far; based on that, I have some questions.
We are using char Buffer[200] = "This is data" on the client side to send string/char data to the server (with send() on the client and recv() on the server). Up to here it's okay, but what if I need to send a variable-length message with its length information? How can I encode the length info into the message?
For example: 0C 54 68 69 73 20 69 73 20 64 61 74 61 07 46 72 6f 6d 20 6d 65
(length) T h i s i s d a t a (length) F r o m m e
How can I interpret these two messages separately from the TCP stream at the server side?
I don't know how the length information can be sent separately. Alternatively, perhaps someone can look at my test case (given below in the edit) to verify the length information of the string.
EDIT: It seems okay, but I just need to verify a bit about the prefix length. I am sending 20 bytes ("This is data From me") to the server. The received length is 4 bytes (I don't know what's inside; I need to verify that the 4 bytes of length I received contain 0000 0000 0000 0000 0000 0000 0001 0100). To verify it, I thought of shifting the length info right by 2 bits, so it should now look like 0000 0000 0000 0000 0000 0000 0000 0101; in that case I should get only 5 characters, i.e. "This ". Do you know how I can verify this at the server side?
Client Code
int bytesSent;
int bytesRecv = SOCKET_ERROR;
char sendbuf[200] = "This is data From me";
int nBytes = 200, nLeft, idx;
nLeft = nBytes;
idx = 0;
uint32_t varSize = strlen (sendbuf);
bytesSent = send(ConnectSocket,(char*)&varSize, 4, 0);
assert (bytesSent == sizeof (uint32_t));
std::cout<<"length information is in:"<<bytesSent<<"bytes"<<std::endl;
// code to make sure all data has been sent
while (nLeft > 0)
{
bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
if (bytesSent == SOCKET_ERROR)
{
std::cerr<<"send() error: " << WSAGetLastError() <<std::endl;
break;
}
nLeft -= bytesSent;
idx += bytesSent;
}
bytesSent = send(ConnectSocket, sendbuf, strlen(sendbuf), 0);
printf("Client: Bytes sent: %ld\n", bytesSent);
Server Code
uint32_t nlength;
int length_received = recv(m_socket,(char*)&nlength, 4, 0);
char *recvbuf = new char[nlength];
int byte_recived = recv(m_socket, recvbuf, nlength, 0);
Thanks
If you need to send variable-length data, you need to send the length of that data before the data itself.
In your code snippet above, you appear to be doing the opposite:
while (nLeft > 0)
{
bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
// [...]
}
bytesSent = send(ConnectSocket, sendbuf, strlen(sendbuf), 0);
Here you send the string first, and then the length. How is the receiver going to be able to interpret that? By the time they get the length, they have already pulled the string off the socket.
Instead, send the length first, and make sure you're explicit about the size of the size field:
const uint32_t varSize = strlen (sendbuf);
bytesSent = send(ConnectSocket, (const char*)&varSize, sizeof(varSize), 0);
assert (bytesSent == sizeof (uint32_t));
while (nLeft > 0)
{
bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
// [...]
}
You might also consider not sending variable-length data at all. Fixed-width binary protocols are easier to parse on the receiving side in general. You could always send string data in a fixed-width field (say, 20 chars) and pad it out with spaces or \0's. This does waste some space on the wire, at least in theory. If you are smart about the sizes of the fixed-width fields and what you send in them, you can be economical with this space in many cases.
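On the receiving side, the same discipline applies in reverse: read exactly 4 bytes of length, then exactly that many bytes of payload, looping over recv() since TCP may hand you the data in pieces. A sketch using the names from the server code above; recvAll is a hypothetical helper, and the length is assumed to be in host byte order, as the client sent it raw:
#include <vector>
#include <cstdint>

// Receive exactly `len` bytes, looping over partial reads.
// Returns false on error or if the peer closed the connection.
bool recvAll(SOCKET s, char *buf, int len)
{
    int total = 0;
    while (total < len)
    {
        int ret = recv(s, buf + total, len - total, 0);
        if (ret <= 0)
            return false;
        total += ret;
    }
    return true;
}

// Framed read: length prefix first, then the payload.
uint32_t nlength;
if (recvAll(m_socket, (char*)&nlength, sizeof(nlength)))
{
    std::vector<char> recvbuf(nlength);
    if (recvAll(m_socket, recvbuf.data(), (int)nlength))
    {
        // recvbuf now holds one complete message of nlength bytes
    }
}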
Hi everyone, I have an issue while reading binary data from a binary file, as follows.
File Content:
D3 EE EE 00 00 01 D7 C4 D9 40
char * afpContentBlock = new char[10];
ifstream inputStream(sInputFile, ios::in|ios::binary);
if (inputStream.is_open())
{
inputStream.read(afpContentBlock, 10);
int n = sizeof(afpContentBlock)/sizeof(afpContentBlock[0]); // Prints 4: the size of a pointer, not of the data read
// Here I would like to check every byte, but no matter how I convert the
// char[] afpContentBlock, it always cuts at the first 0x00 byte.
}
I know this happens because of the 0x00 byte. Is there a way to manage it somehow?
I have tried writing it out with an ofstream object, and that works fine since it writes the whole 10 bytes. Anyway, I would like to loop through the whole byte array to check each byte's value.
Thank you very much.
It's much easier to just get how many bytes you read from the ifstream like so:
if (inputStream.is_open())
{
inputStream.read(afpContentBlock, 10);
int bytesRead = (int)inputStream.gcount();
for( int i = 0; i < bytesRead; i++ )
{
// check each byte however you want
// access with afpContentBlock[i]
}
}
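To actually inspect the bytes, cast each char to unsigned char before printing; otherwise values like 0xD3 get sign-extended, and C-string functions will stop at the first 0x00. For example, inside the loop above:
#include <cstdio>

for (int i = 0; i < bytesRead; i++)
{
    unsigned char byte = static_cast<unsigned char>(afpContentBlock[i]);
    printf("%02X ", byte);   // prints: D3 EE EE 00 00 01 D7 C4 D9 40
}
printf("\n");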