As a test, I'm writing a series of byte arrays to a TCP socket from an Android application, and reading them in a C++ application.
Java
InetAddress address = InetAddress.getByName("192.168.0.2");
Socket socket = new Socket(address, 1300);
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
...
if (count == 0) {
    out.write(first, 0, first.length);
} else if (count == 1) {
    out.write(second, 0, second.length);
}
C++
do {
    iResult = recv(ClientSocket, recvbuf, 3, 0);
    for (int i = 0; i < 3; i++) {
        std::cout << (int)(signed char)recvbuf[i] << std::endl;
    }
} while (iResult > 0);
As it stands, on the first receipt, recvbuf[2] = -52, which I assume to be a junk value, as the output stream has not yet written the second byte array by the time I've received the first segment.
However, when I pause after the ListenSocket has accepted the connection:
ClientSocket = accept(ListenSocket, NULL, NULL);
std::cin.ignore();
...giving the sender time to do both writes to the stream, recvbuf[2] = 3, which is the first value of the second written byte array.
If I ultimately want to send and receive a constant stream of discrete arrays, how can I determine after I've received the last value of one array, whether the next value in the buffer is the first value of the next array or whether it's a junk value?
I've considered that UDP is more suitable for sending a series of discrete data sets, but I need the reliability of TCP. I imagine that TCP is used this way regularly, but it's not clear to me how to mitigate this issue.
EDIT:
In the actual application for which I'm writing this test, I do implement length prefixing. I don't think that's relevant though; even if I know I'm at the end of a data set, I need to know whether the next value on the buffer is junk or the beginning of the next set.
for (int i = 0; i < 3; i++)
The problem is here. It should be:
for (int i = 0; i < iResult; i++)
You're printing out data that you may not have received. This is the explanation of the 'junk value'.
You can't assume that recv() fills the buffer.
You must also check iResult for both -1 and zero before this loop, and take the appropriate actions, which are different in each case.
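Putting those checks together, a minimal corrected loop might look like this (keeping the question's variable names; the error branches are only a sketch):

do {
    iResult = recv(ClientSocket, recvbuf, 3, 0);
    if (iResult == 0) {
        break; // the peer closed the connection cleanly
    }
    if (iResult < 0) {
        // recv() failed; inspect the error (WSAGetLastError() on Winsock,
        // errno on POSIX) and bail out or retry as appropriate
        break;
    }
    // only print the bytes we actually received
    for (int i = 0; i < iResult; i++) {
        std::cout << (int)(signed char)recvbuf[i] << std::endl;
    }
} while (iResult > 0);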
As you point out, TCP is stream-based, so there's no built-in way to say "here's a specific chunk of data". What you want to do is add your own "message framing". A simple way to do that is called "length prefixing", where you first send the size of the data packet and then the packet itself. Then the receiver knows when it has gotten all the data (a sketch follows the lists below).
Sending side
send length of packet (as a known size -- say a 32-bit int)
send packet data
Receiving side
read length of packet
read that many bytes of data
process fully-received packet
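Here is a minimal sketch of the receiving side in C++ (the names recv_all and recv_message are made up for illustration, and the usual socket headers are omitted). It assumes the sender writes a 4-byte big-endian length first, which is exactly what Java's DataOutputStream.writeInt() produces:

#include <cstdint>
#include <vector>

// Read exactly len bytes, looping over partial recv() results.
// Returns false on error or if the peer closes the connection early.
bool recv_all(int sock, char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        int n = recv(sock, buf + got, len - got, 0);
        if (n <= 0) return false; // error or connection closed
        got += n;
    }
    return true;
}

// Receive one length-prefixed message: a 4-byte big-endian length, then the payload.
bool recv_message(int sock, std::vector<char> &payload) {
    unsigned char hdr[4];
    if (!recv_all(sock, reinterpret_cast<char *>(hdr), 4)) return false;
    uint32_t len = (uint32_t(hdr[0]) << 24) | (uint32_t(hdr[1]) << 16)
                 | (uint32_t(hdr[2]) << 8)  |  uint32_t(hdr[3]);
    payload.resize(len);
    return recv_all(sock, payload.data(), len);
}

On the Java side, the sender would call out.writeInt(first.length) before out.write(first, 0, first.length).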
Check out this article for more information: http://blog.stephencleary.com/2009/04/message-framing.html
Related
I am trying to send data from a vector over a TCP socket.
I'm working with a vector that I fill with values from 0 to 4999, and then send it to the socket.
Client side, I'm receiving the data into a vector, then I copy its contents to another vector until I have received all the data from the server.
The issue I'm facing is that when I receive my data, sometimes I get all of it, and sometimes I only receive correct data from 0 to 1625 and then trash data until the end. I have even received, for example, correct data from 0 to 2600, then trash from 2601 to 3500, and finally correct data again from 3501 to 4999.
This is the server side :
vector<double> values2;
for (int i = 0; i < 5000; i++)
    values2.push_back(i);
skt.sendmsg(&values2[0], values2.size() * sizeof(double));
The function sendmsg :
void Socket::sendmsg(const void *buf, size_t len) {
    int bytes = -1;
    bytes = send(m_csock, buf, len, MSG_CONFIRM);
    cout << "Bytes sent: " << bytes << endl;
}
Client side :
vector<double> final;
vector<double> msgrcvd(4096);
do {
    bytes += recv(sock, &msgrcvd[0], msgrcvd.size() * sizeof(double), 0);
    cout << "Bytes received: " << bytes << endl;
    // Get rid of the trailing zeros
    while (!msgrcvd.empty() && msgrcvd[msgrcvd.size() - 1] == 0) {
        msgrcvd.pop_back();
    }
    // Insert buffer content into final vector
    final.insert(final.end(), msgrcvd.begin(), msgrcvd.end());
} while (bytes < sizeof(double) * 5000);

// Write the received data in a txt file
for (int i = 0; i < final.size(); i++)
    myfile << final[i] << endl;
myfile.close();
The printed byte counts are correct: the server outputs 40000 when sending the data, and the client also outputs 40000 in total when receiving it.
Removing the trailing zeros and then inserting the content of the buffer into a new vector is not very efficient, but I don't think that's the issue. If you have any clues on how to make it more efficient, that would be great!
I don't really know whether the issue occurs when I send the data or when I receive it, and I also don't really get why I sometimes (rarely) get all the data.
recv receives bytes, and doesn't necessarily wait for all the data that was sent. So you can be receiving part of a double.
Your code works if you receive complete double values, but will fail when you receive part of a value. You should receive your data in a char buffer, then unpack it into doubles. (Possibly converting endianness if the server and client are different.)
#include <array>   // for std::array
#include <cstring> // for memcpy

std::array<char, 1024> msgbuf;
double d;
char data[sizeof(double)];
int carryover = 0;
do {
    int b = recv(sock, &msgbuf[carryover], msgbuf.size() * sizeof(msgbuf[0]) - carryover, 0);
    if (b <= 0)
        break; // error or connection closed; handle appropriately
    bytes += b;
    b += carryover;
    const char *mp = &msgbuf[0];
    // Unpack as many complete doubles as the buffer currently holds
    while (b >= sizeof(double)) {
        char *bp = data;
        for (int i = 0; i < sizeof(double); ++i) {
            *bp++ = *mp++;
        }
        std::memcpy(&d, data, sizeof(double));
        final.push_back(d);
        b -= sizeof(double);
    }
    carryover = b % sizeof(double);
    // Take care of the extra bytes: copy them down to the start of the buffer
    for (int j = 0; j < carryover; ++j) {
        msgbuf[j] = *mp++;
    }
} while (bytes < sizeof(double) * 5000);
This uses type punning from What's a proper way of type-punning a float to an int and vice-versa? to convert the received binary data to a double, and assumes the endianness of the client and server are the same.
Incidentally, how does the receiver know how many values it is receiving? You have a mix of hard coded values (5000) and dynamic values (.size()) in your server code.
Note: code not compiled or tested
TL/DR:
Never, ever send raw in-memory data via a network socket and expect it to be properly received and unpacked on the other side.
Detailed answer:
Networking is built on top of various protocols, and this is for a reason. Once you send something, there is no guarantee your counterparty is on the same OS and the same software version. There is no standard for how primitive types should be encoded at the byte level. There is no restriction on how many intermediate nodes may be involved in the data delivery, and each of your send() calls may traverse a different route. So, you have to formalize the way you send the data; then the other party can be sure what the proper way is to retrieve it from the socket.
Simplest solution: use a header before your data. So, you plan to send 5000 doubles? Then send a DWORD first, which contains 40000 (5k elements, 8 bytes each -> 40k bytes), and push all your 5k doubles right after that. Then your counterparty should read 4 bytes from the socket first, interpret them as a DWORD, and know how many bytes should come next.
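A sketch of that idea on the question's server side (ignoring partial sends for brevity; a real version should loop the way the receive side does):

#include <cstdint>
#include <arpa/inet.h> // htonl

uint32_t payload_bytes = htonl(values2.size() * sizeof(double)); // 40000, in network byte order
send(m_csock, &payload_bytes, sizeof(payload_bytes), 0);
send(m_csock, &values2[0], values2.size() * sizeof(double), 0);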
Next step: you may want to send not only doubles but ints and strings as well. In that case, you have to expand your header so it can indicate (see the sketch after this list):
Total size of further data (so called payload size)
Kind of the data (array of doubles, string, single int etc)
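Such a header might look like this (a sketch; the field sizes and type-tag values are assumptions, and multi-byte fields should be converted to network byte order before sending):

#include <cstdint>

#pragma pack(push, 1)
struct MsgHeader {
    uint32_t payload_size; // bytes of payload that follow this header
    uint16_t payload_type; // e.g. 1 = double array, 2 = int array, 3 = string
};
#pragma pack(pop)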
Advanced solution:
Take a look on ready-to-go solutions:
ProtoBuf https://developers.google.com/protocol-buffers/docs/cpptutorial
Boost.Serialization https://www.boost.org/doc/libs/1_67_0/libs/serialization/doc/index.html
Apache Thrift https://thrift.apache.org
YAS https://github.com/niXman/yas
Happy coding!
I am using 64-bit Ubuntu 16.04 LTS. Like I said, I am attempting to make a TCP socket connection to another device. The program starts by reading data from the socket to initialize the last_recorded_data variable (as seen below, towards the bottom of myStartProcedure()), and I know that this is working exactly as expected. Then the rest of the program starts, which is driven by callbacks. When I make UPDATE_BUFFER_MS something smaller like 8, it fails after a couple of seconds. 8 ms is the desired update rate, but if I make it larger for testing purposes (something like 500), it works a little longer before eventually failing the same way.
The failure is as follows: the device I'm attempting to read from consistently sends data every 8 milliseconds, and within this packet of data, the first few bytes are reserved for telling the client how large the packet is, in bytes. During normal operation, the number of bytes received and the size described by these first few bytes are equal. However, the packet received directly before the read() call starts to block is always 24 bytes short of the expected size, even though its size field still claims the expected size. On the next attempt to get the data, the read() call blocks and upon timeout sets errno to EAGAIN (Resource temporarily unavailable).
I tried communicating with this same device with a Python application and it is not experiencing the same issue. Furthermore, I tried this C++ application on another one of these devices and I'm seeing the same behavior, so I think it's a problem on my end. My code (simplified) is below. Please let me know if you see any obvious errors, thank you!!
#include <string>
#include <unistd.h>
#include <iostream>
#include <stdio.h>
#include <errno.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#define COMM_DOMAIN AF_INET
#define PORT 8008
#define TIMEOUT_SECS 3
#define TIMEOUT_USECS 0
#define UPDATE_BUFFER_MS 8
#define PACKET_SIZE_BYTES_MAX 1200
//
// Global variables
//
// Socket file descriptor
int socket_conn;
// Tracks the timestamp of the last time data was recorded
// The data packet from the TCP connection is sent every UPDATE_BUFFER_MS milliseconds
unsigned long last_process_cycle_timestamp;
// The most recently heard data, cast to a double
double last_recorded_data;
// The number of bytes expected from a full packet
int full_packet_size;
// The minimum number of bytes needed from the packet, as I don't need all of the data
int min_required_packet_size;
// Helper to cast the packet data to a double
union PacketAsFloat
{
    unsigned char byte_values[8];
    double decimal_value;
};
// Simple struct to package the data read from the socket
struct SimpleDataStruct
{
    // Whether or not the struct was properly populated
    bool valid;
    // Some data that we're interested in right now
    double important_data;
    //
    // Other, irrelevant members removed for simplicity
    //
};
// Procedure to read the next data packet
SimpleDataStruct readCurrentData()
{
    SimpleDataStruct data;
    data.valid = false;
    unsigned char socket_data_buffer[PACKET_SIZE_BYTES_MAX] = {0};
    int read_status = read(socket_conn, socket_data_buffer, PACKET_SIZE_BYTES_MAX);
    if (read_status < min_required_packet_size)
    {
        return data;
    }
    //for (int i = 0; i < read_status - 1; i++)
    //{
    //    std::cout << static_cast<int>(socket_data_buffer[i]) << ", ";
    //}
    //std::cout << static_cast<int>(socket_data_buffer[read_status - 1]) << std::endl;
    PacketAsFloat packet_union;
    for (int j = 0; j < 8; j++)
    {
        packet_union.byte_values[7 - j] = socket_data_buffer[j + 252];
    }
    data.important_data = packet_union.decimal_value;
    data.valid = true;
    return data;
}
// This acts as the main entry point
void myStartProcedure(std::string host)
{
    //
    // Code to determine the value for full_packet_size and min_required_packet_size (because it can vary) was removed
    // Simplified version is below
    //
    full_packet_size = some_known_value;
    min_required_packet_size = some_other_known_value;
    //
    // Create socket connection
    //
    if ((socket_conn = socket(COMM_DOMAIN, SOCK_STREAM, 0)) < 0)
    {
        std::cout << "socket_conn heard a bad value..." << std::endl;
        return;
    }
    struct sockaddr_in socket_server_address;
    memset(&socket_server_address, '0', sizeof(socket_server_address));
    socket_server_address.sin_family = COMM_DOMAIN;
    socket_server_address.sin_port = htons(PORT);
    // Create and set timeout
    struct timeval timeout_chars;
    timeout_chars.tv_sec = TIMEOUT_SECS;
    timeout_chars.tv_usec = TIMEOUT_USECS;
    setsockopt(socket_conn, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeout_chars, sizeof(timeout_chars));
    if (inet_pton(COMM_DOMAIN, host.c_str(), &socket_server_address.sin_addr) <= 0)
    {
        std::cout << "Invalid address heard..." << std::endl;
        return;
    }
    if (connect(socket_conn, (struct sockaddr *)&socket_server_address, sizeof(socket_server_address)) < 0)
    {
        std::cout << "Failed to make connection to " << host << ":" << PORT << std::endl;
        return;
    }
    else
    {
        std::cout << "Successfully brought up socket connection..." << std::endl;
    }
    // Sleep for half a second to let the networking setup properly
    sleepMilli(500); // A sleep function I defined elsewhere
    SimpleDataStruct initial = readCurrentData();
    if (initial.valid)
    {
        last_recorded_data = initial.important_data;
    }
    else
    {
        // Error handling
        return;
    }
    //
    // Start the rest of the program, which is driven by callbacks
    //
}
void updateRequestCallback()
{
    unsigned long now_ns = currentTime(); // A function I defined elsewhere that gets the current system time in nanoseconds
    if (now_ns - last_process_cycle_timestamp >= 1000000 * UPDATE_BUFFER_MS)
    {
        SimpleDataStruct current_data = readCurrentData();
        if (current_data.valid)
        {
            last_recorded_data = current_data.important_data;
            last_process_cycle_timestamp = now_ns;
        }
        else
        {
            // Error handling
            std::cout << "ERROR setting updated data, SimpleDataStruct was invalid." << std::endl;
            return;
        }
    }
}
EDIT #1
I should be receiving a certain number of bytes every time, and I would expect the return value of read() to be returning that value as well. However, I just tried changing the value of PACKET_SIZE_BYTES_MAX to be 2048, and the return value of read() is now 2048, when it should be the size of the packet that the device is sending back (NOT 2048). The Python application is also setting the max to be 2048 and its returning packet size is the correct/expected size...
Try commenting out the timeout setup. I never use that on my end and I don't experience the problem you're talking about.
// Create and set timeout
struct timeval timeout_chars;
timeout_chars.tv_sec = TIMEOUT_SECS;
timeout_chars.tv_usec = TIMEOUT_USECS;
setsockopt(socket_conn, SOL_SOCKET, SO_RCVTIMEO, (const char*)&timeout_chars, sizeof(timeout_chars));
To avoid blocking, you can set up the socket as a non-blocking socket and then use select() or poll() to wait for more data. Both of these functions can use the timeout as presented above. However, with a non-blocking socket you must make sure that the read works as expected: in many cases you will get a partial read and have to wait (select() or poll()) again for more data, so the code becomes a bit more complicated.
socket_conn = socket(COMM_DOMAIN, SOCK_STREAM | SOCK_NONBLOCK, 0);
If security is a potential issue, I would also set SOCK_CLOEXEC to prevent a child process from accessing the same socket.
std::vector<struct pollfd> fds;
struct pollfd fd;
fd.fd = socket_conn;
fd.events = POLLIN | POLLPRI | POLLRDHUP; // also POLLOUT for writing
fd.revents = 0; // probably useless... (kernel should clear those)
fds.push_back(fd);
int64_t timeout_chars = TIMEOUT_SECS * 1000 + TIMEOUT_USECS / 1000;
int const r = poll(&fds[0], fds.size(), timeout_chars);
if(r < 0) { ...handle error(s)... }
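Continuing that sketch, the return value then tells you whether it is safe to read (this is only an outline, reusing the question's buffer and size names; a partial packet may still need to be accumulated across calls):

if (r == 0) {
    // timeout expired with no data; decide whether to retry or give up
} else if (fds[0].revents & (POLLIN | POLLPRI)) {
    // data is available, so this read() will not block
    int n = read(socket_conn, socket_data_buffer, PACKET_SIZE_BYTES_MAX);
    // n may still be less than a full packet; accumulate until you have one
}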
Another method, assuming the header size is well defined and never changes, is to read the header first, then use the header information to read the rest of the data. In that case you can keep the blocking socket without any timeout. From your structures I have no idea what that header could be. So... let's first define such a structure:
struct header
{
    char sync[4];  // four bytes indicating a synchronization point
    uint32_t size; // size of packet
    ...            // some other info
};
I put a "sync" field in there. In TCP, people often add such a field so that if you lose track of your position in the stream, you can seek to the next sync point by reading one byte at a time. Frankly, with TCP you should never get a transmission error like that. You may lose the connection, but you never lose data from the stream (i.e. TCP is like a perfect FIFO over your network). That being said, if you are working on mission-critical software, a sync field and also a checksum would be very welcome.
Next we read() just the header. Now we know of the exact size of this packet, so we can use that specific size and read exactly that many bytes in our packet buffer:
struct header hdr;
read(socket_conn, &hdr, sizeof(hdr));
read(socket_conn, packet, hdr.size /* - sizeof(hdr) */);
Obviously, read() may return an error and the size in the header may be defined in big endian (so you need to swap the bytes on x86 processors). But that should get you going.
Also, if the size found in the header includes the number of bytes in the header, make sure to subtract that amount when reading the rest of the packet.
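Since each read() may also return fewer bytes than requested, a small helper that loops until the full count has arrived keeps the header-then-payload scheme honest. A sketch (the name read_exactly is made up; it assumes a blocking socket):

#include <unistd.h>
#include <cerrno>

// Read exactly count bytes, retrying on partial reads and EINTR.
// Returns true on success, false on error or end of stream.
bool read_exactly(int fd, void *buf, size_t count) {
    char *p = static_cast<char *>(buf);
    while (count > 0) {
        ssize_t n = read(fd, p, count);
        if (n < 0) {
            if (errno == EINTR) continue; // interrupted by a signal; retry
            return false;                 // real error (including a timeout via EAGAIN)
        }
        if (n == 0) return false;         // peer closed the connection
        p += n;
        count -= n;
    }
    return true;
}

With that, the two reads above become read_exactly(socket_conn, &hdr, sizeof(hdr)) followed by read_exactly(socket_conn, packet, hdr.size).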
Also, the following is wrong:
memset(&socket_server_address, '0', sizeof(socket_server_address));
You meant to clear the structure with zeroes, not character zero. Although if it connects that means it probably doesn't matter much. Just use 0 instead of '0'.
Can someone please explain, when exactly the read-function I use to get data from a TCP-socket does return?
I use the code below for reading from a measurement system. This system delivers data at a frequency of 15 Hz. READ_TIMEOUT_MS has a value of 200, and READ_BUFFER_SIZE has a value of 40000.
All works fine, but what happens is that read() returns 15 times a second with 1349 bytes read each time.
After reading Pitfall 5 in the following documentation, I would have expected the buffer to be filled up completely:
http://www.ibm.com/developerworks/library/l-sockpit/
Init:
sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0)
{
    goto fail0;
}

struct sockaddr_in server;
server.sin_addr.s_addr = inet_addr(IPAddress);
server.sin_family = AF_INET;
server.sin_port = htons(Port);

if (connect(sock, (struct sockaddr *)&server, sizeof(server)))
{
    goto fail1;
}

struct timeval tv;
tv.tv_sec = READ_TIMEOUT_MS / 1000;
tv.tv_usec = (READ_TIMEOUT_MS % 1000) * 1000;
if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval)))
{
    goto fail1;
}

return true;

fail1:
close(sock);
sock = -1;
fail0:
return false;
Read:
unsigned char buf[READ_BUFFER_SIZE];
int len = read(sock, buf, sizeof(buf));
if (len <= 0)
{
    return NULL;
}

CBinaryDataStream* pData = new CBinaryDataStream(len);
pData->WriteToStream(buf, len);
return pData;
I hope this question is not a duplicate, because I searched for an answer before I asked.
Please let me know if you need some further information.
I suspect that you are using Linux. The manpage for read says:
On success, the number of bytes read is returned (zero indicates end
of file), and the file position is advanced by this number. It is not
an error if this number is smaller than the number of bytes requested;
TCP sockets model a byte stream, not a block- or message-oriented protocol. Calling read on a socket returns as soon as any data is available in the socket's receive buffer. In principle, the data arrives at the network card, is transferred to kernel space, and is processed by the kernel and the network stack; finally, the read syscall copies the data from kernel space to user space.
When reading from a socket, you have to expect an arbitrary number of bytes to be available. A call to read returns as soon as there is anything in the read buffer, or when an error occurs. You cannot predict or assume how many bytes will be available.
In addition, the call can return without reading anything because it was interrupted by the OS (EINTR). This happens quite often when you debug or profile your application. You have to handle this in your application layer.
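If the device really delivers fixed-size 1349-byte records, the usual pattern is to loop until a whole record has accumulated. A sketch in the question's own idiom (RECORD_SIZE is an assumption based on the observed frame size; errno needs <cerrno>):

unsigned char buf[READ_BUFFER_SIZE];
const size_t RECORD_SIZE = 1349; // observed frame size; use the protocol's real value
size_t have = 0;
while (have < RECORD_SIZE) {
    int len = read(sock, buf + have, RECORD_SIZE - have);
    if (len < 0 && errno == EINTR) continue; // interrupted by a signal; retry
    if (len <= 0) return NULL;               // error, timeout, or connection closed
    have += len;
}
// buf now holds one complete record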
The complete receiver path is surprisingly complex when you want to have high data rates or low latency. The kernel and NICs implement many optimizations to e.g. spread load over cores, increase locality and offload processing to the NIC. Here are some additional links you may find interesting:
https://www.lmax.com/blog/staff-blogs/2016/05/06/navigating-linux-kernel-network-stack-receive-path/
https://blog.cloudflare.com/how-to-achieve-low-latency/
http://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data
http://syuu.dokukino.com/2013/05/linux-kernel-features-for-high-speed.html
So this is the first time I'm actually asking a question in here, although I have been using this site for ages!
My problem is a bit tricky. I'm trying to develop a client-server application for sending large files, using UDP with my own error checking and flow control. I've developed a fully-functioning server and client: the client requests a specific file, and the server starts sending it. The file is read in parts into a buffer to avoid having to read small bits of the file every time a packet is sent, thus saving processing time. Packets consist of 1400 bytes of actual data + a header of 28 bytes (sequence numbers, ack numbers, checksum etc.).
So I had the basics down, a simple stop-and-wait protocol. Send packet and receive ack, before sending next packet.
To be able to implement a smarter flow control algorithm, for starters with just some windowing, I have to run the sending-part and receiving-ack part in two different threads. Now here's where I got into problems. This is my first time working with threads, so please bear with me.
My problem is that the file written from the packets on the client side is corrupt. When testing with a small jpg file, the file is only corrupt 50% of the time; when testing with an MP4 file, it's always corrupt! So I guess maybe the threading somehow rearranges the order in which the packets are sent? I use sequence numbers, so the problem must occur before assigning the sequence number to the packets...
I know for sure that the part where I split up the file is correct, and also the part where I reassemble it on the client side, since I tested both before trying to implement the threading. It should also be noted that I copied the exact sending part of the code into the sending thread, and it worked perfectly before being put into a thread. This is also why I'm just posting the threading part of my code, since this is clearly what is creating the problem (and since the entire code of the project would take up a loooot of space).
My sending thread code:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condition_var = PTHREAD_COND_INITIALIZER;

static void *send_thread(void *)
{
    if (file.is_open()) {
        while (!file.reachedEnd()) {
            pthread_mutex_lock(&mutex);
            if (seq <= upperwindow) {
                int blocksize = file.getNextBlocksize();
                senddata = new unsigned char[blocksize + 28];

                Packet to_send;
                to_send.data = new char[blocksize];
                to_send.sequenceNumber = seq;
                to_send.ackNumber = 0;
                to_send.type = 55; // DATA

                file.readBlock(*to_send.data);
                createPacket(senddata, to_send, blocksize + 28);

                if (server.sendToClient(reinterpret_cast<char*>(senddata), blocksize + 28) == -1)
                    perror("sending failed");

                incrementSequenceNumber(seq);

                /* free memory */
                delete [] to_send.data;
                delete [] senddata;
            }
            pthread_mutex_unlock(&mutex);
        }
        pthread_exit(NULL);
    } else {
        perror("file opening failed!");
        pthread_exit(NULL);
    }
}
My receiving ack thread code:
static void *wait_for_ack_thread(void *)
{
    while (!file.reachedEnd()) {
        Packet ack;
        if (server.receiveFromClient(reinterpret_cast<char*>(receivedata), 28) == -1) {
            perror("error receiving ack");
        } else {
            getPacket(receivedata, ack, 28);
            pthread_mutex_lock(&mutex);
            incrementSequenceNumber(upperwindow);
            pthread_mutex_unlock(&mutex);
        }
    }
    pthread_exit(NULL);
}
All comments are very much appreciated! :)
EDIT:
Added code of the readBlock function:
void readBlock(char &in)
{
    memcpy(&in, buffer + block_position, blocksize);
    block_position = block_position + blocksize;

    if (block_position == buffersize) {
        buf_position++;
        if (buf_position == buf_reads) {
            buffersize = filesize % buffersize;
        }
        fillBuffer();
        block_position = 0;
    }

    if (blocksize < MAX_DATA_SIZE) {
        reached_end = true;
        return;
    }

    if ((buffersize - block_position) < MAX_DATA_SIZE) {
        blocksize = buffersize % blocksize;
    }
}
Create an array that represents the status of the communication.
0 means unsent, or sent and receiver reported error. 1 means sending. 2 means sent, and ack gotten.
Allocate this array, and guard access to it with a mutex.
The sending thread keeps two pointers into the array -- "has been sent up to" and "should send next". These are owned by the sending thread.
The ack thread simply gets ack packets, locks the array, and does the transition on the state.
The sending thread locks the array, checks if it can advance the "has been sent up to" pointer (or if it should resend old stuff). If it notices an error, it reduces the "should be sent next" pointer to point at it.
It then sees if it should send stuff next. If it should, it marks the node as "being sent", unlocks the array, and sends it.
If the sending thread did no work, and found nothing to do, it goes to sleep on a timeout, and possibly a "kick awake" by the ack thread.
Now, note that the client can get the packets sent by this in the wrong order, unless you limit it to having 1 packet in transit.
The connection status array does not have to be a literal array, but it is easier if you start with that and optimize later.
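A minimal sketch of the state array and the ack-side transition, using the question's pthreads (names like TOTAL_PACKETS and on_ack are placeholders, not part of the question's code):

#include <pthread.h>

enum PacketState { UNSENT = 0, SENDING = 1, ACKED = 2 };

// one slot per packet in the file, guarded by state_mutex
static PacketState packet_state[TOTAL_PACKETS]; // TOTAL_PACKETS: placeholder
static pthread_mutex_t state_mutex = PTHREAD_MUTEX_INITIALIZER;

// called by the ack thread for each ack packet received
static void on_ack(int seq) {
    pthread_mutex_lock(&state_mutex);
    if (packet_state[seq] == SENDING)
        packet_state[seq] = ACKED; // transition: sent -> acknowledged
    pthread_mutex_unlock(&state_mutex);
}

The sending thread takes the same mutex, advances its "has been sent up to" pointer past ACKED entries, and picks the next UNSENT slot to mark SENDING before it releases the lock and sends.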
On the receiving end, you have to pay attention to the sequence number, as the packets can get there out of sequence. To test this, write a server that sends the packets in the wrong order on purpose, and ensure that the client manages to stitch it together properly.
I'm sending large data (well… 1 MB) via a socket, but I don't know why the send action is blocking the program and never ends. Small sends run perfectly, and I couldn't find where the problem is here. Can anyone help me, please?
Thank you in advance for any help you can provide.
int liResult = 1;
int liConnection = 0;
int liSenderOption = 1;

struct addrinfo laiSenderAddrInfo;
struct addrinfo *laiResultSenderAddrInfo;
memset(&laiSenderAddrInfo, 0, sizeof(laiSenderAddrInfo));
laiSenderAddrInfo.ai_socktype = SOCK_STREAM;
laiSenderAddrInfo.ai_flags = AI_PASSIVE;

liResult = getaddrinfo(_sIp.c_str(), _sPort.c_str(), &laiSenderAddrInfo, &laiResultSenderAddrInfo);
if (liResult > -1)
{
    liConnection = socket(laiResultSenderAddrInfo->ai_family, SOCK_STREAM, laiResultSenderAddrInfo->ai_protocol);
    liResult = liConnection;
    if (liConnection > -1)
    {
        setsockopt(liConnection, SOL_SOCKET, SO_REUSEADDR, &liSenderOption, sizeof(liSenderOption));
        liResult = connect(liConnection, laiResultSenderAddrInfo->ai_addr, laiResultSenderAddrInfo->ai_addrlen);
    }
}

size_t lBufferSize = psText->length();
long lBytesSent = 1;
unsigned long lSummedBytesSent = 0;

while (lSummedBytesSent < lBufferSize and lBytesSent > 0)
{
    lBytesSent = send(liConnection, psText->c_str() + lSummedBytesSent, lBufferSize - lSummedBytesSent, MSG_NOSIGNAL);
    if (lBytesSent > 0)
    {
        lSummedBytesSent += lBytesSent;
    }
}
Check the buffer size; you can do so by following this answer:
How to find the socket buffer size of linux
In my case, the values are
Minimum = 4096 bytes ~ 4KB
Default = 16384 bytes ~ 16 KB
Maximum = 4022272 bytes ~ 3.835 MB
You can tweak the values net.core.rmem_max and net.core.wmem_max in /etc/sysctl.conf to increase the socket buffer size and reload with sysctl -p.
Source: http://www.runningunix.com/2008/02/increasing-socket-buffer-size-in-linux/
The send() call blocks until all of the data has been sent or buffered. If the program at the other end of the socket isn't reading and thus there is no flow of data, the write buffer at your end will fill up and send() will block. Chances are that when you tried to send a smaller amount of data it fit into the buffer.
See also this answer.
For TCP, the kernel has a fixed-size buffer in which it stores unsent data. The size of this buffer is related to the current window size of the TCP session. Once this buffer is full, a blocking send() will block (a non-blocking one would fail with EWOULDBLOCK). This is a TCP flow control mechanism which prevents you from trying to send data faster than the receiver can consume it, while at the same time providing automatic resending of lost data. The default window can be as small as 64K but can grow larger for high-latency, high-bandwidth networks.
What you probably need to do is break the data up into smaller send blocks and then ensure you have a flow-off mechanism for when your send buffer is full.
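For example, a blocking send() can be bounded by first waiting for the socket to become writable, so a stalled receiver shows up as a timeout instead of an indefinite hang. A sketch (wait_writable is a made-up helper name):

#include <sys/select.h>

// Wait until fd is writable or the timeout expires.
// Returns true if send() can make progress now.
bool wait_writable(int fd, int timeout_secs) {
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = { timeout_secs, 0 };
    return select(fd + 1, NULL, &wfds, NULL, &tv) > 0;
}

In the loop above, calling wait_writable(liConnection, 5) before each send() lets you detect and handle a receiver that has stopped reading.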