I'm sending large data (well… 1 MB) over a socket, but I don't know why the send call blocks the program and never returns. Small sends work perfectly, and I can't find where the problem is. Can anyone help me, please?
Thank you in advance for any help you can provide.
int liResult = 1;
int liConnection = 0;
int liSenderOption = 1;
struct addrinfo laiSenderAddrInfo;
struct addrinfo *laiResultSenderAddrInfo;

memset(&laiSenderAddrInfo, 0, sizeof(laiSenderAddrInfo));
laiSenderAddrInfo.ai_socktype = SOCK_STREAM;
laiSenderAddrInfo.ai_flags = AI_PASSIVE;

liResult = getaddrinfo(_sIp.c_str(), _sPort.c_str(), &laiSenderAddrInfo, &laiResultSenderAddrInfo);
if (liResult == 0) // getaddrinfo() returns 0 on success
{
    liConnection = socket(laiResultSenderAddrInfo->ai_family, SOCK_STREAM, laiResultSenderAddrInfo->ai_protocol);
    liResult = liConnection;
    if (liConnection > -1)
    {
        setsockopt(liConnection, SOL_SOCKET, SO_REUSEADDR, &liSenderOption, sizeof(liSenderOption));
        liResult = connect(liConnection, laiResultSenderAddrInfo->ai_addr, laiResultSenderAddrInfo->ai_addrlen);
    }
}

size_t lBufferSize = psText->length();
long lBytesSent = 1;
unsigned long lSummedBytesSent = 0;

while (lSummedBytesSent < lBufferSize and lBytesSent > 0)
{
    lBytesSent = send(liConnection, psText->c_str() + lSummedBytesSent, lBufferSize - lSummedBytesSent, MSG_NOSIGNAL);
    if (lBytesSent > 0)
    {
        lSummedBytesSent += lBytesSent;
    }
}
Check the socket buffer size; you can do so by following this answer:
How to find the socket buffer size of linux
In my case, the values are:
Minimum = 4096 bytes ~ 4 KB
Default = 16384 bytes ~ 16 KB
Maximum = 4022272 bytes ~ 3.835 MB
You can tweak the values net.core.rmem_max and net.core.wmem_max in /etc/sysctl.conf to increase the socket buffer size and reload with sysctl -p.
Source: http://www.runningunix.com/2008/02/increasing-socket-buffer-size-in-linux/
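For a single connection you can also request a larger send buffer per socket with setsockopt(SO_SNDBUF); the kernel clamps the request to net.core.wmem_max. A minimal sketch (the helper name is mine):

#include <stdio.h>
#include <sys/socket.h>

// Sketch: ask for a bigger per-socket send buffer. Linux reports back
// twice the requested value because it doubles it for bookkeeping.
bool grow_send_buffer(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) == -1)
        return false;
    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len);
    printf("send buffer is now %d bytes\n", actual);
    return true;
}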
The send() call blocks until all of the data has been sent or buffered. If the program at the other end of the socket isn't reading and thus there is no flow of data, the write buffer at your end will fill up and send() will block. Chances are that when you tried to send a smaller amount of data it fit into the buffer.
See also this answer.
For TCP, the kernel has a fixed-size buffer in which it stores unsent data. The size of this buffer is the current window size of the TCP session. Once this buffer is full, any further send will block (or fail with EWOULDBLOCK on a non-blocking socket). This is a TCP flow-control mechanism which prevents you from trying to send data faster than the receiver can consume it, while at the same time providing automatic resending of lost data. The default window can be as small as 64 KB but can grow larger for high-latency, high-bandwidth networks.
What you probably need to do is break the data up into smaller send blocks and then ensure you have a flow-off mechanism for when your send buffer is full.
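A minimal sketch of such a flow-off mechanism, assuming a connected socket switched to non-blocking mode: when the kernel buffer is full, send() fails with EAGAIN/EWOULDBLOCK, and poll() waits until the receiver has drained some of it:

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

// Sketch: send a whole buffer on a non-blocking socket, sleeping in
// poll() whenever the kernel's send buffer is full.
bool send_all(int fd, const char *buf, size_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, MSG_NOSIGNAL);
        if (n > 0) {
            sent += n;
        } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { fd, POLLOUT, 0 };
            poll(&pfd, 1, -1);      // block here until the socket is writable again
        } else if (n == -1 && errno != EINTR) {
            return false;           // a real error
        }
    }
    return true;
}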
Can someone please explain when exactly the read() function I use to get data from a TCP socket returns?
I use the code below for reading from a measurement system. This system delivers data at a frequency of 15 Hz. READ_TIMEOUT_MS has a value of 200.
Furthermore, READ_BUFFER_SIZE has a value of 40000.
All works fine, but read() returns 15 times a second with 1349 bytes read each time.
After reading Pitfall 5 in the following documentation, I would have expected the buffer to be filled up completely:
http://www.ibm.com/developerworks/library/l-sockpit/
Init:
sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0)
{
    goto fail0;
}

struct sockaddr_in server;
server.sin_addr.s_addr = inet_addr(IPAddress);
server.sin_family = AF_INET;
server.sin_port = htons(Port);

if (connect(sock, (struct sockaddr *)&server, sizeof(server)))
{
    goto fail1;
}

struct timeval tv;
tv.tv_sec = READ_TIMEOUT_MS / 1000;
tv.tv_usec = (READ_TIMEOUT_MS % 1000) * 1000;

if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(struct timeval)))
{
    goto fail1;
}

return true;

fail1:
close(sock);
sock = -1;
fail0:
return false;
Read:
unsigned char buf[READ_BUFFER_SIZE];
int len = read(sock, buf, sizeof(buf));
if (len <= 0)
{
return NULL;
}
CBinaryDataStream* pData = new CBinaryDataStream(len);
pData->WriteToStream(buf, len);
return pData;
I hope this question is not a duplicate, because I searched for an answer before I asked.
Please let me know if you need some further information.
I suspect that you are using Linux. The manpage for read says:
On success, the number of bytes read is returned (zero indicates end
of file), and the file position is advanced by this number. It is not
an error if this number is smaller than the number of bytes requested; this may happen for example because fewer bytes are actually available right now (maybe because we were close to end-of-file, or because we are reading from a pipe, or from a terminal), or because read() was interrupted by a signal.
TCP sockets model a byte stream, not a block- or message-oriented protocol. Calling read on a socket returns as soon as there is any data available in the application's receive buffer. In principle, the data arrives at the network card, is transferred to kernel space, where it is processed by the kernel and the network stack, and finally the read syscall copies it from kernel space to user space.
When reading from a socket you therefore have to expect an arbitrary number of bytes. A call to read returns as soon as there is anything in the receive buffer, or when an error occurs. You cannot predict or assume how many bytes will be available.
In addition, the call can return without reading anything because it was interrupted by a signal (errno is then EINTR). This happens quite often when you debug or profile your application. You have to handle this in your application layer.
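A minimal sketch of a read loop that handles both short reads and signal interruptions; note that it ignores the SO_RCVTIMEO timeout used in the question, which would surface here as -1 with errno set to EAGAIN:

#include <errno.h>
#include <unistd.h>

// Sketch: read exactly `want` bytes, retrying on short reads and EINTR.
// Returns the byte count; fewer than `want` means EOF, -1 a real error.
ssize_t read_exact(int fd, unsigned char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, buf + got, want - got);
        if (n > 0)
            got += n;
        else if (n == 0)
            break;                  // peer closed the connection
        else if (errno != EINTR)    // EINTR: interrupted by a signal, retry
            return -1;
    }
    return (ssize_t)got;
}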
The complete receive path is surprisingly complex when you want high data rates or low latency. The kernel and NICs implement many optimizations, e.g. to spread load over cores, increase locality, and offload processing to the NIC. Here are some additional links you may find interesting:
https://www.lmax.com/blog/staff-blogs/2016/05/06/navigating-linux-kernel-network-stack-receive-path/
https://blog.cloudflare.com/how-to-achieve-low-latency/
http://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data
http://syuu.dokukino.com/2013/05/linux-kernel-features-for-high-speed.html
As a test, I'm writing a series of byte arrays to a TCP socket from an Android application, and reading them in a C++ application.
Java
InetAddress address = InetAddress.getByName("192.168.0.2");
Socket socket = new Socket(address, 1300);
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
...
if (count == 0) {
    out.write(first, 0, first.length);
} else if (count == 1) {
    out.write(second, 0, second.length);
}
C++
do {
    iResult = recv(ClientSocket, recvbuf, 3, 0);
    for (int i = 0; i < 3; i++) {
        std::cout << (int)(signed char)recvbuf[i] << std::endl;
    }
} while (iResult > 0);
As it stands, on the first receipt, recvbuf[2] = -52, which I assume to be a junk value, as the output stream has not yet written the second byte array by the time I've received the first segment.
However, when I pause after the ListenSocket has accepted the connection:
ClientSocket = accept(ListenSocket, NULL, NULL);
std::cin.ignore();
...giving the sender time to do both writes to the stream, recvbuf[2] = 3, which is the first value of the second written byte array.
If I ultimately want to send and receive a constant stream of discrete arrays, how can I determine, after I've received the last value of one array, whether the next value in the buffer is the first value of the next array or a junk value?
I've considered that UDP is more suitable for sending a series of discrete data sets, but I need the reliability of TCP. I imagine that TCP is used in this way regularly, but it's not clear to me how to mitigate this issue.
EDIT:
In the actual application for which I'm writing this test, I do implement length prefixing. I don't think that's relevant, though; even if I know I'm at the end of a data set, I need to know whether the next value in the buffer is junk or the beginning of the next set.
for (int i = 0; i < 3; i++)
The problem is here. It should be:
for (int i = 0; i < iResult; i++)
You're printing out data that you may not have received. This is the explanation of the 'junk value'.
You can't assume that recv() fills the buffer.
You must also check iResult for both -1 and zero before this loop, and take the appropriate actions, which are different in each case.
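Putting the pieces together, a sketch of the corrected loop, reusing the question's variables (iResult, ClientSocket, recvbuf), with the -1 and 0 cases handled separately:

do {
    iResult = recv(ClientSocket, recvbuf, 3, 0);
    if (iResult > 0) {
        for (int i = 0; i < iResult; i++) {          // only print bytes actually received
            std::cout << (int)(signed char)recvbuf[i] << std::endl;
        }
    } else if (iResult == 0) {
        std::cout << "connection closed by peer" << std::endl;
    } else {
        std::cerr << "recv failed" << std::endl;     // inspect WSAGetLastError()/errno
    }
} while (iResult > 0);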
As you point out, TCP is stream-based, so there's no built-in way to say "here's a specific chunk of data". What you want to do is add your own "message framing". A simple way to do that is called "length prefixing", where you first send the size of the data packet and then the packet itself. The receiver then knows when it has gotten all the data; a sketch of both sides follows the outline below.
Sending side
send length of packet (as a known size -- say a 32-bit int)
send packet data
Receiving side
read length of packet
read that many bytes of data
process fully-received packet
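A minimal sketch of both sides, assuming a 4-byte big-endian length prefix and helper loops send_all()/read_exact() that retry partial transfers (like the sketches earlier on this page):

#include <arpa/inet.h>   // htonl/ntohl
#include <stdint.h>
#include <string>

// Sender: 4-byte big-endian length, then the payload itself.
bool send_message(int fd, const std::string &payload)
{
    uint32_t len = htonl((uint32_t)payload.size());
    return send_all(fd, (const char *)&len, sizeof(len))
        && send_all(fd, payload.data(), payload.size());
}

// Receiver: read the length, then exactly that many bytes.
bool recv_message(int fd, std::string &payload)
{
    uint32_t len = 0;
    if (read_exact(fd, (unsigned char *)&len, sizeof(len)) != sizeof(len))
        return false;
    payload.resize(ntohl(len));
    return read_exact(fd, (unsigned char *)&payload[0], payload.size())
        == (ssize_t)payload.size();
}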
Check out this article for more information: http://blog.stephencleary.com/2009/04/message-framing.html
I'm writing a TCP implementation using UDP sockets. My initial algorithm will be based on TCP Reno. My question is: since packets are byte-sized anyway, would there be any significant downside to implementing the sliding window using a maximum number of packets rather than a maximum number of bytes? Isn't it pretty much the same thing?
For some code context, I currently mmap() my data and then pack it into a map:
int
ReliableSender::Packetize(char* data, int numBytes)
{
    int seqNum = 1;
    int offset = 0;
    char* payload = NULL;
    unsigned int length = sizeof(struct sockaddr_in);
    bool dataRemaining = true;

    while (dataRemaining)
    {
        size_t packetSize = (numBytes > MTU) ? MTU : numBytes;
        payload = new char[packetSize];              // allocate storage for this chunk
        memcpy(payload, &data[offset], packetSize);  // copy from the current offset
        Packet pac = {seqNum, 0, payload};
        dataMap.insert(make_pair(seqNum, pac));
        if (numBytes <= MTU)
            dataRemaining = false;
        offset += MTU;
        numBytes -= MTU;
        seqNum++;
    }
    return 0;
}

struct Packet
{
    int seqNum;
    int ackNum;
    char* payload;
};
My thought was that I could simply adjust the sliding window by increasing the number of "packets" I send without an ACK rather than a set number of bytes - is there anything wrong with this? This is for a very simple application; nothing that needs to be portable or placed into production anytime soon.
Yes. TCP is a byte-stream protocol. You can send one byte at a time, and the receive window can therefore change by one byte at a time, so you can't express the receive window in packets.
And if your packets really are byte-sized, you're just wasting bandwidth.
So this is the first time I'm actually asking a question in here, although I have been using this site for ages!
My problem is a bit tricky. I'm trying to develop a client-server application for sending large files, using UDP with my own error checking and flow control. I've developed a fully-functioning server and client: the client requests a specific file, and the server starts sending it. The file is read in parts into a buffer to avoid having to read small bits of the file every time a packet is sent, thus saving processing time. Packets consist of 1400 bytes of actual data plus a header of 28 bytes (sequence numbers, ack numbers, checksum, etc.).
So I had the basics down: a simple stop-and-wait protocol. Send a packet and receive an ack before sending the next packet.
To be able to implement a smarter flow-control algorithm, for starters with just some windowing, I have to run the sending part and the ack-receiving part in two different threads. Now here's where I ran into problems. This is my first time working with threads, so please bear with me.
My problem is that the file written from the packets on the client side is corrupt. When testing with a small JPG file, the file is only corrupt 50% of the time; when testing with an MP4 file, it's always corrupt! So I guess maybe the threads somehow rearrange the order in which the packets are sent? I use sequence numbers, so the problem must occur before assigning the sequence number to the packets...
I know for sure that the part where I split up the file is correct, and also where I reassemble it on the client side, since I tested this before trying to implement the threading. It should also be noted that I copied the exact sending part of the code into the sending thread, and it worked perfectly before I put it into a thread. This is also why I'm only posting the threading part of my code, since this is clearly what is creating the problem (and since the entire code of the project would take up a lot of space).
My sending thread code:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condition_var = PTHREAD_COND_INITIALIZER;

static void *send_thread(void *)
{
    if (file.is_open()) {
        while (!file.reachedEnd()) {
            pthread_mutex_lock(&mutex);
            if (seq <= upperwindow) {
                int blocksize = file.getNextBlocksize();
                senddata = new unsigned char[blocksize + 28];
                Packet to_send;
                to_send.data = new char[blocksize];
                to_send.sequenceNumber = seq;
                to_send.ackNumber = 0;
                to_send.type = 55; // DATA
                file.readBlock(*to_send.data);
                createPacket(senddata, to_send, blocksize + 28);
                if (server.sendToClient(reinterpret_cast<char*>(senddata), blocksize + 28) == -1)
                    perror("sending failed");
                incrementSequenceNumber(seq);
                /* free memory */
                delete [] to_send.data;
                delete [] senddata;
            }
            pthread_mutex_unlock(&mutex);
        }
        pthread_exit(NULL);
    } else {
        perror("file opening failed!");
        pthread_exit(NULL);
    }
}
My receiving ack thread code:
static void *wait_for_ack_thread(void *)
{
    while (!file.reachedEnd()) {
        Packet ack;
        if (server.receiveFromClient(reinterpret_cast<char*>(receivedata), 28) == -1) {
            perror("error receiving ack");
        } else {
            getPacket(receivedata, ack, 28);
            pthread_mutex_lock(&mutex);
            incrementSequenceNumber(upperwindow);
            pthread_mutex_unlock(&mutex);
        }
    }
    pthread_exit(NULL);
}
All comments are very much appreciated! :)
EDIT:
Added code of the readBlock function:
void readBlock(char & in){
    memcpy(&in, buffer + block_position, blocksize);
    block_position = block_position + blocksize;
    if (block_position == buffersize) {
        buf_position++;
        if (buf_position == buf_reads) {
            buffersize = filesize % buffersize;
        }
        fillBuffer();
        block_position = 0;
    }
    if (blocksize < MAX_DATA_SIZE) {
        reached_end = true;
        return;
    }
    if ((buffersize - block_position) < MAX_DATA_SIZE) {
        blocksize = buffersize % blocksize;
    }
}
Create an array that represents the status of the communication.
0 means unsent, or sent and the receiver reported an error; 1 means sending; 2 means sent and ack received.
Allocate this array, and guard access to it with a mutex.
The sending thread keeps two pointers into the array -- "has been sent up to" and "should sent next". These are owned by the sending thread.
The ack thread simply gets ack packets, locks the array, and does the transition on the state.
The sending thread locks the array, checks if it can advance the "has been sent up to" pointer (or if it should resend old stuff). If it notices an error, it reduces the "should be sent next" pointer to point at it.
It then sees if it should send stuff next. If it should, it marks the node as "being sent", unlocks the array, and sends it.
If the sending thread did no work, and found nothing to do, it goes to sleep on a timeout, and possibly a "kick awake" by the ack thread.
Now, note that the client can get the packets sent by this in the wrong order, unless you limit it to having 1 packet in transit.
The connection status array does not have to be a literal array, but it is easier if you start with that and optimize later.
On the receiving end, you have to pay attention to the sequence number, as the packets can get there out of sequence. To test this, write a server that sends the packets in the wrong order on purpose, and ensure that the client manages to stitch it together properly.
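A minimal sketch of the shared state array and the operations each thread performs on it (NUM_PACKETS and WINDOW are hypothetical constants, and the actual send/receive calls are left out):

#include <pthread.h>

#define NUM_PACKETS 1024   /* hypothetical: total packets in the transfer */
#define WINDOW      16     /* hypothetical: max unacked packets in flight */

enum SlotState { UNSENT = 0, SENDING = 1, ACKED = 2 };

static pthread_mutex_t state_mutex = PTHREAD_MUTEX_INITIALIZER;
static SlotState slot[NUM_PACKETS];  /* one entry per sequence number */
static int next_to_send = 0;         /* owned by the sending thread */
static int acked_up_to  = 0;         /* first slot not yet acked */

/* Ack thread: mark the slot and advance the trailing edge of the window. */
void handle_ack(int seq)
{
    pthread_mutex_lock(&state_mutex);
    slot[seq] = ACKED;
    while (acked_up_to < NUM_PACKETS && slot[acked_up_to] == ACKED)
        acked_up_to++;
    pthread_mutex_unlock(&state_mutex);
}

/* Sending thread: claim the next slot inside the window while holding
   the lock; do the actual send after unlocking. Returns -1 if the
   window is full (caller should sleep or wait to be kicked awake). */
int claim_next_slot(void)
{
    int seq = -1;
    pthread_mutex_lock(&state_mutex);
    if (next_to_send < NUM_PACKETS && next_to_send < acked_up_to + WINDOW) {
        seq = next_to_send++;
        slot[seq] = SENDING;
    }
    pthread_mutex_unlock(&state_mutex);
    return seq;
}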
I have a byte array like this:
lzo_bytep out; // my byte array
lzo_uint out_len = 0; // filled in by the compressor
size_t uncompressedImageSize = 921600;

out = (lzo_bytep) malloc(uncompressedImageSize +
                         uncompressedImageSize / 16 + 64 + 3);
wrkmem = (lzo_voidp) malloc(LZO1X_1_MEM_COMPRESS);

r = lzo1x_1_compress(imageData, uncompressedImageSize,
                     out, &out_len, wrkmem);
// After compression, the byte array holds 802270 bytes (out_len).
How can I split it into smaller parts under 65,535 bytes (the byte array is one large image which I want to send over UDP, which has an upper limit of 65,535 bytes) and then join those small chunks back into a continuous array?
The problem with doing this is that the UDP packets can arrive out of order, or be dropped. Use TCP for this; that's what it's for.
You don't have to "split" the array. You just have to point into different parts of it.
Assuming you're using a typical UDP write() function, it takes several arguments. One of them is a pointer to the buffer and another is the length.
If you want to send the first 65535 bytes, your buffer is at out and the length is 65535.
For the second 65535 bytes, your buffer is at out + 65535 and your length is 65535.
For the third 65535 bytes, your buffer is at out + 2 * 65535 and your length is 65535.
Get it?
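In code, a sketch of that pointer arithmetic with a generic send() on a connected UDP socket (CHUNK is my choice, kept under the 65,535-byte limit; a real protocol would also prepend each chunk's offset, as the reassembly sketch below assumes):

#include <sys/socket.h>

/* Sketch: point successive sends into the buffer instead of splitting it. */
const size_t CHUNK = 60000;

void send_in_chunks(int fd, const unsigned char *out, size_t out_len)
{
    for (size_t offset = 0; offset < out_len; offset += CHUNK) {
        size_t n = (out_len - offset < CHUNK) ? out_len - offset : CHUNK;
        send(fd, out + offset, n, 0); /* connected UDP socket; check the result in real code */
    }
}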
(That said, the other posters are correct. You should be using TCP).
On the other side, when you want to re-join the array, you must allocate enough memory for the whole thing, then use a copy function like memcpy() to copy the arriving chunks into their correct position. Remember that UDP may not deliver the pieces in order and may not deliver all of them.
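A sketch of that reassembly, assuming the sender prefixed each datagram with a 4-byte offset header (an assumption on top of the loop above; UDP gives the receiver no other way to know where a chunk belongs):

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

// Sketch: copy each arriving chunk to the offset carried in its header.
// Lost datagrams simply leave a hole; real code needs acks/retransmits.
void place_chunk(unsigned char *image, size_t image_len,
                 const unsigned char *dgram, size_t dgram_len)
{
    if (dgram_len < 4)
        return;                               // malformed datagram
    uint32_t offset_be;
    memcpy(&offset_be, dgram, 4);             // avoid an unaligned read
    size_t offset = ntohl(offset_be);
    size_t payload_len = dgram_len - 4;
    if (offset + payload_len <= image_len)    // bounds check
        memcpy(image + offset, dgram + 4, payload_len);
}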
You might wish to try a message-based middleware like ØMQ: feed the entire compressed image in as one message and let the middleware run asynchronously and manage redelivery at the fastest speed possible. It provides a BSD-socket-compatible API, so it can be easy to migrate code over, and it allows you to easily swap between various underlying transport protocols as required.
Other message systems are available.
void my_free (void *data, void *hint)
{
    (void) hint; /* unused */
    free (data);
}

/* ... */

int rc;
size_t uncompressedImageSize = 921600;
lzo_uint compressedImageSize = 0;
size_t out_len = (uncompressedImageSize + uncompressedImageSize / 16 + 64 + 3);
lzo_bytep out = (lzo_bytep) malloc (out_len);
lzo_voidp wrkmem = (lzo_voidp) malloc (LZO1X_1_MEM_COMPRESS);
zmq_msg_t msg;

rc = lzo1x_1_compress (imageData, uncompressedImageSize,
                       out, &compressedImageSize, wrkmem);
assert (compressedImageSize > 0);

/* Hand the compressed buffer to ØMQ; my_free releases it once sent. */
rc = zmq_msg_init_data (&msg, out, compressedImageSize, my_free, NULL);
assert (rc == 0);

/* Send the message to the socket */
rc = zmq_send (socket, &msg, 0);
assert (rc == 0);