I have a socket server that receives an XML file every 500 ms, and sometimes it goes wrong and concatenates more than one file into a single file.
do
{
    char* buf = (char*)MALLOCZ(IP_BUF_SZ);
    chrs_read = recv(sockfd, buf, IP_BUF_SZ, 0);
    if (chrs_read > 0)
        sBuffer.append(buf, chrs_read);
    FREE(buf);
    buf = NULL;
}
while (chrs_read > 0);
So sometimes chrs_read doesn't return -1, which is what I rely on to stop receiving, save the file, and start a new receive.
Did I forget some configuration on the socket (it's async and non-blocking by default), and should I keep using it this way?
Thank you in advance
The problem is that all files are sent through the same connection, without having a delimiter between them. When the files are sent often, and there is some latency in the network, you can't know where a file ends and a new one begins.
Solutions:
Insert a delimiter between the files, so that you can close the current file when you receive the delimiter and open a new one. Note that the delimiter may be received anywhere inside buf, or it could even, if the delimiter is longer than one byte, arrive partially in one recv call with the rest following in the next recv call (a sketch follows after this list).
On the sending end, close the connection after sending the file and open a new one for the new file.
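For the first option, here is a minimal sketch (the delimiter string and the handle_file() callback are illustrative assumptions, not part of the question): it reuses the sBuffer the receive loop already appends into, peels off every complete file that has arrived, and keeps the partial remainder for the next recv.
#include <string>

void handle_file(const std::string& xml);          // hypothetical: save or parse one file

static const std::string kDelim = "<!--EOF-->";    // assumed delimiter inserted by the sender

void extract_files(std::string& sBuffer)
{
    std::string::size_type pos;
    while ((pos = sBuffer.find(kDelim)) != std::string::npos)
    {
        handle_file(sBuffer.substr(0, pos));       // one complete XML file
        sBuffer.erase(0, pos + kDelim.size());     // drop the file and its delimiter
    }
    // Anything left in sBuffer is the start of the next, still incomplete, file.
}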
Related
I have a client application where I need to receive HTTP "long running requests" from a server. I send a command, and after getting the header of the response, I just have to receive JSON data separated by \r\n until the connection is terminated.
I managed to adapt the boost beast client example to send the message, receive and parse the header, and receive responses from the server. However, I could not find a way to serialize the data so that I could process the JSON messages.
The closest demonstration of the problem can be found in this relay example. In that example (p is a parser, sr is a serializer, input is a socket input stream and output is a socket output stream), after reading the HTTP header there is a loop that reads continuously from the server:
do
{
    if(! p.is_done())
    {
        // Set up the body for writing into our small buffer
        p.get().body().data = buf;
        p.get().body().size = sizeof(buf);
        // Read as much as we can
        read(input, buffer, p, ec);
        // This error is returned when buffer_body uses up the buffer
        if(ec == error::need_buffer)
            ec = {};
        if(ec)
            return;
        // Set up the body for reading.
        // This is how much was parsed:
        p.get().body().size = sizeof(buf) - p.get().body().size;
        p.get().body().data = buf;
        p.get().body().more = ! p.is_done();
    }
    else
    {
        p.get().body().data = nullptr;
        p.get().body().size = 0;
    }
    // Write everything in the buffer (which might be empty)
    write(output, sr, ec);
    // This error is returned when buffer_body uses up the buffer
    if(ec == error::need_buffer)
        ec = {};
    if(ec)
        return;
}
while(! p.is_done() && ! sr.is_done());
A few things I don't understand here:
We're done reading the header. Why do we need Boost.Beast, and not plain Boost.Asio, to read the raw TCP data? When I tried that (with both async_read and async_read_some) I got an infinite series of zero-size reads.
The documentation of parser says (at the end of the page) that a new instance is needed for every message, but I don't see that in the example.
Since raw TCP reading is not working, is there a way to convert the parser/serializer data to some kind of string? Even writing it to a text file in a FIFO manner would do, so that I could process it with some JSON library. I don't want to use another socket like the example does.
The function boost::beast::buffers() failed to compile for the parser and the serializer; the parser has no consume function, and the serializer's consume seems to be meant for particular HTTP parts of the message and fires an assert when I call it for body().
Besides that, I also failed to get consistent chunks of data from the parser and the buffer with old-school std::copy. I don't seem to understand how to combine the data together to get the stream of data. Consuming the buffer with .consume() at any point while receiving data leads to a need_buffer error.
I would really appreciate someone explaining the logic of how all this should work together.
Where is buf? You could read directly into the std::string instead. Call string.resize(N), and set the pointer and size in the buffer_body::value_type to string.data() and string.size().
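To make that concrete, here is a minimal sketch along the lines of the relay example (the 4096-byte chunk size and the handle_chunk() hook are assumptions for illustration): it reads each body chunk straight into a std::string, which you can then split on \r\n and hand to a JSON library.
#include <string>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>

namespace http = boost::beast::http;

// Hypothetical hook: split the chunk on "\r\n" and feed each line to a JSON parser.
void handle_chunk(const std::string& chunk);

template<class SyncReadStream, class DynamicBuffer>
void read_body_as_strings(SyncReadStream& input, DynamicBuffer& buffer,
                          http::response_parser<http::buffer_body>& p)
{
    std::string chunk;
    while(! p.is_done())
    {
        chunk.resize(4096);                        // scratch space for this chunk
        p.get().body().data = &chunk[0];           // the parser writes into the string's storage
        p.get().body().size = chunk.size();

        boost::beast::error_code ec;
        http::read(input, buffer, p, ec);
        if(ec == http::error::need_buffer)         // expected whenever the chunk fills up
            ec = {};
        if(ec)
            return;

        // size now holds the unused space; shrink the string to what was actually parsed.
        chunk.resize(chunk.size() - p.get().body().size);
        handle_chunk(chunk);
    }
}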
This solution shows how to send an image through TCP. Since the code is very elegant compared to other approaches, and an image and a file are both just data, I believe that almost the same code can be used to send a file.
So if I want to send a file from a client to a server.
On the client side
get file size
send file size
// Above steps will always work, so I will only show code after here
read file content into a buffer
char buf[size];
read(fs,buf,size);
send the buffer
int bytes = 0;
for (uint i = 0; i < size; i += bytes)
{
    if ((bytes = send(sock, buf + i, size - i, 0)) < 0)
    {
        fprintf(stderr, "Can not send file\n");
        close(fd);
        return false;
    }
    fprintf(stderr, "bytes write = %d\n", bytes);
}
And on the server side
recv file size
recv stuff into a buffer with size from step 1
char buf[size];
int bytes = 0;
for (uint i = 0; i < size; i += bytes)
{
    if ((bytes = recv(sock, buf + i, size - i, 0)) < 0)
    {
        fprintf(stderr, "Can not receive file\n");
        return false;
    }
    fprintf(stderr, "bytes read = %d\n", bytes);
}
write buffer to a file
fwrite(buf,sizeof(char),size,fs);
This code will compile and run.
When I send a cpp binary file (24k) from the client to the server, with both client and server on the same machine (OS X), the binary file is received and can be executed.
But if the server forwards the file back to the client, and the client forwards it back to the server, multiple times, the binary file gets corrupted, even though the number of bytes sent matches the number of bytes received and the file size is still 24k.
I am wondering what is going wrong here.
Is this an OS bug?
Thanks,
Neither send(), nor recv(), guarantees that the number of bytes requested will actually be sent or received. In that case, the return value will still be positive, but less than the number of bytes that was requested in the system call.
This is extensively documented in the manual pages for send() and recv(). Please reread your operating system's documentation for these system calls.
It is the application's responsibility to try again, to send or receive the remaining bytes.
This code assumes that the number of bytes that was sent is the number of bytes it requested to be sent. It does appear to handle recv()'s return value properly, but not send()'s. After a short send, this code still assumes that the entire contents were sent, and the fwrite() call ends up writing junk instead of the latter part of the file.
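A minimal sketch of the kind of loop the answer describes for the sending side (the name send_all is just illustrative): treat a short send() as normal, advance the offset, and keep going until every byte has gone out, exactly as the receive loop already does.
#include <cstddef>
#include <sys/types.h>
#include <sys/socket.h>

bool send_all(int sock, const char* buf, size_t size)
{
    size_t total = 0;
    while (total < size)
    {
        ssize_t n = send(sock, buf + total, size - total, 0);
        if (n < 0)
            return false;          // a real error; the caller can inspect errno
        total += (size_t)n;        // a short send is normal: advance and try again
    }
    return true;
}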
If both the client and the server are in the same folder, then in this case it is just like copying and pasting a file.
So when the client sends out a file, it will:
open file
get file name/size + send name/size + send data
close file
On the server side,
get file name/size
open the same file again
get file content
close file
So the problem will occur at step 2, causing a race condition.
I am trying to send data via a TCP socket to a server. The idea behind this is a really simple chat program.
The string I am trying to send looks like the following:
1:2:e9e633097ab9ceb3e48ec3f70ee2beba41d05d5420efee5da85f97d97005727587fda33ef4ff2322088f4c79e8133cc9cd9f3512f4d3a303cbdb5bc585415a00:2:xc_[z kxc_[z kxc_[z kxc_[==
As you can see there are a few unprintable characters, which I don't think are a problem here.
To send this data I am using the following code snippet.
bool tcp_client::send_data(string data)
{
    if (send(sock, data.c_str(), strlen(data.c_str()), 0) < 0)
    {
        perror("Send failed : ");
        return false;
    }
    return true;
}
After a few minutes of trying things out, I found that data.c_str() cuts my string off.
The result is:
1:2:e9e633097ab9ceb3e48ec3f70ee2beba41d05d5420efee5da85f97d97005727587fda33ef4ff2322088f4c79e8133cc9cd9f3512f4d3a303cbdb5bc585415a00:2:xc_[z
I think that there is some kind of null byte inside my string, which trips up the strlen() call on the c_str() result.
Is there a way to send the whole string I mentioned above without it being cut off?
Thanks.
Is there a way to send the whole string I mentioned above without it being cut off?
What about:
send(sock , data.c_str(), data.size() , 0);
There are only two sane ways to send arbitrary data (such as an array of characters) over stream sockets:
On the server: close the socket after the data was sent (like in ftp, http 0.9, etc.). On the client: read in a loop until the socket is closed.
On the server: prefix the data with a fixed-length size (nowadays people usually use 64-bit integers for the size; watch out for endianness). On the client: read the size first (in a loop!), then read the data until size bytes have been read (in a loop); a sketch of this follows below.
Everything else is going to backfire sooner or later.
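Here is a minimal sketch of the receiving side of the second option (names are illustrative, and the 8-byte big-endian header is just one possible convention): read the fixed-size length first, then exactly that many payload bytes, looping because recv() may return fewer bytes than requested.
#include <cstdint>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>

static bool recv_exact(int sock, char* p, size_t n)
{
    while (n > 0)
    {
        ssize_t k = recv(sock, p, n, 0);
        if (k <= 0)
            return false;          // error, or the peer closed mid-message
        p += k;
        n -= (size_t)k;
    }
    return true;
}

bool recv_frame(int sock, std::string& payload)
{
    unsigned char hdr[8];
    if (!recv_exact(sock, (char*)hdr, sizeof hdr))
        return false;
    uint64_t len = 0;
    for (unsigned char b : hdr)    // the header is big-endian on the wire
        len = (len << 8) | b;
    payload.resize(len);
    return len == 0 || recv_exact(sock, &payload[0], payload.size());
}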
I am writing a simple server in C/C++. I have everything mostly complete, but there is one problem. The server fails to send the last three lines of a file to a client. I assume I am closing the socket connection prematurely, but my attempts to remedy this have failed. For example, calling
shutdown(clientSckt, SHUT_RDWR);
right before calling the close() method for the client socket. And adding a latency to the socket parameters like so:
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
setsockopt(clientSckt, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
after it has been opened. But neither of these seem to work. The server writes everything with no errors, but the client is not receiving everything.
From vague memory:
a) if you want to use SO_LINGER, use close().
b) more robust is to do a half shutdown
shutdown(clientSckt, SHUT_WR)
and then read() until you get a 0.
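A minimal sketch of option b) (assuming a blocking socket; clientSckt is the name from the question): half-close the sending side, drain until the peer has closed its side, then close.
#include <sys/socket.h>
#include <unistd.h>

void finish_and_close(int clientSckt)
{
    shutdown(clientSckt, SHUT_WR);                 // tell the peer we are done sending
    char tmp[1024];
    while (recv(clientSckt, tmp, sizeof tmp, 0) > 0)
        ;                                          // discard until recv() returns 0 (peer closed)
    close(clientSckt);
}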
It turns out, I forgot to add the character length of the header to the length of the file I was sending over. Hence, the client was closing the connection before the server had sent everything over.
The key is that I send 4096 bytes but only about 119 bytes carry useful information.
The 100 bytes of payload end with \r\n\r\n, so on the receiving side, when I read \r\n\r\n I want to stop receiving information from that string and start over.
I don't know if I have to flush, or close the socket, or whatever...
They are TCP sockets.
In the client I do:
char details[4096];
strcpy(details, "1");
strcat(details, "10/04/12");
strcat(details, "Kevin Fire");
strcat(details, "abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde\r\n\r\n");
nbytes_sent = send(sock,(char *)details,sizeof(details),0);
On the other hand, the server...
char buf[20];
memset(buf, '\0', 20);
while (!end) {
    nbytes_read = recv(sclient, (char *)ress, sizeof(ress), 0);
    if (strcmp(ress, "1") == 0) {
        printf("Details: %s (%i)\n", buf, nbytes_read);
        while (strcmp(buf, "\r\n\r\n") != 0) {
            nbytes_read = recv(sclient, (char *)buf, sizeof(buf), 0);
            cout.flush();
            printf("Details: %s (%i)\n", buf, nbytes_read);
        }
    }
    if (strcmp(buf, "\r\n\r\n") == 0) printf("The End\n");
    cout.flush();
}
I just want to read a new "ress" and not keep retrieving the rest of the bytes, which are not useful.
Thanks in advance.
If you mean that you want to discard the rest of the data and read a new block, you can't do that with TCP, because it is stream-oriented: it has no concept of a message and knows nothing about the rest of the message you want to ignore. If you mean something else, please describe it in more detail.
But besides that, why do you use nbytes_sent = send(sock,(char *)details,sizeof(details),0); when only the data up to \r\n is important? You can use nbytes_sent = send(sock,(char *)details,strlen(details),0); which sends only the valid data, reduces the garbage you send over the network, and means you don't need to start over on the server.
I'm not sure if I'm following your question entirely, but it appears that you can just set end=true whenever you detect the end of the message you're receiving:
char buf[20];
memset(buf, '\0', 20);
while (!end)
{
    nbytes_read = recv(sclient, (char *)ress, sizeof(ress), 0);
    if (strcmp(ress, "1") == 0)
    {
        printf("Details: %s (%i)\n", buf, nbytes_read);
        while (strcmp(buf, "\r\n\r\n") != 0)
        {
            nbytes_read = recv(sclient, (char *)buf, sizeof(buf), 0);
            cout.flush();
            printf("Details: %s (%i)\n", buf, nbytes_read);
        }
    }
    if (strcmp(buf, "\r\n\r\n") == 0)
    {
        end = true; // <--- This should do it for you, right?
        printf("The End\n");
    }
    cout.flush();
}
However, if the client is still connected and writing the next message to the socket, then you just need to start reading the next message. So what happens with the client once the message is written? Does it start writing the next message or does it close the socket connection?
In addition: you need to take what's in your buffer and create a message from it. When the current message is done, then consider creating a new message with the contents of the buffer from the next message.
If you design your protocol like HTTP 1.0, where each request opens a new socket, then you close the socket after you've read enough.
Otherwise, you need to keep reading until you have skipped the entire 4096 bytes. The easiest thing to do is to keep reading until you have received 4096 bytes in the first place (you'll need to call recv in a loop), and then parse the contents of the buffer. Then again, you might be better off redesigning your protocol.
My thought would be to just peek at the first few chars.
Those first chars could hold the expected size of the buffer.
So for example if your message is:
abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde\r\n\r\n
It's (to use your schematic) 100 bytes, plus the \r\n\r\n. So it's 100 + 4, so 104.
I would send char(104) at the beginning of your string, as a sentinel value,
then the string right after it, so it would look similar to:
char(104)abcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcdeabcde\r\n\r\n
Then use recv's MSG_PEEK ability to get the first char, size your string from it, read only that many bytes, and read and discard whatever is left over.
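A rough sketch of that peek-then-read idea (illustrative only; it assumes the length prefix fits in a single unsigned byte, as in the char(104) example above):
#include <sys/socket.h>

// Reads one length-prefixed message into buf; returns the payload length or -1.
int read_prefixed(int sock, char* buf, int bufsz)
{
    unsigned char len = 0;
    if (recv(sock, &len, 1, MSG_PEEK) != 1)        // peek at the length byte without consuming it
        return -1;
    if (len > bufsz)
        return -1;                                 // message would not fit in the caller's buffer
    if (recv(sock, &len, 1, 0) != 1)               // now consume the length byte
        return -1;
    int got = 0;
    while (got < len)                              // then read exactly len payload bytes
    {
        int n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return -1;
        got += n;
    }
    return got;
}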