I'm having trouble receiving data over a network using Winsock2 on Windows. I'm using a simple client and server to implement a file transfer program. With our current code, the last packet coming in doesn't get appended to the file because it isn't the size of the buffer. So the file transfer doesn't quite complete; it throws an error and breaks. It's not always the very last packet; sometimes it happens earlier.
Here is a snippet of the Server code:
int iResult;
ifstream sendFile(path, ifstream::binary);
char* buf;

if (sendFile.is_open()) {
    printf("File Opened!\n");
    // Sends the file
    while (sendFile.good()) {
        buf = new char[1024];
        sendFile.read(buf, 1024);
        iResult = send(AcceptSocket, buf, (int)strlen(buf)-4, 0);
        if (iResult == SOCKET_ERROR) {
            wprintf(L"send failed with error: %d\n", WSAGetLastError());
            closesocket(AcceptSocket);
            WSACleanup();
            return 1;
        }
        //printf("Bytes Sent: %d\n", iResult);
    }
    sendFile.close();
}
And here is a snippet of the Client code:
int iResult;
int recvbuflen = DEFAULT_BUFLEN;
char recvbuf[DEFAULT_BUFLEN] = "";

do {
    iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
    if (iResult > 0) {
        printf("%s", recvbuf);
        myfile.write(recvbuf, iResult);
    }
    else if (iResult == 0) {
        wprintf(L"Connection closed\n");
    }
    else {
        wprintf(L"recv failed with error: %d\n", WSAGetLastError());
    }
} while (iResult > 0);

myfile.close();
When transferring a dictionary file, it can break at seemingly random points. For example, one run broke early in the S's and appended weird characters to the end, which isn't rare:
...
sayable
sayer
sayers
sayest
sayid
sayids
saying
sayings
╠╠╠╠╠╠╠╠recv failed with error: 10054
What can I do to handle these errors and weird characters?
The error is happening on the server side. You're getting a "Connection reset by peer" error.
This line - buf = new char[1024]; - is clearly problematic and is likely causing the server to crash because it runs out of memory; there is no cleanup happening. Start by adding the appropriate delete[] statement, probably best placed after the send call. If that doesn't fix it, I would use a small test file and step through that while loop in the server code.
P.S. A better solution than using new and delete in your loop is to reuse the existing buf. The compiler might optimize this mistake away, but if it doesn't, you're severely hindering the application's performance. I think you should just move buf = new char[1024]; outside of the loop: buf is a char pointer, so read will simply overwrite its contents on every pass. Reallocating the buffer over and over is not good.
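For illustration, a minimal sketch of that change. (It also replaces strlen(buf)-4 with sendFile.gcount(); read() fills the buffer with raw bytes that are not NUL-terminated, so strlen() is not a valid length here and is the likely source of the stray ╠ characters.)

int iResult;
char buf[1024]; // allocated once, reused every iteration

while (sendFile.good()) {
    sendFile.read(buf, sizeof(buf));
    std::streamsize n = sendFile.gcount(); // bytes actually read; short on the last chunk
    if (n <= 0)
        break;
    iResult = send(AcceptSocket, buf, (int)n, 0);
    if (iResult == SOCKET_ERROR) {
        wprintf(L"send failed with error: %d\n", WSAGetLastError());
        closesocket(AcceptSocket);
        WSACleanup();
        return 1;
    }
}
sendFile.close();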
With regard to the error MSDN says:
An existing connection was forcibly closed by the remote host. This normally results if the peer application on the remote host is suddenly stopped, the host is rebooted, the host or remote network interface is disabled, or the remote host uses a hard close (see setsockopt for more information on the SO_LINGER option on the remote socket). This error may also result if a connection was broken due to keep-alive activity detecting a failure while one or more operations are in progress. Operations that were in progress fail with WSAENETRESET. Subsequent operations fail with WSAECONNRESET.
First, using the new operator in a loop might not be good, especially without a corresponding delete. I'm not a C++ expert, though (only C), but I think it is worth checking.
Second, socket error 10054 is "connection reset by peer", which tells me that the server is not performing what is called a graceful close on the socket. With a graceful close, WinSock waits until all pending data has been received by the other side before sending the FIN message that breaks the connection. It is likely that your server is simply closing immediately after the final buffer is handed to WinSock, without any time for it to get transmitted. You'll want to look into the SO_LINGER socket options -- they explain graceful vs. non-graceful closes.
Simply put, you either need to add your own protocol to the connection so that the client can acknowledge receipt of the final data block, or the server side needs to call setsockopt() to set an SO_LINGER timeout so that WinSock waits for the TCP/IP acknowledgement of the final block from the client side before issuing the socket close across the network. If you don't do at least ONE of those things, this problem will occur.
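For instance, a sketch of the setsockopt() route on the server side; the 10-second timeout is purely illustrative:

// Make closesocket() block until pending data is sent and acknowledged,
// or until the timeout expires.
linger lng;
lng.l_onoff  = 1;  // enable SO_LINGER
lng.l_linger = 10; // wait up to 10 seconds on close
setsockopt(AcceptSocket, SOL_SOCKET, SO_LINGER, (const char*)&lng, sizeof(lng));
closesocket(AcceptSocket); // now a graceful, blocking close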
There's also another article about this that you might want to look at:
socket error 10054
Good luck!
Related
I want to develop a client/server app and I want to make it as robust as possible. Multiple questions come up for me, and I just can't find an unambiguous answer on the internet.
Let's say the server runs in a while(TRUE) loop and checks whether there is a command in its command queue: if there is one, it sends it; if there isn't, it just continues to the head of the loop.
But what if the other end goes down, or there is a connection error between the two? What happens to the socket value - does it become INVALID_SOCKET?
while (TRUE) {
    if (ReqQueue->size() != 0 && ReqQueue->front() != string("STOP")) { // there is some command in the ReqQueue which is NOT STOP.
        int sent = send(ClientSocket, ReqQueue->front().c_str(), (int)strlen(ReqQueue->front().c_str()), 0);
        if (sent == (int)strlen(ReqQueue->front().c_str()))
            ReqQueue->pop(); // Next Command.
        else if (int err = WSAGetLastError() == WSAETIMEDOUT) {
            shutdown(ClientSocket, SD_BOTH);
            closesocket(ClientSocket);
            return;
        }
        else
            continue;
    }
    else if (ReqQueue->size() == 0) {
        continue;
    }
    else if (ReqQueue->front() == string("STOP")) {
        if (send(ClientSocket, "STOP", strlen("STOP"), 0) == strlen("STOP")) {
            /* Message received indication from target */
            shutdown(ClientSocket, SD_BOTH);
            closesocket(ClientSocket);
            return;
        }
    }
}
shutdown(ClientSocket, SD_BOTH);
closesocket(ClientSocket);
return 0;
that's the source :)
What I want to ask is: is there a better way to implement the above goal? Maybe I can change the while loop condition to something like while(the socket is OK) or while(there is still a connection).
what happens to the socket value
Nothing. A send() on that socket will eventually fail, and a recv() on it will deliver zero or -1, but the socket itself remains open, and the variable value is unaffected. There is no magic.
does it become INVALID_SOCKET?
No.
For me, a better idea would be: when your server receives a request from any client, create a new thread and assign the task to it. That way the server processes client requests in parallel and can work on multiple requests from multiple clients, so no client has to wait while the server finishes a request another client submitted earlier. If you implement it like this, you don't need to worry much about what happens when a connection breaks: in the normal case, a broken connection shows up in the server while you are sending the reply to the client, and you can mark that task as failed and log it to the server log.
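A rough sketch of that shape, with HandleClient as a hypothetical placeholder for your per-client protocol logic:

#include <winsock2.h>
#include <thread>

// Each connection runs its whole request/reply exchange on its own thread.
void HandleClient(SOCKET client) {
    char buf[512];
    int n;
    while ((n = recv(client, buf, sizeof(buf), 0)) > 0) {
        // ... process the request and send() the reply; a send() failure
        // here means this one connection broke, so log it and stop ...
    }
    closesocket(client); // a broken connection only affects this thread
}

// Accept loop: a slow or dead client never blocks the others.
void AcceptLoop(SOCKET listenSock) {
    for (;;) {
        SOCKET client = accept(listenSock, NULL, NULL);
        if (client == INVALID_SOCKET)
            continue;
        std::thread(HandleClient, client).detach();
    }
}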
Sorry for the imprecise description in my question.
What my program does is connect to a server, send some data, and close the connection. I simplified my code as below:
WSAStartup(MAKEWORD(2, 2), &wsaData);
SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
connect(s, (const sockaddr*)&dstAddr, sizeof(dstAddr));
send(s, (const char*)pBuffer, fileLen, 0);
shutdown(s, SD_SEND);
closesocket(s);
WSACleanup();
Only partial data was received by the server before it saw a RST that shut the communication down.
I wrote a simulated server program to accept the connection and receive the data, and the simulator got all of it. Because I can't access the real server's source code, I don't know whether something is wrong inside it. Is there a way I can avoid this error by adding some code to the client, or can I prove that something is wrong in the server program?
Setting the socket's linger option fixed the bug, but I had to give a magic number for the linger time.
linger l;
l.l_onoff = 1;
l.l_linger = 30;
setsockopt(socket, SOL_SOCKET, SO_LINGER, (const char*)&l, sizeof(l));
WSASend returns before actually sending the data to the device
Correct.
I created a non-blocking socket and tried to send data to the server.
WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED)
No you didn't. You created an overlapped I/O socket.
After it executed, the return value was SOCKET_ERROR and WSAGetLastError() returned WSA_IO_PENDING. Then I called WSAWaitForMultipleEvents to wait for the event to be set. After it returned WSA_WAIT_EVENT_0, I called WSAGetOverlappedResult to get the actual sent data length, and it was the same value as what I sent.
So all the data got transferred into the socket send buffer.
I called WSASocket first, then WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult several times to send a bunch of data, and closesocket at the end.
So at the end of that process all the data and the close had been transferred to the socket send buffer.
But the server couldn't receive all the data. I used Wireshark to view the TCP packets and found that the client sent a RST before all the packets were sent out.
That could be for a number of reasons none of which is determinable without seeing some code.
If I slept for 1 minute before calling closesocket, the server would receive all the data.
Again this would depend on what else had happened in your code.
It seemed like WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult returned before actually sending the data to the server.
Correct.
The data was saved in a buffer, waiting to be sent out.
Correct.
When I called closesocket, communication was shut down.
Correct.
They didn't work as I expected.
So your expectation was wrong.
Where did I go wrong? This problem only occurred on specific PCs; the application ran well on others.
Impossible to answer without seeing some code. The usual reason for issuing an RST is that the peer had written data to a connection that you had already closed: in other words, an application protocol error; but there are other possibilities.
MSDN provides the following code:
int iResult;

// Receive until the peer closes the connection
do {
    iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
    if (iResult > 0)
        printf("Bytes received: %d\n", iResult);
    else if (iResult == 0)
        printf("Connection closed\n");
    else
        printf("recv failed: %d\n", WSAGetLastError());
} while (iResult > 0);
iResult stores the number of bytes received! In my case I can't do it like this, because recv hangs if nothing was received (or the end was reached), so the exit condition never matches!
Is something wrong, and/or why does recv hang here?
Greets
It's because sockets are, by default, blocking. This means that calls to e.g. recv will block until something is received. You can use the ioctlsocket function to make the socket non-blocking.
You do have to be prepared for recv to return with a WSAEWOULDBLOCK error if nothing is available to be received. Or use polling functions such as select to know when you have data that can be received. If you don't want to poll, search for "asynchronous sockets" on MSDN to find both server and client examples.
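For example, a minimal sketch using ioctlsocket with FIONBIO:

u_long mode = 1; // 1 = non-blocking, 0 = blocking
if (ioctlsocket(ConnectSocket, FIONBIO, &mode) != 0) {
    wprintf(L"ioctlsocket failed: %d\n", WSAGetLastError());
}
// From here on, recv() with nothing available returns SOCKET_ERROR
// and WSAGetLastError() reports WSAEWOULDBLOCK instead of blocking.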
Actually, if no incoming data is available at the socket, recv blocks until data arrives. You can use select() to determine when more data has arrived and then use recv() to read it.
the receive hangs if nothing was received
Correct.
(or end was reached)
Incorrect. It returns zero under that circumstance. However 'end is reached' means that the peer has closed the connection. Possibly you have some other definition in mind?
I suspect your problem is that the peer isn't closing the connection and you are still expecting a zero. It doesn't work that way.
I am writing a simple server in C/C++. I have everything mostly complete, but there is one problem: the server fails to send the last three lines of a file to a client. I assume I am closing the socket connection prematurely, but my attempts to remedy this have failed. For example, calling
shutdown(clientSckt, SHUT_RDWR);
right before calling the close() method for the client socket. And adding a linger delay to the socket parameters like so:
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
setsockopt(clientSckt, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
after it has been opened. But neither of these seems to work. The server writes everything with no errors, but the client is not receiving everything.
From vague memory:
a) if you want to use SO_LINGER, use close().
b) more robust is to do a half shutdown
shutdown(clientSckt, SHUT_WR)
and then read() until you get a 0.
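Roughly, assuming the peer closes its end once it has read everything:

shutdown(clientSckt, SHUT_WR); // send FIN: "no more data from me"
char tmp[256];
while (read(clientSckt, tmp, sizeof(tmp)) > 0)
    ; // drain until the peer closes (read() returns 0)
close(clientSckt); // safe now: the peer has seen all our data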
It turns out, I forgot to add the character length of the header to the length of the file I was sending over. Hence, the client was closing the connection before the server had sent everything over.
I have code written in C/C++ that looks like this:
while (1)
{
    // Accept
    struct sockaddr_in client_addr;
    int client_fd = this->w_accept(&client_addr);
    char client_ip[64];
    int client_port = ntohs(client_addr.sin_port);
    inet_ntop(AF_INET, &client_addr.sin_addr, client_ip, sizeof(client_ip));

    // Listen for the first string
    char firststring[512];
    memset(firststring, 0, 512);
    if (this->recvtimeout(client_fd, firststring, sizeof(firststring), u->timeoutlogin) < 0) {
        close(client_fd);
    }
    if (strcmp(firststring, "firststr") != 0)
    {
        cout << "Disconnected!" << endl;
        close(client_fd);
        continue;
    }

    // Send OK for the first string
    send(client_fd, "OK", 2, 0);

    // Listen for the second string
    char secondstring[512];
    memset(secondstring, 0, 512);
    if (this->recvtimeout(client_fd, secondstring, sizeof(secondstring), u->timeoutlogin) < 0) {
        close(client_fd);
    }
    if (strcmp(secondstring, "secondstr") != 0)
    {
        cout << "Disconnected!!!" << endl;
        close(client_fd);
        continue;
    }

    // Send OK for the second string
    send(client_fd, "OK", 2, 0);
}
So, it's DoS-able.
I've written a very simple DoS script in Perl that takes down the server.
# Evildos.pl
use strict;
use Socket;
use IO::Handle;

sub dosfunction
{
    my $host = shift || '192.168.4.21';
    my $port = 1234;
    my $firststr = 'firststr';
    my $secondstr = 'secondstr';
    my $protocol = getprotobyname('tcp');

    $host = inet_aton($host) or die "$host: unknown host";
    socket(SOCK, AF_INET, SOCK_STREAM, $protocol) or die "socket() failed: $!";

    my $dest_addr = sockaddr_in($port, $host);
    connect(SOCK, $dest_addr) or die "connect() failed: $!";
    SOCK->autoflush(1);

    print SOCK $firststr;
    #sleep(1);
    print SOCK $secondstr;
    #sleep(1);
    close SOCK;
}

my $i;
for ($i = 0; $i < 30; $i++)
{
    &dosfunction;
}
With a loop of 30 times, the server goes down.
The question is: is there a method, a system, a solution that can avoid this type of attack?
EDIT: recvtimeout
int recvtimeout(int s, char *buf, int len, int timeout)
{
    fd_set fds;
    int n;
    struct timeval tv;

    // set up the file descriptor set
    FD_ZERO(&fds);
    FD_SET(s, &fds);

    // set up the struct timeval for the timeout
    tv.tv_sec = timeout;
    tv.tv_usec = 0;

    // wait until timeout or data received
    n = select(s + 1, &fds, NULL, NULL, &tv);
    if (n == 0) {
        return -2; // timeout!
    }
    if (n == -1) {
        return -1; // error
    }

    // data must be here, so do a normal recv()
    return recv(s, buf, len, 0);
}
I don't think there is any 100% effective software solution to DOS attacks in general; no matter what you do, someone could always throw more packets at your network interface than it can handle.
In this particular case, though, it looks like your program can only handle one connection at a time -- that is, incoming connection #2 won't be processed until connection #1 has completed its transaction (or timed out). So that's an obvious choke point -- all an attacker has to do is connect to your server and then do nothing, and your server is effectively disabled for (however long your timeout period is).
To avoid that you would need to rewrite the server code to handle multiple TCP connections at once. You could do that by switching to non-blocking I/O (passing the O_NONBLOCK flag to fcntl()) and using select() or poll() etc. to wait for I/O on multiple sockets at once, by spawning multiple threads or sub-processes to handle incoming connections in parallel, or by using async I/O. (I personally prefer the first solution, but all can work to varying degrees.) In the first approach it is also practical to do things like forcibly closing any existing sockets from a given IP address before accepting a new socket from that IP address, which means any given attacking computer could tie up at most one socket on your server at a time. That makes it harder to DOS your machine unless the attacker has access to a number of client machines.
You might read this article for more discussion about handling many TCP connections at the same time.
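To make the first option concrete, here is an illustrative sketch (POSIX calls, not production code) of a single select() loop serving many clients, so one idle connection can no longer stall the whole server:

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <fcntl.h>
#include <algorithm>
#include <vector>

void serve(int listen_fd) {
    std::vector<int> clients;
    for (;;) {
        // Watch the listening socket plus every client at once.
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(listen_fd, &fds);
        int maxfd = listen_fd;
        for (int fd : clients) { FD_SET(fd, &fds); maxfd = std::max(maxfd, fd); }

        if (select(maxfd + 1, &fds, NULL, NULL, NULL) <= 0)
            continue;

        if (FD_ISSET(listen_fd, &fds)) {
            int c = accept(listen_fd, NULL, NULL);
            if (c >= 0) {
                fcntl(c, F_SETFL, O_NONBLOCK); // one client can't block the loop
                clients.push_back(c);
            }
        }
        for (size_t i = 0; i < clients.size(); ) {
            if (FD_ISSET(clients[i], &fds)) {
                char buf[512];
                ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
                if (n <= 0) { // closed or error: drop the client
                    close(clients[i]);
                    clients.erase(clients.begin() + i);
                    continue;
                }
                // ... feed buf[0..n) into the per-connection protocol state ...
            }
            ++i;
        }
    }
}

Note that select() tops out at FD_SETSIZE descriptors; poll() or epoll scale further.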
The main issue with DOS and DDOS attacks is that they play on your weakness: the fact that you have limited memory, ports, and processing resources with which to provide the service. Even if you have (nearly) infinite scalability using something like the Amazon farms, you'll probably want to limit it to avoid the bill going through the roof.
At the server level, your main worry should be avoiding a crash by imposing self-preservation limits. You can, for example, set a maximum number of connections that you know you can handle and simply refuse any others.
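As an illustrative sketch of such a self-preservation limit (the names and the cap are hypothetical):

const int kMaxClients = 100; // whatever you know you can handle
int active_clients = 0;      // maintained as connections open and close

int c = accept(listen_fd, NULL, NULL);
if (c >= 0) {
    if (active_clients >= kMaxClients) {
        close(c); // refuse: over the self-imposed limit
    } else {
        ++active_clients;
        // ... hand the socket to the normal handling path ...
    }
}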
Full strategies will include specialized equipment, like firewalls, but there is always a way to play around them, and you will have to live with that.
For example of nasty attacks, read about Slow Loris on wikipedia.
Slowloris tries to keep many connections to the target web server open and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to—but never completing—the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients.
There are many variants of DOS attacks, so a specific answer is quite difficult.
Your code leaks a file handle when it succeeds; this will eventually make you run out of fds to allocate, making accept() fail.
close() the socket when you're done with it.
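In the code above, that means the success path needs its own cleanup; a sketch of the end of the loop body:

// Both strings matched and the final "OK" is sent. Without this close(),
// every successful client leaks one fd until accept() starts failing.
send(client_fd, "OK", 2, 0);
close(client_fd);
// (The recvtimeout() failure branches should also `continue` after their
// close(client_fd), so the loop doesn't keep using a closed descriptor.)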
Also, to directly answer your question, there is no solution for DOS caused by faulty code other than correcting it.
This isn't a cure-all for DOS attacks, but using non-blocking sockets will definitely help with scalability, and if you can scale up, you can mitigate many DOS attacks. This design change includes setting both the listen socket used in accept calls and the client connection sockets to non-blocking.
Then, instead of blocking on a recv(), send(), or accept() call, you block on a poll, epoll, or select call and handle each connection's event as far as you are able to. Use a reasonable timeout (e.g. 30 seconds) so that you can wake up from the polling call to sweep and close any connections that don't seem to be progressing through your protocol chain.
This basically requires every socket to have its own "connection" struct that keeps track of the state of that connection with respect to the protocol you implement. It likely also means keeping a (hash) table of all sockets so they can be mapped to their connection structure instance. It also means sends are non-blocking as well; send and recv can return partial data amounts anyway.
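For illustration, the per-connection state might look something like this (names are hypothetical):

#include <string>
#include <unordered_map>
#include <ctime>

struct Connection {
    int fd;
    enum State { AWAIT_FIRST, AWAIT_SECOND, DONE } state; // protocol progress
    time_t last_activity; // lets the sweep close stalled connections
    std::string outbuf;   // bytes send() hasn't accepted yet (sends can be partial)
};

std::unordered_map<int, Connection> connections; // socket fd -> its state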
You can look at an example of a non-blocking socket server on my project code here. (Look around line 360 for the start of the main loop in Run method).
An example of setting a socket into non-blocking state:
int SetNonBlocking(int sock)
{
int result = -1;
int flags = 0;
flags = ::fcntl(sock, F_GETFL, 0);
if (flags != -1)
{
flags |= O_NONBLOCK;
result = fcntl(sock , F_SETFL , flags);
}
return result;
}
I would use the asynchronous connection facilities of boost::asio (async_accept on an acceptor, or the async_connect free function) to create multiple connection handlers (this works in both single- and multi-threaded environments). In the single-threaded case, you just need to run boost::asio::io_service::run from time to time to make sure communications have time to be processed.
The reason you want to use asio is that it's very good at handling asynchronous communication logic, so it won't block (as in your case) if a connection gets stuck. You can even control how much processing you devote to opening new connections while continuing to serve existing ones.
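A hedged sketch of that pattern with the io_service-era API named above (newer Boost versions use io_context instead); the port and handler body are placeholders:

#include <boost/asio.hpp>
#include <memory>

using boost::asio::ip::tcp;

// Keep accepting forever; each accepted socket gets its own chain of
// async operations, so one stuck connection cannot block the others.
void start_accept(boost::asio::io_service& io, tcp::acceptor& acceptor) {
    auto sock = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*sock,
        [&io, &acceptor, sock](const boost::system::error_code& ec) {
            if (!ec) {
                // ... start async_read/async_write on *sock here ...
            }
            start_accept(io, acceptor); // rearm for the next client
        });
}

int main() {
    boost::asio::io_service io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 1234));
    start_accept(io, acceptor);
    io.run(); // single-threaded: all completion handlers run in here
}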