We are developing an application in which we use a WinSock-based simple socket approach to communicate with an outside module. Our requirement is that the connection always stay up, so whenever we get disconnected we retry the connection every minute.
Our problem starts here. We have observed that every socket reconnect retry leaks exactly two Windows handles. We have tried many options, but none of them have worked. Which handles could be leaking, and how can we go about identifying the culprit?
Following is the code that we are using right now:
bool CSocketClass::ConnectToServer(int nLineNo)
{
    string strIPAddress;
    int nPortNo;
    SOCKET* l_ClientSocket;
    int ConnectionResult;
    //----------------------
    // Create a SOCKET for connecting to server
    if (nLineNo == 1)
    {
        m_objLine1.m_ClientSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        strIPAddress = m_objLine1.m_strIPAddress;
        nPortNo = m_objLine1.m_nPortNo;
        l_ClientSocket = &(m_objLine1.m_ClientSocket);
    }
    else
    {
        m_objLine2.m_ClientSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        strIPAddress = m_objLine2.m_strIPAddress;
        nPortNo = m_objLine2.m_nPortNo;
        l_ClientSocket = &(m_objLine2.m_ClientSocket);
    }
    if(INVALID_SOCKET == *l_ClientSocket)
    {
        return false;
    }
    //----------------------
    // The sockaddr_in structure specifies the address family,
    // IP address, and port of the server to be connected to.
    sockaddr_in clientService;
    clientService.sin_family = AF_INET;
    clientService.sin_addr.s_addr = inet_addr( strIPAddress.c_str() );
    clientService.sin_port = htons( nPortNo );
    //----------------------
    // Connect to server.
    ConnectionResult = connect( *l_ClientSocket, (SOCKADDR*) &clientService, sizeof(clientService) );
    if (ConnectionResult == SOCKET_ERROR)
    {
        if (nLineNo == 1)
        {
            //ERROR in line1
        }
        else
        {
            //ERROR in line2
        }
        return false;
    }
    else
    //In case of successful connection
    {
        //Other actions
    }
    return true;
}
Try the free Process Explorer from Microsoft. It will display all the open handles for a process along with information such as name (for file, mutex, event, etc. handles). It will also highlight newly created and closed handles, so if you step through a loop of your code and refresh the display, you can see the exact handles that were leaked.
Let's say you acquired the socket correctly:
m_objLine1.m_ClientSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)
m_objLine1.m_ClientSocket != INVALID_SOCKET // true
but then, you can't connect, so
ConnectionResult = connect( *l_ClientSocket, (SOCKADDR*) &clientService,
sizeof(clientService) )
ConnectionResult == SOCKET_ERROR // true
in that case, you should close that acquired socket handle:
closesocket(m_objLine1.m_ClientSocket);
You have two lines, so I guess you call this function twice, once for each line; that's why two handles are leaked.
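For illustration, here is a minimal sketch of what the corrected failure branch could look like (reusing the names from the code in the question; not a drop-in patch):
ConnectionResult = connect( *l_ClientSocket, (SOCKADDR*) &clientService, sizeof(clientService) );
if (ConnectionResult == SOCKET_ERROR)
{
    // release the handle acquired by socket() so a failed retry doesn't leak it
    closesocket(*l_ClientSocket);
    *l_ClientSocket = INVALID_SOCKET;   // avoid reusing a dead handle on the next retry
    return false;
}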
I would suggest that you try Intel Parallel Inspector in order to identify the memory leaks and where they are occurring.
There is a trial download if you wish to try it.
A simple way to find handle leaks is to log everything.
Every time you obtain a handle, log that you obtained it, as well as any other details about the circumstances. Every time you release a handle, log that you released it. In both cases, include the actual handle value (it's just some hex).
Then you get a log that looks like this (just for example):
Obtained handle 0xf000 (nLineNo = 5)
Obtained handle 0xb000 (nLineNo = 6)
Obtained handle 0xd0d0 (nLineNo = 7)
Released handle 0xf000
Released handle 0xb000
Picking through this by hand, you can see that you obtained handle 0xd0d0 when nLineNo was 7, and it never got released. It's not much but it does help, and if the going gets tough, you can even try logging stack traces at each obtain/release. Also, if the log is always reliably produced like that, you can start putting in breakpoints based on the actual values (e.g. break at a point in the program when the handle is 0xd0d0, so you can see what's happening to it).
If it's more practical, you can start wrapping your handles inside the program itself, e.g. with a std::set of all obtained handles, along with any details about when they were obtained, and you can effectively start hacking your program to keep track of what it's doing (then undo all your changes once you've fixed it).
Hope that helps - it's part of the reason I tend to at least keep a std::set of everything I obtain, so if worst comes to worst you can iterate over them on shutdown and release them all (and log a big "FIX THIS!" message!)
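If you go the wrapping route, a minimal sketch of the idea could look like this (HandleTracker is just a made-up name, and real code would need locking if several threads touch it):
// Sketch: remember every socket handle you obtain, forget it on release,
// and on shutdown complain about (and close) anything still outstanding.
#include <set>
#include <cstdio>
#include <winsock2.h>

class HandleTracker
{
public:
    void Obtained(SOCKET s, int nLineNo)
    {
        std::printf("Obtained handle 0x%p (nLineNo = %d)\n", (void*)s, nLineNo);
        m_open.insert(s);
    }
    void Released(SOCKET s)
    {
        std::printf("Released handle 0x%p\n", (void*)s);
        m_open.erase(s);
    }
    ~HandleTracker()
    {
        for (std::set<SOCKET>::iterator it = m_open.begin(); it != m_open.end(); ++it)
        {
            std::printf("FIX THIS! leaked handle 0x%p\n", (void*)*it);
            closesocket(*it);
        }
    }
private:
    std::set<SOCKET> m_open;
};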
Try calling shutdown(SD_BOTH) on the socket handles before the closesocket(). Also, try adding a Sleep of about 100 ms (only as a test) and see how it goes.
I'm having trouble receiving data over a network using Winsock2 on Windows. I'm trying to use a simple client and server system to implement a file transfer program. With our current code, the last packet coming in doesn't get appended to the file because it's not the size of the buffer. So, the file transfer doesn't quite complete, throws an error, and breaks. It's not always the very last packet; sometimes it's earlier.
Here is a snippet of the Server code:
int iResult;
ifstream sendFile(path, ifstream::binary);
char* buf;
if (sendFile.is_open()) {
    printf("File Opened!\n");
    // Sends the file
    while (sendFile.good()) {
        buf = new char[1024];
        sendFile.read(buf, 1024);
        iResult = send(AcceptSocket, buf, (int)strlen(buf)-4, 0 );
        if (iResult == SOCKET_ERROR) {
            wprintf(L"send failed with error: %d\n", WSAGetLastError());
            closesocket(AcceptSocket);
            WSACleanup();
            return 1;
        }
        //printf("Bytes Sent: %d\n", iResult);
    }
    sendFile.close();
}
And here is a snippet of the Client code:
int iResult;
int recvbuflen = DEFAULT_BUFLEN;
char recvbuf[DEFAULT_BUFLEN] = "";
do {
    iResult = recv(ConnectSocket, recvbuf, recvbuflen, 0);
    if ( iResult > 0){
        printf("%s",recvbuf);
        myfile.write(recvbuf, iResult);
    }
    else if ( iResult == 0 ) {
        wprintf(L"Connection closed\n");
    } else {
        wprintf(L"recv failed with error: %d\n", WSAGetLastError());
    }
} while( iResult > 0 );
myfile.close();
When trying to transfer a file that is a dictionary, it can break at random times. For example, one run broke early in the S's and appended weird characters to the end, which isn't rare:
...
sayable
sayer
sayers
sayest
sayid
sayids
saying
sayings
╠╠╠╠╠╠╠╠recv failed with error: 10054
What can I do to handle these errors and weird characters?
The error is happening on the server side. You're getting a "Connection reset by peer" error.
This line - buf = new char[1024]; - is clearly problematic and is likely causing the server to crash because it runs out of memory. There is no cleanup happening. Start by adding the appropriate delete statement, probably best placed after the send call. If that doesn't fix it, I would use a small test file and step through that while loop in the server code.
P.S. A better solution than using new and delete in your loop is to reuse the existing buf. The compiler might optimize this mistake out, but if it doesn't you're severely hindering the application's performance. I think you should just move buf = new char[1024]; outside of the loop. buf is a char pointer, so read will keep overwriting the contents of buf if you pass it in. Reallocating the buffer over and over is not good.
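For example (just a sketch of that idea, not tested against your full program, and note I've also swapped strlen() for gcount(), since strlen() cannot measure binary data):
// Sketch: buffer allocated once, reused each iteration, freed once.
char* buf = new char[1024];
while (sendFile.good()) {
    sendFile.read(buf, 1024);
    int bytesRead = (int)sendFile.gcount();   // how many bytes read() actually got
    if (bytesRead <= 0)
        break;
    iResult = send(AcceptSocket, buf, bytesRead, 0);
    if (iResult == SOCKET_ERROR) {
        wprintf(L"send failed with error: %d\n", WSAGetLastError());
        break;                                // clean up below instead of returning immediately
    }
}
delete[] buf;
sendFile.close();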
With regard to the error MSDN says:
An existing connection was forcibly closed by the remote host. This normally results if the peer application on the remote host is suddenly stopped, the host is rebooted, the host or remote network interface is disabled, or the remote host uses a hard close (see setsockopt for more information on the SO_LINGER option on the remote socket). This error may also result if a connection was broken due to keep-alive activity detecting a failure while one or more operations are in progress. Operations that were in progress fail with WSAENETRESET. Subsequent operations fail with WSAECONNRESET.
First, using the new operator in a loop might not be good, especially without a corresponding delete. I'm not a C++ expert, though (only C), but I think it is worth checking.
Second, socket error 10054 is "connection reset by peer", which tells me that the server is not performing what is called a graceful close on the socket. With a graceful close, WinSock will wait until all pending data has been received by the other side before sending the FIN message that breaks the connection. It is likely that your server is just closing immediately after the final buffer is given to WinSock, without any time for it to get transmitted. You'll want to look into the SO_LINGER socket options -- they explain the graceful vs non-graceful closes.
Simply put, you either need to add your own protocol to the connection so that the client can acknowledge receipt of the final data block, or the server side needs to call setsockopt() to set an SO_LINGER timeout so that WinSock will wait for the TCP/IP acknowledgement from the client side for the final block of data before issuing the socket close across the network. If you don't do at least ONE of those things, then this problem will occur.
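A rough sketch of the SO_LINGER route on the server side (the 5-second value is just an example):
// Sketch: make closesocket() try to deliver pending data instead of hard-closing.
LINGER lingerOpt;
lingerOpt.l_onoff  = 1;   // enable lingering on close
lingerOpt.l_linger = 5;   // wait up to 5 seconds for unsent data

if (setsockopt(AcceptSocket, SOL_SOCKET, SO_LINGER,
               (const char*)&lingerOpt, sizeof(lingerOpt)) == SOCKET_ERROR) {
    wprintf(L"setsockopt(SO_LINGER) failed: %d\n", WSAGetLastError());
}

// A gentler alternative: shut down the sending side first, then close.
shutdown(AcceptSocket, SD_SEND);
closesocket(AcceptSocket);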
There's also another article about that here that you might want to look at:
socket error 10054
Good luck!
I have code written in C/C++ that looks like this:
while(1)
{
    //Accept
    struct sockaddr_in client_addr;
    int client_fd = this->w_accept(&client_addr);
    char client_ip[64];
    int client_port = ntohs(client_addr.sin_port);
    inet_ntop(AF_INET, &client_addr.sin_addr, client_ip, sizeof(client_ip));

    //Listen first string
    char firststring[512];
    memset(firststring,0,512);
    if(this->recvtimeout(client_fd,firststring,sizeof(firststring),u->timeoutlogin) < 0){
        close(client_fd);
    }
    if(strcmp(firststring,"firststr")!=0)
    {
        cout << "Disconnected!" << endl;
        close(client_fd);
        continue;
    }

    //Send OK first string
    send(client_fd, "OK", 2, 0);

    //Listen second string
    char secondstring[512];
    memset(secondstring,0,512);
    if(this->recvtimeout(client_fd,secondstring,sizeof(secondstring),u->timeoutlogin) < 0){
        close(client_fd);
    }
    if(strcmp(secondstring,"secondstr")!=0)
    {
        cout << "Disconnected!!!" << endl;
        close(client_fd);
        continue;
    }

    //Send OK second string
    send(client_fd, "OK", 2, 0);
}
}
So, it's DoS-able.
I've written a very simple DoS script in Perl that takes down the server.
#Evildos.pl
use strict;
use Socket;
use IO::Handle;

sub dosfunction
{
    my $host = shift || '192.168.4.21';
    my $port = 1234;
    my $firststr = 'firststr';
    my $secondstr = 'secondstr';
    my $protocol = getprotobyname('tcp');

    $host = inet_aton($host) or die "$host: unknown host";
    socket(SOCK, AF_INET, SOCK_STREAM, $protocol) or die "socket() failed: $!";

    my $dest_addr = sockaddr_in($port,$host);
    connect(SOCK,$dest_addr) or die "connect() failed: $!";
    SOCK->autoflush(1);

    print SOCK $firststr;
    #sleep(1);
    print SOCK $secondstr;
    #sleep(1);
    close SOCK;
}

my $i;
for($i=0; $i<30;$i++)
{
    &dosfunction;
}
With a loop of 30 times, the server goes down.
The question is: is there a method, a system, a solution that can avoid this type of attack?
EDIT: recvtimeout
int recvtimeout(int s, char *buf, int len, int timeout)
{
    fd_set fds;
    int n;
    struct timeval tv;

    // set up the file descriptor set
    FD_ZERO(&fds);
    FD_SET(s, &fds);

    // set up the struct timeval for the timeout
    tv.tv_sec = timeout;
    tv.tv_usec = 0;

    // wait until timeout or data received
    n = select(s+1, &fds, NULL, NULL, &tv);
    if (n == 0){
        return -2; // timeout!
    }
    if (n == -1){
        return -1; // error
    }

    // data must be here, so do a normal recv()
    return recv(s, buf, len, 0);
}
I don't think there is any 100% effective software solution to DOS attacks in general; no matter what you do, someone could always throw more packets at your network interface than it can handle.
In this particular case, though, it looks like your program can only handle one connection at a time -- that is, incoming connection #2 won't be processed until connection #1 has completed its transaction (or timed out). So that's an obvious choke point -- all an attacker has to do is connect to your server and then do nothing, and your server is effectively disabled for (however long your timeout period is).
To avoid that, you would need to rewrite the server code to handle multiple TCP connections at once. You could do that by switching to non-blocking I/O (by passing the O_NONBLOCK flag to fcntl()) and using select() or poll() to wait for I/O on multiple sockets at once, by spawning multiple threads or sub-processes to handle incoming connections in parallel, or by using async I/O. (I personally prefer the first solution, but all of them can work to varying degrees.) In the first approach it is also practical to do things like forcibly closing any existing socket from a given IP address before accepting a new socket from that address; that way any given attacking computer can tie up at most one socket on your server at a time, which makes it harder to DOS your machine unless the attacker has access to a number of client machines.
You might read this article for more discussion about handling many TCP connections at the same time.
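To make the first option concrete, here is a rough sketch of a select()-based accept loop; handle_client_line() is a placeholder for your own firststr/secondstr protocol handling, not something from your code:
#include <sys/select.h>
#include <unistd.h>
#include <fcntl.h>
#include <vector>

bool handle_client_line(int fd);   // placeholder: advance this client's protocol state

void serve_forever(int listen_fd)
{
    std::vector<int> clients;
    for (;;)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(listen_fd, &fds);
        int maxfd = listen_fd;
        for (size_t i = 0; i < clients.size(); ++i) {
            FD_SET(clients[i], &fds);
            if (clients[i] > maxfd) maxfd = clients[i];
        }

        struct timeval tv = { 30, 0 };               // wake up periodically to sweep idle clients
        if (select(maxfd + 1, &fds, NULL, NULL, &tv) < 0)
            continue;

        if (FD_ISSET(listen_fd, &fds)) {             // new connection: accept it, make it non-blocking
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0) {
                fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
                clients.push_back(fd);
            }
        }

        for (size_t i = 0; i < clients.size(); )     // readable clients: let them make progress
        {
            if (FD_ISSET(clients[i], &fds) && !handle_client_line(clients[i])) {
                close(clients[i]);                   // drop disconnected or misbehaving clients
                clients.erase(clients.begin() + i);
            } else {
                ++i;
            }
        }
    }
}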
The main issue with DOS and DDOS attacks is that they exploit a weakness: the fact that the memory, ports, and processing resources you can use to provide the service are limited. Even if you have (nearly) infinite scalability using something like Amazon's server farms, you'll probably want to cap it to avoid the bill going through the roof.
At the server level, your main worry should be to avoid a crash, by imposing self-preservation limits. You can for example set a maximum number of connections that you know you can handle and simply refuse any other.
Full strategies will include specialized equipment, like firewalls, but there is always a way to play around them, and you will have to live with that.
For an example of a nasty attack, read about Slowloris on Wikipedia.
Slowloris tries to keep many connections to the target web server open and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to—but never completing—the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients.
There are many variants of DOS attacks, so a specific answer is quite difficult.
Your code leaks a file handle when it succeeds; this will eventually make you run out of fds to allocate, making accept() fail.
close() the socket when you're done with it.
Also, to directly answer your question, there is no solution for DOS caused by faulty code other than correcting it.
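In terms of the loop you posted, that means something like this at the end of the successful path (sketch):
//Send OK second string
send(client_fd, "OK", 2, 0);
// ... any remaining per-connection work ...
close(client_fd);   // without this, every successful connection leaks a descriptor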
This isn't a cure-all for DOS attacks, but using non-blocking sockets will definitely help with scalability. And if you can scale up, you can mitigate many DOS attacks. This design change includes setting both the listening socket used in accept calls and the client connection sockets to non-blocking mode.
Then instead of blocking on a recv(), send(), or accept() call, you block on a poll, epoll, or select call, and then handle that event for that connection as much as you are able to. Use a reasonable timeout (e.g. 30 seconds) so that you can wake up from the polling call to sweep and close any connections that don't seem to be progressing through your protocol chain.
This basically requires every socket to have its own "connection" struct that keeps track of the state of that connection with respect to the protocol you implement. It likely also means keeping a (hash) table of all sockets so they can be mapped to their connection structure instance. It also means "sends" are non-blocking as well; send and recv can return partial data amounts anyway.
You can look at an example of a non-blocking socket server in my project code here. (Look around line 360 for the start of the main loop in the Run method.)
An example of setting a socket into non-blocking state:
int SetNonBlocking(int sock)
{
    int result = -1;
    int flags = 0;

    flags = ::fcntl(sock, F_GETFL, 0);
    if (flags != -1)
    {
        flags |= O_NONBLOCK;
        result = fcntl(sock, F_SETFL, flags);
    }
    return result;
}
I would use boost::asio's asynchronous accept/connect facilities to create multiple connection handlers (this works in both single- and multi-threaded environments). In the single-threaded case, you just need to run boost::asio::io_service::run from time to time to make sure communications have time to be processed.
The reason you want to use asio is that it's very good at handling asynchronous communication logic, so it won't block (as in your case) if a connection gets stuck. You can even arrange how much processing you want to devote to opening new connections while continuing to serve existing ones.
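A minimal single-threaded sketch of that idea (class names are placeholders, and it assumes the pre-Boost-1.66 io_service spelling mentioned above):
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>

using boost::asio::ip::tcp;

// One Session per client; it keeps itself alive via shared_from_this while I/O is pending.
class Session : public boost::enable_shared_from_this<Session>
{
public:
    explicit Session(boost::asio::io_service& io) : socket_(io) {}
    tcp::socket& socket() { return socket_; }
    void Start()
    {
        socket_.async_read_some(boost::asio::buffer(buf_),
            boost::bind(&Session::OnRead, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }
private:
    void OnRead(const boost::system::error_code& ec, std::size_t /*n*/)
    {
        if (ec) return;        // client went away; the Session is destroyed automatically
        // ... check firststr/secondstr, send "OK", etc., then keep reading ...
        Start();
    }
    tcp::socket socket_;
    char buf_[512];
};

// The acceptor keeps accepting new clients while existing Sessions are being served.
class Server
{
public:
    Server(boost::asio::io_service& io, unsigned short port)
        : io_(io), acceptor_(io, tcp::endpoint(tcp::v4(), port)) { StartAccept(); }
private:
    void StartAccept()
    {
        boost::shared_ptr<Session> s(new Session(io_));
        acceptor_.async_accept(s->socket(),
            boost::bind(&Server::OnAccept, this, s, boost::asio::placeholders::error));
    }
    void OnAccept(boost::shared_ptr<Session> s, const boost::system::error_code& ec)
    {
        if (!ec) s->Start();
        StartAccept();         // immediately go back to accepting the next client
    }
    boost::asio::io_service& io_;
    tcp::acceptor acceptor_;
};

// Usage: boost::asio::io_service io; Server srv(io, 1234); io.run();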
I'm writing a program using the Winsock API because a friend wanted a simple program to check and see if a Minecraft server was running or not. It works fine if the server is running; however, if it is not running, the program freezes until, I'm assuming, the connection times out. Another issue is, if I have something like this (pseudo-code):
void connectButtonClicked()
{
    setLabel1Text("Connecting");
    attemptConnection();
    setLabel1Text("Done Connecting!");
}
it seems to skip right to attemptConnection(), completely ignoring what's above it. I notice this because the program will freeze, but it won't change the label to "Connecting".
Here is my actual connection code:
bool CConnectionManager::ConnectToIp(String^ ipaddr)
{
    if(!m_bValid)
        return false;

    const char* ip = StringToPConstChar(ipaddr);
    m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    if(isalpha(ip[0]))
    {
        ip = getIPFromAddress(ipaddr);
    }

    sockaddr_in service;
    service.sin_family = AF_INET;
    service.sin_addr.s_addr = inet_addr(ip);
    service.sin_port = htons(MINECRAFT_PORT);

    if(m_socket == NULL)
    {
        return false;
    }

    if (connect(m_socket, (SOCKADDR*)&service, sizeof(service)) == SOCKET_ERROR)
    {
        closesocket(m_socket);
        return false;
    }
    else
    {
        closesocket(m_socket);
        return true;
    }

    return true;
}
There is also code in the CConnectionManager's constructor to start up the Winsock API and such.
So, how do I avoid this freeze, and allow me to update something like a progress bar during connection? Do I have to make the connection in a separate thread? I have only worked with threads in Java, so I have no idea how to do that :/
Also: I am using a CLR Windows Form Application
I am using Microsoft Visual C++ 2008 Express Edition
Your code does not skip the label update. The update simply involves issuing window messages that have not been processed yet; that is why you do not see the new text appear before connecting the socket. You will have to pump the message queue for new messages before connecting the socket.
As for the socket itself, there is no connect timeout in the WinSock API, unfortunately. You have two choices to implement a manual timeout:
1) Assuming you are using a blocking socket (sockets are blocking by default), perform the connect in a separate worker thread.
2) If you don't want to use a thread, then switch the socket to non-blocking mode. The connect call will always return immediately, so your main code will not be blocked, and you will receive a notification later on about whether the connection was successful or not. There are several ways to detect that, depending on which API you use: WSAAsyncSelect(), WSAEventSelect(), or select().
Either way, while the connect is in progress, run a timer in your main thread. If the connect succeeds, stop the timer. If the timer elapses, disconnect the socket, which will cause the connect to abort with an error.
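A sketch of option 2 using select() (reusing the m_socket/service names from the question; error handling trimmed):
// Sketch: non-blocking connect with a manual timeout.
u_long nonBlocking = 1;
ioctlsocket(m_socket, FIONBIO, &nonBlocking);            // switch to non-blocking mode

if (connect(m_socket, (SOCKADDR*)&service, sizeof(service)) == SOCKET_ERROR
    && WSAGetLastError() != WSAEWOULDBLOCK)
{
    closesocket(m_socket);
    return false;                                        // failed immediately
}

fd_set writefds, exceptfds;
FD_ZERO(&writefds);  FD_SET(m_socket, &writefds);        // becomes writable if the connect succeeds
FD_ZERO(&exceptfds); FD_SET(m_socket, &exceptfds);       // becomes exceptional if it fails

timeval tv = { 5, 0 };                                   // 5-second timeout, adjust to taste
int n = select(0, NULL, &writefds, &exceptfds, &tv);     // first argument is ignored by Winsock

bool connected = (n > 0) && FD_ISSET(m_socket, &writefds);

u_long blocking = 0;
ioctlsocket(m_socket, FIONBIO, &blocking);               // back to blocking mode if you prefer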
Maybe you want to read here:
To assure that all data is sent and received on a connected socket before it is closed, an application should use shutdown to close connection before calling closesocket. http://msdn.microsoft.com/en-us/library/ms740481%28v=VS.85%29.aspx
Since you are in blocking mode, there still might be some data...
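In code, that pattern looks roughly like this (sketch):
// Sketch of a graceful close: stop sending, drain whatever is still in flight,
// then release the handle.
shutdown(m_socket, SD_SEND);            // we are done sending

char drain[256];
while (recv(m_socket, drain, sizeof(drain), 0) > 0)
    ;                                   // discard leftover data until the peer closes

closesocket(m_socket);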
Is there any reason why this shouldn't work?
[PseudoCode]
main() {
    for (int i = 0; i < 10000; ++i) {
        send(i, "abc", 3, 0);
    }
}
I mean, by sending "abc" through every number from 0 to 10000, aren't we in theory passing it through a lot of different sockets? Most numbers between 0 and 10000 will not correspond to any socket, but some will. Is this correct?
edit: The desired goal is to have "abc" sent through every application that has an open socket.
That will never work. File descriptors are useful only within the same process (and its children).
You have to create a socket (this will get you a file descriptor you own and can use), connect it to an endpoint (which of course has to be open and listening), and only then can you send something through it.
For example:
struct sockaddr_in pin;
struct hostent *hp;

/* go find out about the desired host machine */
if ((hp = gethostbyname("foobar.com")) == 0) {
    exit(1);
}

/* fill in the socket structure with host information */
memset(&pin, 0, sizeof(pin));
pin.sin_family = AF_INET;
pin.sin_addr.s_addr = ((struct in_addr *)(hp->h_addr))->s_addr;
pin.sin_port = htons(PORT);

/* grab an Internet domain socket: sd is the file descriptor */
if ((sd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
    exit(1);
}

/* connect to PORT on HOST */
if (connect(sd, (struct sockaddr *) &pin, sizeof(pin)) == -1) {
    exit(1);
}

/* send a message to the server PORT on machine HOST */
if (send(sd, argv[1], strlen(argv[1]), 0) == -1) {
    exit(1);
}
The other side of the coin is to create a listening socket (what servers do), which will receive connections. The process is similar but the calls change: socket(), bind(), listen(), accept(). Still, you have to create a socket to get the file descriptor in your own process, and you have to know where you want to listen or connect to.
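For completeness, the listening side follows the same pattern; a sketch with minimal error handling (PORT is assumed to be defined as in the snippet above):
/* create a listening socket in *this* process and accept one client */
int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

struct sockaddr_in sin;
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = INADDR_ANY;                 /* any local interface */
sin.sin_port = htons(PORT);

if (bind(listen_fd, (struct sockaddr *) &sin, sizeof(sin)) == -1) {
    exit(1);
}
if (listen(listen_fd, 5) == -1) {
    exit(1);
}

int client_fd = accept(listen_fd, NULL, NULL);    /* a new fd for this client */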
This won't work. File descriptor 0 in your process won't give you access to file descriptor 0 in some other application's process.
To answer your follow-up questions: socket IDs are local to each process. They behave a lot like file descriptors -- there are many processes running at once, and of course the operating system keeps track of which process has which files open. But within each process, file descriptors 0, 1, and 2 will refer to its own, private stdin, stdout, and stderr streams respectively.
When a socket is created, the file descriptor it's assigned to is likewise only accessible from within that process.
So, based on your replies to other people...
You have program A running on your machine which has opened a socket connection to some other program B, which could be running anywhere. But neither of these programs is the one you're trying to write here. And so you want your program to be able to send data through program A's socket connection to program B.
If this is roughly what you're trying to do, then no, you probably cannot do this. At least not without DLL injection to get into the process of program A.
Furthermore, even if you could find a way to send through program A's socket, you would have to know the exact details of the communication protocol that program A and B are using. If you don't, then you'll run the risk of sending data to program B that it doesn't expect, in which case it could terminate the connection, crash, or do any number of bad things depending on how it was written.
And if you are really trying to send a particular piece of data not just through a single program A but through every program on the computer with a socket connection open, then you are highly likely to encounter what I just described. Even if the data you want to send would work for one particular program, other programs are almost certainly using entirely different communication protocols and thus will most likely have problems handling your data.
Without knowing what you're really trying to achieve, I can't say whether your goal is just going to be complicated and time-consuming to accomplish or if it is simply a bad idea that you shouldn't ever be trying to do. But whatever it is, I would suggest trying to find a different and better way than trying to send data through another program's socket.
I am using Winsock and C++ to set up a server application. The problem I'm having is that the call to listen results in a first-chance exception. I guess normally these can be ignored(?), but I've found others with the same issue where it causes the application to hang every once in a while. Any help would be greatly appreciated.
The first chance exception is:
First-chance exception at 0x12345678 in MyApp.exe: 0x000006D9: There are no more endpoints available from the endpoint mapper.
I've found some evidence that this could be caused by the socket. The code that I'm working with is as follows; the exception occurs on the call to listen, in the fifth line from the bottom.
m_accept_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (m_accept_fd == INVALID_SOCKET)
{
    return false;
}

int optval = 1;
if (setsockopt(m_accept_fd, SOL_SOCKET, SO_REUSEADDR,
               (char*)&optval, sizeof(optval)))
{
    closesocket(m_accept_fd);
    m_accept_fd = INVALID_SOCKET;
    return false;
}

struct sockaddr_in local_addr;
local_addr.sin_family = AF_INET;
local_addr.sin_addr.s_addr = INADDR_ANY;
local_addr.sin_port = htons(m_port);

if (bind(m_accept_fd, (struct sockaddr *)&local_addr,
         sizeof(struct sockaddr_in)) == SOCKET_ERROR)
{
    closesocket(m_accept_fd);
    return false;
}

if (listen(m_accept_fd, 5) == SOCKET_ERROR)
{
    closesocket(m_accept_fd);
    return false;
}
On a very busy server, you may be running out of sockets. You may have to adjust some TCP/IP parameters. Adjust these two in the registry:
HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort REG_DWORD 65534 (decimal)
TcpTimedWaitDelay REG_DWORD 60 (decimal)
By default, there's a few minutes' delay between releasing a network port (socket) and when it can be reused. Also, depending on the OS version, there are only a few thousand ports in the range that Windows will use. On the server, run this at a command prompt:
netstat -an
and look at the results (piping to a file is easiest: netstat -an > netstat.txt). If you see a large number of ports from 1025 to 5000 in the TIME_WAIT state, then this is your problem, and it's solved by raising the max user port from 5000 to 65534 using the registry entry above. You can also adjust the delay via the registry entry above so that ports are recycled more quickly.
If this is not the problem, then the problem is likely the backlog of pending connections that you have set in your listen() call.
The original problem has nothing to do with Winsock. All the answers above are WRONG. Ignore the first-chance exception; it is not a problem with your application, just some internal error handling.
Are you actually seeing a problem, e.g., does the program end because of an unhandled exception?
The debugger may print the message even when there isn't a problem, for example, see here.
Uhh, maybe it's because you're greatly limiting the maximum number of incoming connections?
listen (m_accept_fd, 5)
// Limit here ^^^
If you allow a greater backlog, you should be able to handle your problem. Use something like SOMAXCONN instead of 5.
Also, if your problem is only on server startup, you might want to turn off LINGER (SO_LINGER) to prevent connections from hanging around and blocking the socket...
This won't answer your question directly, but since you're using C++, I would recommend using something like Boost::Asio to handle your socket code. This gives you a nice abstraction over the Winsock API and should allow you to more easily diagnose error conditions.