C++ sleep() breaks program

I am trying to connect to a computer through a socket in C++. Basically, what this code should do is try to connect, and if it can't connect, it should wait 3 seconds and try again.
while (true) {
    if (connect(sock, (struct sockaddr *) &echoserver, sizeof(echoserver)) >= 0)
    {
        break;
    }
    cout << "Connection failed!";
    sleep(3);
}
When the code runs, it connects if it can, but if it can't, the cout never appears and the sleep never seems to happen either. When sleep is not there, the program continually tries to connect to the socket, but with no delay between attempts it wouldn't connect anyway. I really need the delay to work.
Could anyone please help?

Once connect fails, the socket refers to the failed connection; you can no longer use it to connect to anything. You need to close the existing socket and allocate a new one. This would have been much easier to diagnose if you had reported the error in the cout statement (see the docs for strerror and errno).
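A minimal sketch of that fix, reusing the names from the question (the AF_INET/SOCK_STREAM arguments are an assumption, since the original socket() call is not shown; needs <sys/socket.h>, <unistd.h>, <cstring>, <cerrno> and <iostream>):
int sock = -1;
while (true) {
    sock = socket(AF_INET, SOCK_STREAM, 0);       // fresh socket for every attempt
    if (connect(sock, (struct sockaddr *) &echoserver, sizeof(echoserver)) >= 0)
        break;                                    // connected; sock stays open for use
    // report why the attempt failed, then discard the now-unusable socket
    std::cout << "Connection failed: " << std::strerror(errno) << std::endl;
    close(sock);
    sleep(3);
}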

How to close tcp server socket correctly

Can anyone explain what I am doing wrong with my TCP server termination?
In my program (single instance), I start another program which starts a TCP server. The TCP server is only allowed to listen for one connection.
After a connection is established between the client and my server, a few messages are exchanged. As soon as the message protocol has run through, I want to terminate the server socket, reset my internal states and close my sub-program.
After a few seconds, it should be possible to open my sub-program again.
If so, I open the socket again... The same network device, the same IP address and the same port as before are used...
My problem: my sub-program crashes when run the second time.
With netstat I analyzed my socket and found out that it stays in the LAST_ACK state.
This can take more than 60 seconds (a timeout?) until the socket is finally closed.
For closing the socket, I used the following code:
if (0 != shutdown(socketDescriptor_, SHUT_RDWR)) {
    std::cout << "Read/write of socket deactivated" << std::endl;
}
if (0 == close(socketDescriptor_)) {
    std::cout << "Socket is destroyed" << std::endl;
    socketDescriptor_ = -1;
}
Any ideas? Thanks for your help!
Kind regards,
Matthias
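One technique that is commonly used for the re-bind part of the scenario above (shown only as an illustrative sketch, not as a confirmed fix for the crash; listenFd and serverAddr are placeholder names) is to set SO_REUSEADDR on the new listening socket so that bind() can succeed while the old connection is still being torn down:
// placeholder names; error handling omitted for brevity
int listenFd = socket(AF_INET, SOCK_STREAM, 0);
int reuse = 1;
// allow bind() on the same address/port even if the previous socket
// is still stuck in a TCP teardown state
setsockopt(listenFd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
bind(listenFd, (struct sockaddr *) &serverAddr, sizeof(serverAddr));
listen(listenFd, 1);   // the question only allows a single connection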

Unix socket hangs on recv, until I place/remove a breakpoint anywhere

[TL;DR version: the code below hangs indefinitely on the second recv() call both in Release and Debug mode. In Debug, if I place or remove a breakpoint anywhere in the code, it makes the execution continue and everything behaves normally]
I'm coding a simple client-server communication using UNIX sockets. The server is in C++ while the client is in Python. The connection (a TCP socket on localhost) gets established with no problem, but when it comes to receiving data on the server side, it hangs on the recv function. Here is the code where the problem happens:
bool server::readBody(int csock) // csock is the socket file descriptor
{
    int bytecount;
    // protobuf-related variables
    google::protobuf::uint32 siz;
    kinMsg::request message;
    // if the code is working, the client will send false;
    // I initialize to true to be sure that the message is actually read
    message.set_endconnection(true);
    // First, read the 4-character header to extract the data size
    char buffer_hdr[5];
    if((bytecount = recv(csock, buffer_hdr, 4, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << ::std::endl;
    buffer_hdr[4] = '\0';
    siz = atoi(buffer_hdr);
    // Second, read the data. The code hangs here !!
    char buffer[siz];
    if((bytecount = recv(csock, (void *)buffer, siz, MSG_WAITALL)) == -1)
        ::std::cerr << "Error receiving data " << errno << ::std::endl;
    // Finally, process the protobuf message
    google::protobuf::io::ArrayInputStream ais(buffer, siz);
    google::protobuf::io::CodedInputStream coded_input(&ais);
    google::protobuf::io::CodedInputStream::Limit msgLimit = coded_input.PushLimit(siz);
    message.ParseFromCodedStream(&coded_input);
    coded_input.PopLimit(msgLimit);
    if (message.has_endconnection())
        return !message.endconnection();
    return false;
}
As can be seen in the code, the protocol is such that the client will first send the number of bytes in the message in a 4-character array, followed by the protobuf message itself. The first recv call works well and does not hang. Then, the code hangs on the second recv call, which should be recovering the body of the message.
Now, for the interesting part. When run in Release mode, the code hangs indefinitely and I have to kill either the client or the server. It does not matter whether I run it from my IDE (qtcreator), or from the CLI after a clean build (using cmake/g++).
When I run the code in Debug mode, it also hangs at the same recv() call. Then, if I place or remove a breakpoint ANYWHERE in the code (before or after that line), it starts again and works perfectly: the server receives the data and reads the correct message.endconnection() value before returning from the readBody function. The breakpoint I have to place to trigger this behavior is not necessarily hit. Since readBody() is in a loop (my C++ server waits for requests from the Python client), the same thing happens again at the next iteration, and I have to place or remove a breakpoint anywhere in the code (which, again, is not necessarily hit) in order to get past that recv() call. The loop looks like this:
bool connection = true;
// server waiting for a client connection
if (!waitForConnection(connectionID)) std::cerr << "Error accepting connection" << ::std::endl;
// main loop
while(connection)
{
    if((bytecount = recv(connectionID, buffer, 4, MSG_PEEK)) == -1)
    {
        ::std::cerr << "Error receiving data " << ::std::endl;
    }
    else if (bytecount == 0)
        break;
    try
    {
        if(readBody(connectionID))
        {
            sendResponse(connectionID);
        }
        // if the client is requesting disconnection, leave the loop
        else
        {
            std::cout << "Disconnection requested by client. Exiting ..." << std::endl;
            connection = false;
        }
    }
    catch(...)
    {
        std::cerr << "Error receiving message from client" << std::endl;
    }
}
Finally, as you can see, when the program returns from readBody(), it sends another message back to the client, which processes it and prints it to standard output (the Python code works; it is not shown because the question is already long enough). From this last behavior I conclude that the protocol and the client code are OK. I tried putting sleep instructions at many points to see whether it was a timing problem, but it did not change anything.
I searched all over Google and SO for a similar problem, but did not find anything. Help would be much appreciated !
The solution is to not use any flags. Call recv with 0 for the flags or just use read instead of recv.
You are asking the socket for data that is not there. Suppose recv expects 10 bytes but the client has only sent 6: MSG_WAITALL states clearly that the call should block until all 10 bytes are available in the stream.
If you don't use any flags, the call will succeed with a bytecount of 6, which has the same effect as MSG_DONTWAIT, without the potential side effects of non-blocking calls.
I tested this on the GitHub project, and it works.
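A minimal sketch of what that looks like in the readBody() context above (not the poster's exact code; it assumes buffer and siz from the function, and <cstring> for strerror):
// recv with no flags returns whatever is available right now,
// so let the returned bytecount drive the parsing instead of siz
int bytecount = recv(csock, buffer, siz, 0);
if (bytecount == -1)
    ::std::cerr << "Error receiving data: " << strerror(errno) << ::std::endl;
else
    // parse only the bytes that actually arrived
    google::protobuf::io::ArrayInputStream ais(buffer, bytecount);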
The solution was to replace MSG_WAITALL with MSG_DONTWAIT in the recv() calls, which makes them non-blocking; the code now works fine.
However, this still raises many questions, the first being: why did placing or removing a breakpoint make the blocking version work?
If the call was blocking in the first place, one could assume it was because there is no data on the socket. Let's consider both situations:
There is no data on the socket, which would be why the blocking recv() call never returned. Changing it to a non-blocking recv() call would then, in the same situation, return an error. If not, the protobuf deserialization would afterwards fail when trying to deserialize from an empty buffer. But it does not ...
There is data on the socket. Then why on earth would it block in the first place?
Obviously there is something that I don't get about sockets in C, and I'd be very happy if somebody has an explanation for this behavior !

Winsock - Client disconnected, closesocket loop / maximum connections

I am learning Winsock and trying to create some simple programs to get to know it. Following the tutorials, I managed to create a server which can handle and manage multiple connections, and a client. It works the way it is supposed to, but:
I tried to make a loop where I check whether any of the clients has disconnected and, if it has, close its socket.
I managed to write something which checks whether a socket has disconnected, but it does not handle 2 or more connections at a time.
Can anyone tell me how to make a working loop that checks every client for disconnection and closes its socket? The point of all this is to enforce something like a maximum number of clients connected to the server at one time. Thanks in advance.
while (true)
{
    ConnectingSocket = accept(ListeningSocket, (SOCKADDR*)&addr, &addrlen);
    if (ConnectingSocket != INVALID_SOCKET)
    {
        Connections[ConnectionsCounter] = ConnectingSocket;
        char *Name = new char[64];
        ZeroMemory(Name, 64);
        sprintf(Name, "%i", ConnectionsCounter);
        send(Connections[ConnectionsCounter], Name, 64, 0);
        cout << "New connection !\n";
        ConnectionsCounter++;
        char data;
        if (ConnectionsCounter > 0)
        {
            for (int i = 0; i < ConnectionsCounter; i++)
            {
                if (recv(Connections[i], &data, 1, MSG_PEEK))
                {
                    closesocket(Connections[i]);
                    cout << "Connection closed.\n";
                    ConnectionsCounter = ConnectionsCounter - 1;
                }
            }
        }
    }
    Sleep(50);
}
It seems that you want to manage multiple connections using a single thread, right?
Briefly, socket communication has two modes, blocking and non-blocking, and blocking is the default. Let's focus on your code:
for (int i = 0; i < ConnectionsCounter; i++)
{
    if (recv(Connections[i], &data, 1, MSG_PEEK))
    {
        closesocket(Connections[i]);
        cout << "Connection closed.\n";
        ConnectionsCounter = ConnectionsCounter - 1;
    }
}
In the above code you call the recv function, and it will block until the peer has sent a message to you or the peer has closed the link. So suppose you have two connections, namely Connections[0] and Connections[1]. If you are blocked in recv on Connections[0] and, at the same time, Connections[1] disconnects, you will not know it, because you are still blocking in recv(Connections[0]). Only when Connections[0] sends you a message or closes its socket does the loop continue, and only then do you finally see that the other connection is gone, even though it may have disconnected 10 minutes ago.
To solve this, I think you need the book Network Programming for Microsoft Windows. There are several approaches, such as the one-thread-per-socket pattern, asynchronous communication, non-blocking mode, and so on.
I forgot to point out a bug; pay attention here:
closesocket(Connections[i]);
cout << "Connection closed.\n";
ConnectionsCounter = ConnectionsCounter - 1;
Let me give an example to illustrate it. Say we have two connections with indexes 0 and 1, so ConnectionsCounter should be 2, right? When Connections[0] disconnects, ConnectionsCounter goes from 2 to 1 and the loop exits. A new client connects, and you store the new client socket as Connections[ConnectionsCounter(=1)] = ConnectingSocket; oops, that's a bug, because the disconnected socket's index is 0 and index 1 is already in use by another link. You are reusing index 1.
Why not use a std::vector to store the sockets?
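A rough sketch of that idea (not drop-in code; pruneDisconnectedClients is just an illustrative name, and it assumes the client sockets have been put into non-blocking mode, for example with ioctlsocket and FIONBIO, so the peeking recv cannot block as described above):
#include <iostream>
#include <vector>
#include <winsock2.h>

std::vector<SOCKET> connections;   // filled in by the accept loop

void pruneDisconnectedClients()
{
    for (auto it = connections.begin(); it != connections.end(); )
    {
        char data;
        int r = recv(*it, &data, 1, MSG_PEEK);
        // 0 means the peer closed gracefully; SOCKET_ERROR with anything other
        // than WSAEWOULDBLOCK means the connection is broken
        if (r == 0 || (r == SOCKET_ERROR && WSAGetLastError() != WSAEWOULDBLOCK))
        {
            closesocket(*it);
            std::cout << "Connection closed.\n";
            it = connections.erase(it);   // erase returns the next valid iterator
        }
        else
        {
            ++it;                         // data pending or nothing to report; keep it
        }
    }
}
Erasing through the iterator avoids the stale-index problem described above, because the remaining sockets simply shift down in the vector.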
hope it helps~

Multithreading in C++, receive message from socket

I have studied Java for 8 months, but decided to learn some C++ in my spare time.
I'm currently making a multithreaded server in Qt with MinGW. My problem is that when a client connects, I create an instance of Client (which is a class) and pass the socket to the Client constructor.
Then I start a thread on the client object (startClient()) which is supposed to wait for messages, but it doesn't. By the way, startClient is the method I create the thread from. See the code below.
What happens instead? When I try to send messages to the server I only get errors, the server doesn't print that a new client has connected, and for some reason my computer starts working really hard. Qt Creator also gets very slow until I close the server program.
What I am actually trying to achieve is an object that derives from the thread, but I have heard that this isn't a very good idea in C++.
The listener loop in the server:
for (;;)
{
    if ((sock_CONNECTION = accept(sock_LISTEN, (SOCKADDR*)&ADDRESS, &AddressSize)))
    {
        cout << "\nClient connected" << endl;
        Client client(sock_CONNECTION); // new object and pass the socket
        std::thread t1(&Client::startClient, client); // create a thread running the method
        t1.detach();
    }
}
the Client class:
Client::Client(SOCKET socket)
{
    this->socket = socket;
    cout << "hello from clientconstructor ! " << endl;
}

void Client::startClient()
{
    cout << "hello from clientmethod ! " << endl;
    // WHEN I ADD THE CODE BELOW I DON'T GET ANY OUTPUT ON THE CONSOLE!
    // No messages get received either.
    char RecvdData[100] = "";
    int ret;
    for(;;)
    {
        try
        {
            ret = recv(socket, RecvdData, sizeof(RecvdData), 0);
            cout << RecvdData << endl;
        }
        catch (int e)
        {
            cout << "Error sending message to client" << endl;
        }
    }
}
It looks like your Client object is going out of scope after you detach it.
if (/* ... */)
{
    Client client(sock_CONNECTION);
    std::thread t1(&Client::startClient, client);
    t1.detach();
} // GOING OUT OF SCOPE HERE
You'll need to create your Client object through a pointer and manage it yourself, or define it at a higher level where it won't go out of scope.
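A sketch of the pointer approach (an illustration only, reusing the names from the question; the std::shared_ptr captured by the thread keeps the object alive after the accept loop moves on):
#include <memory>
#include <thread>

if ((sock_CONNECTION = accept(sock_LISTEN, (SOCKADDR*)&ADDRESS, &AddressSize)) != INVALID_SOCKET)
{
    cout << "\nClient connected" << endl;
    // heap-allocate the Client; the lambda's copy of the shared_ptr keeps it
    // alive for as long as the detached thread runs
    auto client = std::make_shared<Client>(sock_CONNECTION);
    std::thread t1([client]() { client->startClient(); });
    t1.detach();
}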
The fact that you never see any output from the Server likely means that your client is unable to connect to your Server in the first place. Check that you are doing your IP addressing correctly in your connect calls. If that looks good, then maybe there is a firewall blocking the connection. Turn that off or open the necessary ports.
Your connecting client is likely getting an error from connect that it is interpreting as success and then trying to send lots of traffic on an invalid socket as fast as it can, which is why your machine seems to be working hard.
You definitely need to check the return values from accept, connect, read and write more carefully. Also, make sure that you aren't running your server's accept socket in non-blocking mode. I don't think you are, because you aren't seeing any output, but if you were, it would loop infinitely on errors, spawning tons of threads that would also loop infinitely on errors and likely bring your machine to its knees.
If I misunderstood what is happening and you do actually get a client connection and see the "Client connected" and "hello from clientmethod ! " output, then it is highly likely that your calls to recv() are failing and you are ignoring the failure. In that case you are in a tight infinite loop that repeatedly outputs "" as fast as possible.
You also probably want to change your catch block to catch (...) rather than int. I doubt either recv() or cout throw an int. Even so, that catch block won't be invoked when recv fails because recv doesn't throw any exceptions AFAIK. It returns its failure indicator through its return value.
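As a sketch of what checking the recv() return value inside that loop could look like (an illustration, not a drop-in fix):
for (;;)
{
    int ret = recv(socket, RecvdData, sizeof(RecvdData) - 1, 0);
    if (ret > 0)
    {
        RecvdData[ret] = '\0';   // recv does not null-terminate the buffer
        cout << RecvdData << endl;
    }
    else if (ret == 0)
    {
        cout << "Client disconnected" << endl;
        break;                   // peer closed the connection gracefully
    }
    else
    {
        cout << "recv failed: " << WSAGetLastError() << endl;
        break;                   // error; leave the loop instead of spinning
    }
}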

How to control the connect timeout with the Winsock API?

I'm writing a program using the Winsock API because a friend wanted a simple program to check whether a Minecraft server is running or not. It works fine if the server is running; however, if it is not, the program freezes until, I'm assuming, the connection times out. Another issue is that if I have something like this (pseudo-code):
void connectButtonClicked()
{
    setLabel1Text("Connecting");
    attemptConnection();
    setLabel1Text("Done Connecting!");
}
it seems to skip right to attemptConnection(), completely ignoring what's above it. I notice this because the program freezes but won't change the label to "Connecting".
Here is my actual connection code:
bool CConnectionManager::ConnectToIp(String^ ipaddr)
{
    if(!m_bValid)
        return false;
    const char* ip = StringToPConstChar(ipaddr);
    m_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(isalpha(ip[0]))
    {
        ip = getIPFromAddress(ipaddr);
    }
    sockaddr_in service;
    service.sin_family = AF_INET;
    service.sin_addr.s_addr = inet_addr(ip);
    service.sin_port = htons(MINECRAFT_PORT);
    if(m_socket == NULL)
    {
        return false;
    }
    if (connect(m_socket, (SOCKADDR*)&service, sizeof(service)) == SOCKET_ERROR)
    {
        closesocket(m_socket);
        return false;
    }
    else
    {
        closesocket(m_socket);
        return true;
    }
    return true;
}
There is also code in the CConnectionManager constructor to start up the Winsock API and such.
So, how do I avoid this freeze and allow something like a progress bar to update during the connection? Do I have to make the connection in a separate thread? I have only worked with threads in Java, so I have no idea how to do that :/
Also: I am using a CLR Windows Form Application
I am using Microsoft Visual C++ 2008 Express Edition
Your code does not skip the label update. The update simply involves issuing window messages that have not been processed yet; that is why you do not see the new text appear before connecting the socket. You will have to pump the message queue for new messages before connecting the socket.
As for the socket itself, there is no connect timeout in the WinSock API, unfortunately. You have two choices to implement a manual timeout:
1) Assuming you are using a blocking socket (sockets are blocking by default), perform the connect in a separate worker thread.
2) If you don't want to use a thread, switch the socket to non-blocking mode. Connecting the socket will then always return immediately, so your main code will not be blocked, and you will receive a notification later on about whether the connection succeeded. There are several ways to detect that, depending on which API you use - WSAAsyncSelect(), WSAEventSelect(), or select(). A sketch of the select() variant appears after this list.
Either way, while the connect is in progress, run a timer in your main thread. If the connect succeeds, stop the timer. If the timer elapses, disconnect the socket, which will cause the connect to abort with an error.
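A sketch of the select() variant (an illustration only, reusing m_socket and service from the question; error handling is trimmed and the 5-second timeout is just an example value):
// put the socket into non-blocking mode
u_long nonBlocking = 1;
ioctlsocket(m_socket, FIONBIO, &nonBlocking);

// the connect call now returns immediately (WSAEWOULDBLOCK while in progress)
connect(m_socket, (SOCKADDR*)&service, sizeof(service));

// Winsock reports a completed connect in the write set and a failed one
// in the except set, so wait on both with a bounded timeout
fd_set writeSet, exceptSet;
FD_ZERO(&writeSet);
FD_ZERO(&exceptSet);
FD_SET(m_socket, &writeSet);
FD_SET(m_socket, &exceptSet);
timeval timeout = { 5, 0 };

int n = select(0, NULL, &writeSet, &exceptSet, &timeout);
bool connected = (n > 0) && FD_ISSET(m_socket, &writeSet);
if (!connected)
{
    closesocket(m_socket);   // timed out or connect failed; abort the attempt
}
Note that select() itself still blocks the calling thread for up to the timeout, so in a GUI it is usually combined with the timer approach described above rather than called directly from the button handler.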
Maybe you want to read here:
To assure that all data is sent and received on a connected socket before it is closed, an application should use shutdown to close connection before calling closesocket. http://msdn.microsoft.com/en-us/library/ms740481%28v=VS.85%29.aspx
Since you are in blocking mode, there might still be some data...