Implementing a disconnect feature - C++

So, let's say I have a client, and to respond to server messages, the client must have a function that listens for them, like in my code:
int Client::loop(void *data)
{
    Client *instance = (Client*)data;
    for (;;)
    {
        boost::array<unsigned char, PACKET_LENGTH> buf;
        boost::system::error_code error;

        // Read any incoming packet
        size_t len = instance->socket.read_some(boost::asio::buffer(buf), error);
        if (error == boost::asio::error::eof)
        {
            // Connection closed, return
            return 0;
        }

        DataHeader header = static_cast<DataHeader>(buf[0]);
        switch (header) // Let's see which type of packet the server is sending.
        {
        case GTREG: // Server is sending a GTREG response.
            instance->getRegionResponse(buf);
            break;
        case PLOBJ: // Server is sending a PLOBJ response.
            instance->placeObjResponse(buf);
            break;
        case MOVPL: // WIP.
            break;
        case SYOBJ: // Server is sending an object that another player placed.
            instance->syncObj(buf);
            break;
        default:
            break;
        }
    }
}
This function is made a thread by SDL, so that the main process of my program can do work without having to listen to the server. Now, at some point, I'll want to close the program, and to do so I have to disconnect the listening socket.
This "closing function" is called by the main process of my program, so it somehow needs to tell the client thread to shut down before closing.
Now, how do I do that? I've tried a function like this:
void Client::disconnect()
{
    boost::system::error_code error;
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error);
    socket.close();
}
However, using it crashes the application with an error.
Is there something I'm missing? Thanks in advance for any help!

You should shut down the socket, then wait for the thread to terminate before closing the socket.
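For example, a minimal sketch of that order of operations, assuming the Client keeps the SDL_Thread* returned by SDL_CreateThread in a hypothetical member called listenThread:

void Client::disconnect()
{
    boost::system::error_code error;

    // 1. Shut down both directions: the blocked read_some() in the listener
    //    thread returns (typically with boost::asio::error::eof), so loop() exits.
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error);

    // 2. Wait for the listener thread to finish before touching the socket again.
    //    `listenThread` is a hypothetical SDL_Thread* member holding the handle
    //    returned by SDL_CreateThread.
    int status = 0;
    SDL_WaitThread(listenThread, &status);

    // 3. Only now close the socket; no other thread can be using it anymore.
    socket.close(error);
}

Note that the loop should also treat errors other than eof as a reason to return, otherwise it can keep spinning on a dead socket.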

Related

Recv() returning SOCKET_ERROR when I connect a client to the server instead of blocking and waiting for message

I am relatively new to network programming and multithreading in C++. Currently my recv() call returns an unknown error. I'm not quite sure where the error is coming from at the moment and would appreciate some help.
I used PuTTY to connect to the server locally.
class Threaded_TCPListener{

int Threaded_TCPListener::Init()
{
    // Initializing WinSock
    WSADATA wsData;
    WORD ver = MAKEWORD(2,2);
    int winSock = WSAStartup(ver, &wsData);
    if(winSock != 0)
        return winSock;

    // Creating listening socket
    this->socket = ::socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if(this->socket == INVALID_SOCKET)
        return WSAGetLastError();

    // Fill sockaddr with ip addr and port
    sockaddr_in hint;
    hint.sin_family = AF_INET;
    hint.sin_port = htons(this->port);
    inet_pton(AF_INET, this->ipAddress, &hint.sin_addr);

    // Bind hint to socket
    if(bind(this->socket, (sockaddr*)&hint, sizeof(hint)) == SOCKET_ERROR)
        return WSAGetLastError();

    // Start listening on socket
    if(listen(this->socket, SOMAXCONN) == SOCKET_ERROR)
        return WSAGetLastError();

    // Accept first client
    this->createAcceptThread();

    return 0;
}

int Threaded_TCPListener::Run()
{
    bool isRunning = true;

    // Read from all clients
    std::vector<std::thread> threads;
    threads.reserve(this->clients.size());

    // Recv from client sockets
    for (int i=0; i < this->clients.size(); ++i)
    {
        threads.emplace_back(std::thread(&Threaded_TCPListener::receiveFromSocket, this, socket));
    }

    // Wait for all threads to finish
    for(std::thread& t : threads)
    {
        t.detach();
    }

    return 0;
}

void Threaded_TCPListener::onMessageReceived(int clientSocket, const char* msg, int length)
{
    Threaded_TCPListener::broadcastToClients(clientSocket, msg, length);

    std::thread t(&Threaded_TCPListener::receiveFromSocket, this, clientSocket);
    t.detach();

    return;
}

void Threaded_TCPListener::sendMessageToClient(int clientSocket, const char * msg, int length)
{
    send(clientSocket, msg, length, 0);

    return;
}

void Threaded_TCPListener::broadcastToClients(int senderSocket, const char * msg, int length)
{
    std::vector<std::thread> threads;
    threads.reserve(clients.size());

    // Iterate over all clients
    for (int sendSock : this->clients)
    {
        if(sendSock != senderSocket)
            threads.emplace_back(std::thread(&Threaded_TCPListener::sendMessageToClient, this, sendSock, msg, length));
    }

    // Wait for all threads to finish
    for(std::thread& t : threads)
        t.join();

    return;
}

void Threaded_TCPListener::createAcceptThread()
{
    // Start accepting clients on a new thread
    this->listeningThread = std::thread(&Threaded_TCPListener::acceptClient, this);
    this->listeningThread.detach();

    return;
}

void Threaded_TCPListener::acceptClient()
{
    int client = accept(this->socket, nullptr, nullptr);

    // Error
    if(client == INVALID_SOCKET)
    {
        std::printf("Accept Err: %d\n", WSAGetLastError());
    }
    // Add client to clients queue
    else
    {
        // Add client to queue
        this->clients.emplace(client);

        // Client Connect Confirmation
        onClientConnected(client); // Prints msg on server

        // Create another thread to accept more clients
        this->createAcceptThread();
    }

    return;
}

void Threaded_TCPListener::receiveFromSocket(int receivingSocket)
{
    // Byte storage
    char buff[MAX_BUFF_SIZE];

    // Clear buff
    memset(buff, 0, sizeof(buff));

    // Receive msg
    int bytesRecvd = recv(receivingSocket, buff, MAX_BUFF_SIZE, 0);
    if(bytesRecvd <= 0)
    {
        char err_buff[1024];
        strerror_s(err_buff, bytesRecvd);
        std::cerr << err_buff;

        // Close client
        this->clients.erase(receivingSocket);
        closesocket(receivingSocket);

        onClientDisconnected(receivingSocket); // Prints msg on server
    }
    else
    {
        onMessageReceived(receivingSocket, buff, bytesRecvd);
    }
}
}
I am trying to create a multithreaded TCP 'server' that will handle incoming clients by having an accept thread continuously running (listening for new connections), and one thread blocking on recv() for each client connected to the server.
Your Init looks fine:
- create socket, bind it, listen on it, start the accept thread

Your accept thread's acceptClient looks sort of OK:
- print some message
- add the client socket to the clients queue
- create a new accept thread

Your Run makes no sense:
- create one thread per element in clients to receive from the listening socket
It looks like you are spawning a new thread for every single socket action. That is a pretty wasteful design; instead of exiting as soon as one action is done, a thread can loop back and handle the next one.
So creating a new accept thread in acceptClient is a waste; you could just loop back to the beginning to ::accept the next client. Like so:
acceptClient() {
    while (alive) {
        int client = accept(socket, ...);
        createClientHandler(client);
    }
}
What seems to be missing is spawning a new client thread to service the client socket. You currently do this in Run, but that's before any of the clients are actually accepted. And you do it for the wrong socket! Instead, you should be spawning the receiveFromSocket threads in acceptClient, and passing it the client socket. So that's a bug.
In your receiveFromSocket you also need not create another thread to receiveFromSocket again -- just loop back to the beginning.
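For instance, a rough sketch of receiveFromSocket restructured as a loop (same member names as your code; onMessageReceived would then no longer spawn another receive thread):

void Threaded_TCPListener::receiveFromSocket(int clientSocket)
{
    char buff[MAX_BUFF_SIZE];

    // Keep receiving on this client socket until it closes or recv() fails.
    for (;;)
    {
        int bytesRecvd = recv(clientSocket, buff, MAX_BUFF_SIZE, 0);
        if (bytesRecvd <= 0)
        {
            // Connection closed or error: clean up and let this thread end.
            this->clients.erase(clientSocket);
            closesocket(clientSocket);
            onClientDisconnected(clientSocket);
            return;
        }

        // Hand the message off; no new thread is needed to keep receiving.
        onMessageReceived(clientSocket, buff, bytesRecvd);
    }
}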
The biggest concern with this thread-per-action design is that you are spawning sender threads on every incoming message. This means you could actually have several sender threads attempting to ::send on the same TCP socket. That's not very safe.
The order of calls made to WSASend is also the order in which the buffers are transmitted to the transport layer. WSASend should not be called on the same stream-oriented socket concurrently from different threads, because some Winsock providers may split a large send request into multiple transmissions, and this may lead to unintended data interleaving from multiple concurrent send requests on the same stream-oriented socket.
https://learn.microsoft.com/en-us/windows/desktop/api/winsock2/nf-winsock2-wsasend
Similarly, instead of spawning threads in broadcastToClients, I suggest you just spawn one persistent sender thread per client socket in acceptClient (together with the receiveFromSocket thread within some createClientHandler).
To communicate with the sender threads you should use thread-safe blocking queues. Each sender thread would look like this:
while (alive) {
    msg = queue.next_message();
    send(client_socket, msg);
}
Then on message received you just do:
for (client : clients) {
    client.queue.put_message(msg);
}
So to summarize, to handle each client you need a structure like this:
struct Client {
    int client_socket;
    BlockingQueue queue;

    // optionally, if you want to keep track of your threads
    // to be able to safely clean up
    std::thread recv_thread, send_thread;
};
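BlockingQueue is not a standard library type; a minimal sketch of such a thread-safe blocking queue, built on std::mutex and std::condition_variable and using std::string as the message type purely for illustration, might look like this:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Minimal thread-safe blocking queue sketch.
class BlockingQueue {
public:
    void put_message(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    std::string next_message() {
        std::unique_lock<std::mutex> lock(mutex_);
        // Block until there is something to send.
        cv_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> queue_;
};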
Safe cleanup is a whole other story.
Finally, a remark on this comment in your code:
// Wait for all threads to finish
for(std::thread& t : threads)
{
t.detach();
}
That's almost the opposite of what std::thread::detach does:
https://en.cppreference.com/w/cpp/thread/thread/detach
It allows you to destroy the thread object without having to wait for the thread to finish execution.
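If the intent is actually to wait for the threads, the loop has to join them instead, e.g.:

// Actually wait for all threads to finish before moving on.
for (std::thread& t : threads)
{
    if (t.joinable())
        t.join();
}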
There is a misconception in the code about how a TCP server has to be implemented:
You seem to assume that you can have a single server socket file descriptor which can handle all communication. This is not the case. You must have a single dedicated socket file descriptor which is just used for listening and accepting incoming connections, and then you have one additional file descriptor for each existing connection.
In your code I see that you invoke receiveFromSocket() always with the listening socket. This is wrong. Also invoking receiveFromSocket() in a loop for all clients is wrong.
What you need to do instead is:
- Have one dedicated thread which calls accept() in a loop. There is no performance benefit in calling accept() from multiple threads.
- Once accept() returns a new connection, spawn a new thread which calls recv() in a loop. This will then block and wait for new data, as you expect in your question (see the sketch below).
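Roughly, and leaving out WSAStartup, error handling and cleanup, that structure looks like this (handleClient is a hypothetical helper name, not something from your code):

#include <thread>

// Per-connection loop: blocks in recv() until the peer closes or an error occurs.
void handleClient(int clientSocket)
{
    char buff[1024];
    for (;;)
    {
        int n = recv(clientSocket, buff, sizeof(buff), 0);
        if (n <= 0)
            break;
        // ... process the n received bytes ...
    }
    closesocket(clientSocket);
}

// Dedicated accept loop: a single thread runs this for the lifetime of the server.
void acceptLoop(int listenSocket)
{
    for (;;)
    {
        int client = accept(listenSocket, nullptr, nullptr);
        if (client == INVALID_SOCKET)
            continue;

        // One thread per connection, each blocking in its own recv() loop.
        std::thread(handleClient, client).detach();
    }
}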
You also need to drop the habit of calling individual functions from new threads. This is not multithreaded programming. A thread usually contains a loop. Everything else is usually a design flaw.
Also note that multithreaded programming is still rocket science in 2019, especially in C++. If you are not an absolute expert you will not be able to do it. Also note that absolute experts in multithreaded programming will try to avoid multithreaded programming whenever possible. A lot of seemingly concurrent tasks which are I/O bound can better be handled by a single threaded event based system.

Non-blocking WebSocket server with POCO C++

I'm trying to build a WebSocket server with POCO.
My server should send data to the client all the time, at a fixed time interval. And when the client sends some data, the server should manipulate the data it sends to the client.
My handleRequest method within my WebSocketRequestHandler:
void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
{
    WebSocket ws(request, response);
    char buffer[1024];
    int flags = 0;
    int n = 0;
    do {
        // receiving message
        n = ws.receiveFrame(buffer, sizeof(buffer), flags);
        // ... do stuff with the message

        // sending message
        char* msg = (char *) "Message from server"; // actually manipulated, when message received
        n = sizeof(msg);
        ws.sendFrame(msg, strlen(msg), WebSocket::FRAME_TEXT);

        sleep(1); // time interval for sending messages
    } while (n > 0 || (flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE);
}
The problem is that the method gets stuck in ws.receiveFrame() until it gets a frame.
So how can I solve this, so that receiveFrame() does not block the loop?
Is there a better way to solve this problem altogether?
Thanks.
You should set a receive timeout.
ws.setReceiveTimeout(timeout);
So, you will get a Poco::TimeoutException every timeout microseconds, and in the catch block you can do whatever you need, including sending data on that WebSocket.
ws.setReceiveTimeout(1000000); // one second
do {
    try {
        int n = ws.receiveFrame(buffer, sizeof(buffer), flags);
        // your code to manipulate the buffer
    }
    catch (Poco::TimeoutException&) {
        // ....
    }
    // your code to execute each second and/or after receiving a frame
} while (condition);
Use a std::thread or pthread and call the blocking function in the thread's function.
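A minimal sketch of that idea inside handleRequest, assuming it is acceptable in your setup to receive on one thread while the handler thread keeps sending (error handling omitted; the running flag and buffer sizes are just illustrative):

#include <atomic>
#include <chrono>
#include <cstring>
#include <thread>

void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
{
    WebSocket ws(request, response);
    std::atomic<bool> running{true};

    // Receiver thread: blocks in receiveFrame() without stalling the send loop.
    std::thread receiver([&ws, &running] {
        char buffer[1024];
        int flags = 0;
        int n = 0;
        do {
            n = ws.receiveFrame(buffer, sizeof(buffer), flags);
            // ... do stuff with the message (guard any shared state with a mutex) ...
        } while (n > 0 && (flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE);
        running = false;
    });

    // Send loop: runs on the original handler thread at a fixed interval.
    while (running) {
        const char* msg = "Message from server";
        ws.sendFrame(msg, static_cast<int>(std::strlen(msg)), WebSocket::FRAME_TEXT);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    receiver.join();
}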

boost socket comms are not working past one exchange

I am converting an app which had a very simple heartbeat / status monitoring connection between two services. As it now needs to run on Linux in addition to Windows, I thought I'd use Boost (v1.51, and I cannot upgrade - the Linux compilers are too old and the Windows compiler is Visual Studio 2005) to make it platform agnostic. I really would prefer not to have two code files, one for each OS, or a littering of #defines throughout the code, when Boost offers the possibility of being pleasant to read (six months after I've checked in and forgotten this code!).
My problem now is that the connection is timing out. Actually, it's not really working at all.
First time through, the 'status' message is sent, and it's received by the server end, which sends back an appropriate response. The server end then goes back to waiting on the socket for another message. The client end (this code) sends the 'status' message again... but this time, the server never receives it and the read_some() call blocks until the socket times out. I find it really strange, because the server end has not changed. The only thing that has changed is that I altered the client code from basic winsock2 sockets to this code. Previously, it connected and just looped through send / recv calls until the program was aborted or the 'lockdown' message was received.
Why would subsequent calls to send silently fail to send anything on the socket, and what do I need to adjust in order to restore the simple send / recv flow?
#include <boost/signals2/signal.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
using namespace std;
boost::system::error_code ServiceMonitorThread::ConnectToPeer(
    tcp::socket &socket,
    tcp::resolver::iterator endpoint_iterator)
{
    boost::system::error_code error;
    int tries = 0;
    for (; tries < maxTriesBeforeAbort; tries++)
    {
        boost::asio::connect(socket, endpoint_iterator, error);
        if (!error)
        {
            break;
        }
        else if (error != make_error_code(boost::system::errc::success))
        {
            // Error connecting to service... may not be running?
            cerr << error.message() << endl;
            boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
        }
    }
    if (tries == maxTriesBeforeAbort)
    {
        error = make_error_code(boost::system::errc::host_unreachable);
    }
    return error;
}
// Main thread-loop routine.
void ServiceMonitorThread::run()
{
    boost::system::error_code error;

    tcp::resolver resolver(io_service);
    tcp::resolver::query query(hostnameOrAddress, to_string(port));
    tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
    tcp::socket socket(io_service);

    error = ConnectToPeer(socket, endpoint_iterator);
    if (error && error == boost::system::errc::host_unreachable)
    {
        TerminateProgram();
    }

    boost::asio::streambuf command;
    std::ostream command_stream(&command);
    command_stream << "status\n";

    boost::array<char, 10> response;
    int retry = 0;
    while (retry < maxTriesBeforeAbort)
    {
        // A 1s request interval is more than sufficient for status checking.
        boost::this_thread::sleep_for(boost::chrono::seconds(1));

        // Send the command to the network monitor server service.
        boost::asio::write(socket, command, error);
        if (error)
        {
            // Error sending to socket
            cerr << error.message() << endl;
            retry++;
            continue;
        }

        // Clear the response buffer, then read the network monitor status.
        response.assign(0);
        /* size_t bytes_read = */ socket.read_some(boost::asio::buffer(response), error);
        if (error)
        {
            if (error == make_error_code(boost::asio::error::eof))
            {
                // Connection was dropped, re-connect to the service.
                error = ConnectToPeer(socket, endpoint_iterator);
                if (error && error == make_error_code(boost::system::errc::host_unreachable))
                {
                    TerminateProgram();
                }
                continue;
            }
            else
            {
                cerr << error.message() << endl;
                retry++;
                continue;
            }
        }

        // Examine the response message.
        if (strncmp(response.data(), "normal", 6) != 0)
        {
            retry++;

            // If we received the lockdown response, then terminate.
            if (strncmp(response.data(), "lockdown", 8) == 0)
            {
                break;
            }

            // Not an expected response, potential error, retry to see if it was merely an aberration.
            continue;
        }

        // If we arrived here, the exchange was successful; reset the retry count.
        if (retry > 0)
        {
            retry = 0;
        }
    }

    // If retry count was incremented, then we have likely encountered an issue; shut things down.
    if (retry != 0)
    {
        TerminateProgram();
    }
}
When a streambuf is provided directly to an I/O operation as the buffer, then the I/O operation will manage the input sequence appropriately by either committing read data or consuming written data. Hence, in the following code, command is empty after the first iteration:
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
// `command`'s input sequence contains "status\n".

while (retry < maxTriesBeforeAbort)
{
    ...

    // Write all of `command`'s input sequence to the socket.
    boost::asio::write(socket, command, error);
    // `command.size()` is 0, as the write operation will consume the data.
    // Subsequent write operations with `command` will be no-ops.

    ...
}
One solution would be to use std::string as the buffer:
std::string command("status\n");
while (retry < maxTriesBeforeAbort)
{
...
boost::asio::write(socket, boost::asio::buffer(command), error);
...
}
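Alternatively, if you prefer to keep the streambuf, you could refill its input sequence before every write, since each write consumes it; a sketch of that variant:

boost::asio::streambuf command;
std::ostream command_stream(&command);
while (retry < maxTriesBeforeAbort)
{
    // ...
    // Refill the input sequence each iteration; the previous write consumed it.
    command_stream << "status\n";
    boost::asio::write(socket, command, error);
    // ...
}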
For more details on streambuf usage, consider reading this answer.

TCP IP Communication using C++

I'm trying to make a TCP/IP client using boost::asio in C++. I have created a function that creates a socket to connect to the server:
void TCP_IP_Communication::Create_Socket()
{
    ...
    _socket = new tcp::socket(_io); // create socket
    _io.run();

    boost::system::error_code error = boost::asio::error::host_not_found;
    try
    {
        while (error && endpoint_iterator != end) // if error, go to next endpoint
        {
            _socket->close();
            _socket->connect(*endpoint_iterator++, error);
        }

        // if error, throw error
        if(error)
            throw boost::system::system_error(error);

        // else the router is connected
        boost::asio::read_until(*_socket, buf, '\n');
    }
}
Then I use another function/thread to send a command and receive a response.
try
{
    if(p == 'i')
        _socket->send(boost::asio::buffer(" sspi l1\n\n")); // sending signal presence for input command
    else
        _socket->send(boost::asio::buffer(" sspo l1\n\n")); // sending signal presence for output command

    for(; ;) // loop reading all values from router
    {
        // wait for reply??
        boost::asio::read_until(*_socket, buf, '\n');
        std::istream is(&buf);
        std::getline(is, this->data);
        std::cout << std::endl << this->data;
The problem is, each time I connect to the server it gives a response
? "login"
But I need to suppress it when I send a command. Actually, it shouldn't show up, as I'm not connecting to the server each time I send a command, but it does. What did I do wrong here? I just can't figure it out.
The main function is like this:
int main()
{
    TCP_IP_Connection router;
    router.Create_Socket();

    boost::thread router_thread1, router_thread2;
    router_thread1 = boost::thread(&TCP_IP_Connection::get_status, &router, 'i');
    router_thread1.join();

    std::string reply = "\nend of main()";
    std::cout << reply;
    return 0;
}

Socket can't accept connections when non-blocking?

EDIT: Messed up my pseudo-coding of the accept call, it now reflects what I'm actually doing.
I've got two sockets going. I'm trying to use send/recv between the two. When the listening socket is blocking, it can see the connection and receive it. When it's nonblocking, I put a busy wait in (just to debug this) and it times out, always with the error EWOULDBLOCK. Why would the listening socket not be able to see a connection that it could see when blocking?
The code is mostly separated in functions, but here's some pseudo-code of what I'm doing.
int listener = -2;
int connector = -2;
int acceptedSocket = -2;

getaddrinfo(port 27015, AI_PASSIVE) results loop for listener socket
{
    if (listener socket() == 0)
    {
        if (listener bind() == 0)
            if (listener listen() == 0)
                break;
        listener close(); // if unsuccessful
    }
}

SetBlocking(listener, false);

getaddrinfo("localhost", port 27015) results loop for connector socket
{
    if (connector socket() == 0)
    {
        if (connector connect() == 0)
            break; // if connect successful
        connector close(); // if unsuccessful
    }
}

loop for 1 second
{
    acceptedSocket = listener accept();
    if (acceptedSocket > 0)
        break; // if successful
}
This just outputs a huge list of EWOULDBLOCK errno values before ultimately ending the timeout loop. If I output the file descriptor for the accepted socket in each loop iteration, it is never assigned a valid file descriptor.
The code for SetBlocking is as follows:
int SetBlocking(int sockfd, bool blocking)
{
    int nonblock = !blocking;
    return ioctl(sockfd,
                 FIONBIO,
                 reinterpret_cast<int>(&nonblock));
}
If I use a blocking socket, either by calling SetBlocking(listener, true) or removing the SetBlocking() call altogether, the connection works no problem.
Also, note that this connection with the same implementation works in Windows, Linux, and Solaris.
Because of the tight loop you are not letting the OS complete your request. That's the difference between VxWorks and others - you basically preempt your kernel.
Use select(2) or poll(2) to wait for the connection instead.
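For instance, a rough sketch of waiting with select() for the listening socket to become readable before calling accept() (single socket, minimal error handling; the helper name acceptWithTimeout is made up for illustration):

#include <sys/select.h>
#include <sys/socket.h>

// Wait up to `seconds` for an incoming connection on a non-blocking
// listening socket, then accept it. Returns the accepted fd or -1.
int acceptWithTimeout(int listener, int seconds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listener, &readfds);

    timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;

    // select() blocks until the listener is readable (a connection is
    // pending) or the timeout expires, instead of spinning on EWOULDBLOCK.
    int ready = select(listener + 1, &readfds, nullptr, nullptr, &tv);
    if (ready <= 0)
        return -1; // timeout or error

    return accept(listener, nullptr, nullptr);
}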