The server I've written in C++ works like a proxy. Main function:
try
{
    Connector c(ip);        // establishes a persistent connection to server B
    Listener1 l1(port);     // listens for incoming connections to server A (our proxy) and pushes them into a queue
    Listener2 l2(&c);       // listens for responses (also push messages) from server B and pushes them into a queue
    Proxy p(&c, &l1, &l2);  // pulls clients from the queue and forwards requests to server B; pulls everything from listener2's queue and returns it as the response
    KeepAlive k(&l1, &p);   // pushes an empty client into listener1's queue so the proxy sends a keepalive to server B; the response is discarded
    l1.start();
    p.start();
    l2.start();
    k.start();
    l1.join();
    p.join();
    l2.join();
    k.join();
}
catch (std::string e)
{
    std::cerr << "Error: " << e << std::endl;
    return -1;
}
For now I have the following problems/doubts:
**1.** I throw exceptions from constructors; is that good practice? I throw an exception when it's not possible to establish the connection, in which case the object shouldn't be created, I guess.
**2.** There is a problem with shutting the application down safely and cleaning up when a connection timeout occurs, or server B closes the connection, and so on. listener1 and listener2 use blocking functions (the accept() system call and BIO_read from the OpenSSL library), so it's not possible to just set a loop condition from another thread. Another problem is that all the modules are connected and share resources guarded by mutexes. My current code just calls exit() to terminate the whole application.
I know this is not a perfect solution; I'd appreciate any advice and tips.
Thanks in advance
Constructors should throw exceptions if they fail. C++ is designed to handle that well. Base classes and members are cleaned up if and only if they're already constructed.
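As a minimal sketch of the idiom (the Connector name is taken from the question; connect_to is a hypothetical placeholder), and noting that it's more conventional to throw a type derived from std::exception rather than a std::string as the main() above catches:

```cpp
#include <stdexcept>
#include <string>

class Connector {
public:
    explicit Connector(const std::string& ip)
    {
        if (!connect_to(ip))  // hypothetical connect helper
            throw std::runtime_error("cannot connect to " + ip);
        // If we get here, the object is fully constructed and usable.
    }

private:
    bool connect_to(const std::string& /*ip*/) { return false; } // placeholder
};
```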
Blocking functions from other libraries are always a problem. Windows and POSIX handle it well: WSAWaitForMultipleEvents and select() let you add an extra handle or descriptor, which you can use to unblock the wait.
In your accept() call, you might fake this by creating a connection from the main thread, via localhost. Detecting this "unusual" connection would be the signal to stop accepting further connections.
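A minimal POSIX sketch of that wake-up trick (the function name and stop-flag convention are illustrative, not from the question):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Connect to our own listener via loopback so that the blocked accept()
// in the listener thread returns; the listener then checks a stop flag
// and recognizes the loopback peer as the shutdown signal.
void wake_accept_loop(unsigned short listen_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(listen_port);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // 127.0.0.1
    connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(fd); // by now the accept() has returned in the listener thread
}
```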
As for the OpenSSL read, I'd just close the socket from the main thread, thread safety be damned. I would make sure to do this quite late in the shutdown, and I wouldn't expect the library to be usable at all after that point.
I'd like advice on how to move a boost::beast-based SSL websocket connection between two different servers in a way that minimizes the reconnection time.
My websocket client object is held as a member variable of type std::optional<websocketMgr> mgr; and is defined as follows:
class websocketMgr {
    ...
private:
    boost::beast::websocket::stream<
        boost::beast::ssl_stream<boost::beast::tcp_stream>> ws_;
};
During steady state, the websocket may issue async_write calls and is blocked in a read:
ws_.async_read(buffer, yield);
From time to time, I'd like to move the connection to a different server peer. So I first need to trigger an exception in the async_read the current socket thread is waiting on; this eventually reaches the socket destructor and closes the current connection before the new one is set up.
void websocketMgr::closeWebsocket() {
    ws_.close(boost::beast::websocket::close_code::normal);
    ...
    mgr.emplace(...); // create the new websocket once the old one is fully removed
}
However, in my experience the ssl_stream (the underlying object) can take a very long time to complete the close, since it may run into a timeout if the server side is currently using the connection and is not cooperating with the shutdown process.
Perhaps someone can suggest a way to ensure that the old connection is closed gracefully in the background while the new connection is created as fast as possible, to minimize the gap in connectivity.
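For illustration, one direction might be to hand the old stream off to a detached "closer" with a short timeout, then reconnect immediately (a sketch only; the ws() accessor and a movable websocketMgr are assumptions, not from the question):

```cpp
// Sketch: detach the old connection and let it close in the background,
// so the reconnect does not wait for a slow SSL shutdown handshake.
void replaceConnection() {
    // Move the old manager out; its pending async_read will complete with
    // an error (e.g. websocket::error::closed) and unwind.
    auto old = std::make_shared<websocketMgr>(std::move(*mgr));

    // Cap how long the background close may take.
    boost::beast::get_lowest_layer(old->ws())
        .expires_after(std::chrono::seconds(5));

    old->ws().async_close(boost::beast::websocket::close_code::normal,
        [old](boost::beast::error_code /*ec*/) {
            // 'old' is kept alive until the close completes or times out,
            // then destroyed here, off the reconnect path.
        });

    mgr.emplace(...); // connect to the new server peer right away
}
```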
Thanks!
I am writing a simple synchronous asio server.
The workflow is as follows: in an endless loop, accept connections and create a thread for each connection. I know this is not optimal, but async is too hard for me.
Here's my ugly code:
std::vector<asio::io_service*> ioVec;
std::vector<std::thread*> thVec;
std::vector<CWorker> workerVec;
std::vector<tcp::acceptor*> accVec;

while (true) {
    ioVec.emplace_back(new asio::io_service());
    accVec.emplace_back(new tcp::acceptor(*ioVec.back(), tcp::endpoint(tcp::v4(), 3228)));
    tcp::socket* socket = new tcp::socket(*ioVec.back());
    accVec.back()->accept(*socket);
    workerVec.push_back(CWorker());
    thVec.emplace_back(new std::thread(&CWorker::run, &workerVec.back(), socket));
}
The problem: the first connection is accepted correctly, a thread is created, and everything is good; a breakpoint on the "accept()" line is triggered as expected. But when I make a second connection (whether or not the first has disconnected), telnet connects, yet the breakpoint on the line after "accept" is never triggered and the connection does not respond to anything.
All this vector stuff is from my attempts to debug by creating a separate acceptor and io_service for each connection; it didn't help. Could anyone point out where the error is?
P.S. Visual Studio 2013
The general pattern for an asio-based listener is:
// This only happens once!
create an io_service
create an acceptor and a socket into which a new connection will be accepted
call the acceptor's async_accept, passing
    the accept socket and
    a handler (function object) [see below]
start new threads if desired (you can use the main thread if it has nothing else to do)

Each thread should:
call io_service->run() [or one of the variations: run_one, poll, etc.]

Unless the main thread called io_service->run(), it ends up here "immediately". It should do something to pass the time (like reading from the console); if it has nothing else to do, it probably should have called run() to make itself available in asio's thread pool.

In the handler function:
do something with the socket that is now connected
create a new socket for the next accept
call the acceptor's async_accept again, passing
    the new accept socket and
    the same handler
Notice in particular that each accept call accepts only one connection, and that you should not have more than one accept at a time listening on the same port; that is why you call async_accept again in the handler from the previous call.
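A minimal sketch of this pattern (one thread per client, as in the question; port 3228 is taken from the question's code):

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <thread>

using boost::asio::ip::tcp;

void handle_connection(std::shared_ptr<tcp::socket> socket)
{
    // ... talk to the connected client here ...
}

void start_accept(boost::asio::io_service& io, tcp::acceptor& acceptor)
{
    // Create the socket that will receive the NEXT connection.
    auto socket = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*socket,
        [&io, &acceptor, socket](const boost::system::error_code& ec) {
            if (!ec)
                std::thread(handle_connection, socket).detach(); // thread per client
            start_accept(io, acceptor); // re-arm the single acceptor
        });
}

int main()
{
    boost::asio::io_service io;                                  // created once
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 3228));  // created once
    start_accept(io, acceptor);
    io.run(); // the main thread services asio events
}
```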
Boost ASIO's documentation has some very good tutorial examples of this pattern.
In a larger server app I have one thread running a basic OpenSSL server using BIO in blocking mode, because that seemed the simplest way. My code accepts a single type of request from a phone (Android or iOS; I'm not writing that code) and returns a hex string wrapped in basic HTML, describing part of my server state. I've gone with SSL and a pseudo-HTTPS server because that makes things easier for the phone developer. If there's anything in the request that the server doesn't understand, I return a 404. This all works.
The problem: when my server shuts down, this thread doesn't exit because of the blocking BIO_do_accept call.
I have tried BIO_get_fd() and setsockopt() to put a timeout on the underlying socket, but it still blocks. Somewhat worryingly, SSL_state() stays at "before/accept initialization", but looping on that obviously won't work.
I assume other people have server code like this, and those servers can shut down gracefully. How do they do that? Is there some way for another thread to break that block and get the accept call to return with an error? Or do I have to drop the idea of blocking calls and grind through the apparently awful non-blocking version?
> When my server shuts down this thread doesn't exit because of the blocking BIO_do_accept call.
To stop the blocking, close the associated socket. It will return immediately.
Perform the shutdown from your signal handler.
Don't do anything else in the signal handler with respect to OpenSSL because it is not async-signal safe. Let the main thread cleanup once your worker thread has returned. See, for example, libcrypto Thread Safety.
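A sketch of that shutdown path (the g_accept_fd bookkeeping and names are illustrative, not from the question): the worker thread publishes the listening fd after creating the accept BIO, and the signal handler only closes that fd.

```cpp
#include <openssl/bio.h>
#include <unistd.h>
#include <atomic>
#include <csignal>

static std::atomic<int> g_accept_fd{-1};    // published by the worker thread
static volatile sig_atomic_t g_stop = 0;

extern "C" void on_shutdown_signal(int)
{
    g_stop = 1;
    int fd = g_accept_fd.exchange(-1);      // assumes a lock-free atomic<int>
    if (fd != -1)
        close(fd);  // async-signal-safe; the blocked BIO_do_accept returns
}

// Worker thread, after setting up the accept BIO:
//     int fd = -1;
//     BIO_get_fd(accept_bio, &fd);
//     g_accept_fd.store(fd);
// When BIO_do_accept() then fails, check g_stop, make no further OpenSSL
// calls, and return so the main thread can do the cleanup.
```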
I am using the Boost library to create a server application. Only one client is allowed at a time, so once async_accept(...) has accepted a connection, the acceptor is closed.
The only job of my server is to send data periodically to the client (if sending is enabled on the server; otherwise it "just sits there" until sending gets enabled). For that I have a Boost message queue: when a message arrives, send() is called on the socket.
My problem is that I cannot tell whether the client is still listening. Normally you would not care: on the next transmission, the send would simply yield an error.
But in my case the acceptor is not open while a socket is open. If the socket gets into the CLOSE_WAIT state, I have to close it and reopen the acceptor so that the client can connect again.
Waiting until the next send is not an option either, since sending may be disabled, and then my server would be stuck.
Question:
How can I determine if a boost::asio::ip::tcp::socket is in a CLOSE_WAIT state?
Here is the code to do what Dmitry Poroh suggests:
typedef asio::detail::socket_option::integer<ASIO_OS_DEF(SOL_SOCKET), SO_ERROR> so_error;

so_error tmp;
your_socket.get_option(tmp);
int value = tmp.value();
// do something with value.
You can try to use ip::tcp::socket::get_option and read the error state with level SOL_SOCKET and option name SO_ERROR. I'm surprised that I have not found a ready Boost implementation for it. So you can try to meet the GettableSocketOption requirements and use ip::tcp::socket::get_option to fetch the socket error state.
Hey, I'm using WSAEventSelect for event notifications on sockets. So far everything is cool and working like a charm, but there is one problem.
The client is a .NET application and the server is written in Winsock C++. In the .NET application I'm using the System.Net.Sockets.Socket class for TCP/IP. When I call the Socket.Shutdown() and Socket.Close() methods, I receive the FD_CLOSE event in the server, which I'm pretty sure is fine. The problem occurs when I check the iErrorCode of the WSANETWORKEVENTS structure I passed to WSAEnumNetworkEvents. I check it like this:
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
    {
        // it comes here
        // which means there is an error
        // and the ERROR I got is
        // WSAECONNABORTED
        printf("FD_CLOSE failed with error %d\n",
               listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT]);
        break;
    }
    closesocket(socketArray[Index]);
}
But it fails with the WSAECONNABORTED error. Why is that so?
EDIT: By the way, I'm running both the client and the server on the same computer; could that be the reason? And I receive the FD_CLOSE event when I do this:
server.Shutdown(SocketShutdown.Both); // in .NET C#, client code
I'm guessing you're calling Shutdown() and then Close() immediately afterward. That will give the symptom you're seeing, because this is "slamming the connection shut". Shutdown() does initiate a graceful disconnect (TCP FIN), but immediately following it with Close() aborts that, sending a TCP RST packet to the remote peer. Your Shutdown(SocketShutdown.Both) call slams the connection shut, too, by the way.
The correct pattern is:

1. Call Shutdown() with the direction parameter set to "write", meaning we won't be sending any more data to the remote peer. This causes the stack to send the TCP FIN packet.

2. Go back to waiting for Winsock events. When the remote peer is also done writing, it will call Shutdown("write"), too, causing its stack to send your machine a TCP FIN packet and your application to get an FD_CLOSE event. While waiting, your code should be prepared to continue reading from the socket, because the remote peer might still be sending data.
(Please excuse the pseudo-C# above. I don't speak .NET, only C++.)
Both peers are expected to use this same shutdown pattern: each tells the other when it's done writing, and then waits to receive notification that the remote peer is done writing before it closes its socket.
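In Winsock C++ terms, a minimal sketch of the closing side (assuming a connected SOCKET s; error handling omitted for brevity):

```cpp
#include <winsock2.h>

// Graceful close from the side that finishes writing first.
void graceful_close(SOCKET s)
{
    shutdown(s, SD_SEND);   // send our FIN: "I'm done talking"

    char buf[4096];
    int n;
    while ((n = recv(s, buf, sizeof(buf), 0)) > 0)
    {
        // The peer may still be sending; keep reading (or processing data)
        // until recv returns 0, i.e. the peer's FIN arrives. With
        // WSAEventSelect this is when you'd see FD_CLOSE with no error.
    }

    closesocket(s);         // both directions are done; now close
}
```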
The important thing to realize is that TCP is a bidirectional protocol: each side can send and receive independently of the other. Closing the socket to reading is not a nice thing to do. It's like having a conversation with another person but only talking and being unwilling to listen. The graceful shutdown protocol says, "I'm done talking now. I'm going to wait until you stop talking before I walk away."