I am building a server that uses websockets.
Currently every connected client uses two goroutines. One for reading and one for writing.
The writing goroutine basically listens to a channel for messages it should send and then tries to deliver them.
type User struct {
    send chan []byte
    ...
}

func (u *User) Send(msg []byte) {
    u.send <- msg
}
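For context, a minimal sketch of what the writing goroutine described above might look like (this assumes a gorilla/websocket-style connection with a WriteMessage method; the actual connection type is not shown in the question):

// writePump drains u.send and writes each queued message to the websocket connection.
func (u *User) writePump(conn *websocket.Conn) {
    for msg := range u.send {
        if err := conn.WriteMessage(websocket.BinaryMessage, msg); err != nil {
            return // connection is broken or closed; stop the writing goroutine
        }
    }
}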
The problem is that a read from client A may cause a write to client B.
Suppose, however, that the connection to B has some problems (e.g. it is very slow) and its send channel is already full. The current behaviour is that trying to add a message to the channel blocks until something is removed from the channel.
This means that A now waits until B's buffer is no longer full.
I want to solve it somewhat like this:
// Pseudocode - this send-with-error syntax does not exist, it just illustrates what I want:
func (u *User) Send(msg []byte) error {
    u.send, err <- msg // hypothetical non-blocking send
    if err != nil {
        // The channel's buffer is full.
        // Writing is currently not possible.
        // Needs appropriate error handling.
        return err
    }
    return nil
}
Basically, instead of blocking, I want error handling in case the buffer is full.
What is the best way to achieve that?
As ThunderCat pointed out in his comment, the solution is a select with a default case:
func (u *User) Send(msg []byte) {
    select {
    case u.send <- msg:
    default:
        // Error handling here
    }
}
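To match the error-returning signature the question asked for, the same non-blocking send can be wrapped like this (a minimal sketch; ErrSendBufferFull is a name introduced here for illustration, and the standard errors package is assumed to be imported):

var ErrSendBufferFull = errors.New("send buffer full")

func (u *User) Send(msg []byte) error {
    select {
    case u.send <- msg:
        return nil
    default:
        // The buffer is full; report it to the caller instead of blocking.
        return ErrSendBufferFull
    }
}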
I'm a bit confused by the expected usage pattern of APIs such as SSL_connect(), SSL_write(), etc. I've read up on some other posts on SO and elsewhere, and those that I found are all centered around blocking or non-blocking sockets (i.e. where the BIO is given a socket to use for underlying IO); in such configurations it is fairly clear how to handle the SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE return errors.
However, I'm puzzled as to what the proper handling would be when the BIO is set up without an underlying IO socket, and instead all IO is handled via memory buffers. (The reason for such a setup is that the encrypted data stream is not immediately sent over a vanilla socket, but rather may be enveloped in other protocols or delivery mechanisms, and cannot be written to a socket directly.) E.g. the BIO is set up as
auto readBio = BIO_new(BIO_s_mem());
auto writeBio = BIO_new(BIO_s_mem());
auto ssl = SSL_new(...);
SSL_set_bio(ssl, readBio, writeBio);
My assumption (which appears to be incorrect) was that after making a call to, say, SSL_connect(), it would tell me when it's time to pick up its output from the write buffer using a BIO_read() call and deliver that buffer (by whatever custom underlying transport means) to the peer, and likewise when to feed it data from the peer. In other words, something akin to:
while (true) {
    auto ret = SSL_connect(ssl); // or SSL_read(), SSL_write(), SSL_shutdown() in other contexts...
    if (ret <= 0) {
        auto err = SSL_get_error(ssl, ret);
        switch (err) {
        case SSL_ERROR_WANT_READ: {
            auto buf = magicallyReadDataFromPeer();
            BIO_write(readBio, buf, ...);
            continue;
        }
        case SSL_ERROR_WANT_WRITE: {
            Buffer buf;
            BIO_read(writeBio, buf, ...);
            magicallySendDataToPeer();
            continue;
        }
        }
    } else break;
}
But I'm noticing that the first call to SSL_connect() always results in SSL_ERROR_WANT_READ, with nothing sent to the peer to actually initiate the TLS handshake, and so it blocks indefinitely.
If after calling SSL_connect() I do flush the buffer by doing BIO_read() and sending it out, then things seem to proceed. The same seems to hold for SSL_write() calls. But if I always flush the buffer after each call and then also react to SSL_ERROR_WANT_WRITE, I'd be flushing the buffer twice (with the second flush probably being a no-op), which seems nonsensical. It also seems strange that I should simply ignore SSL_ERROR_WANT_WRITE from every SSL_connect/accept/write/read/shutdown call just because I'm always flushing after each call anyway.
And so I'm puzzled about the proper and expected dance between SSL_connect/etc. and BIO_read/write calls, and how it ties in with the SSL_ERROR_WANT_* values, specifically when using memory buffers instead of a socket or file descriptor for the underlying IO.
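For reference, here is a sketch of the flush-after-every-call approach described above: drain the write BIO after each SSL_* call regardless of which SSL_ERROR_WANT_* value comes back, and feed the read BIO on SSL_ERROR_WANT_READ. The magically* helpers are the question's placeholders and their signatures are assumed here:

int ret = SSL_connect(ssl); // likewise for SSL_read()/SSL_write()/SSL_shutdown()

// Drain anything OpenSSL queued for the peer, even when the result is WANT_READ.
while (BIO_ctrl_pending(writeBio) > 0) {
    char out[4096];
    int n = BIO_read(writeBio, out, sizeof(out));
    if (n > 0)
        magicallySendDataToPeer(out, n); // placeholder transport, signature assumed
}

if (ret <= 0 && SSL_get_error(ssl, ret) == SSL_ERROR_WANT_READ) {
    // Feed bytes received from the peer into the read BIO, then retry the SSL_* call.
    auto buf = magicallyReadDataFromPeer(); // placeholder, assumed to expose data/len
    BIO_write(readBio, buf.data, buf.len);
}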
I was following a tutorial on YouTube on building a chat program using Winsock and C++. Unfortunately the tutorial never bothered to consider race conditions, and this causes many problems.
The tutorial had us open a new thread every time a new client connected to the chat server, which would handle receiving and processing data from that individual client.
void Server::ClientHandlerThread(int ID) // ID = the index in the SOCKET Connections array
{
    Packet PacketType;
    while (true)
    {
        if (!serverptr->GetPacketType(ID, PacketType)) // Get packet type
            break; // If there is an issue getting the packet type, exit this loop
        if (!serverptr->ProcessPacket(ID, PacketType)) // Process packet (packet type)
            break; // If there is an issue processing the packet, exit this loop
    }
    std::cout << "Lost connection to client ID: " << ID << std::endl;
}
When a client sends a message, the thread processes it and sends it on by first sending the packet type, then the size of the message/packet, and finally the message itself.
bool Server::SendString(int ID, std::string & _string)
{
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    int RetnCheck = send(Connections[ID], _string.c_str(), bufferlength, NULL); // Send string buffer
    if (RetnCheck == SOCKET_ERROR)
        return false;
    return true;
}
The issue arises when two threads (two separate clients) try to send a message to the same ID (the same third client) at the same time. One thread may send the int packet type, so the client is now prepared to receive an int, but then the second thread sends a string (because that thread assumes the client is waiting for its string). The client cannot process this correctly, which leaves the program unusable.
How would I solve this issue?
One solution I had:
Rather than allow each thread to execute server commands on their own, they would set an input value. The main server thread would loop through all the input values from each thread and then execute the commands one by one.
However, I am unsure this won't have problems of its own... If a client sends multiple messages within a single iteration of the server loop, only one of them will be sent (since the new message would overwrite the previous one). Of course there are ways around this, such as arrays of input or faster loops, but it still poses a problem.
Another issue that I thought of was that a client with a lower ID would always end up having their message sent first each loop. This isn't that big of a deal but if there was a situation, say, a trivia game, where two clients entered the correct answer in the same loop then the client with the lower ID would end up saying the answer "first" every time.
Thanks in advance.
If all I/O is being handled through a central server, a simple (but certainly not elegant) solution is to create a barrier around the I/O mechanisms to each client. In the simplest case this can just be a mutex. Associate that barrier with each client and anytime someone wants to send that client something (a complete message), lock the barrier. Unlock it when the complete message is handled. That way only one client can actually send something to another client at a time. In C++11, see std::mutex.
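A minimal sketch of that idea, assuming one mutex per entry of the Connections array (the names SendLocks, MAX_CLIENTS and SendChatMessage are introduced here for illustration, not taken from the question):

#include <mutex>

// One lock per connected client, indexed the same way as Connections[].
std::mutex SendLocks[MAX_CLIENTS];

bool Server::SendChatMessage(int ID, std::string& message)
{
    // Only one thread at a time may interleave packet type, length and payload
    // on this client's socket, so the receiver never sees mixed-up packets.
    std::lock_guard<std::mutex> guard(SendLocks[ID]);
    return SendString(ID, message); // SendString from the question, unchanged
}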
I asked this in a previous question, but some people felt that my original question was not detailed enough ("Why would you ever want a timed condition wait??") so here is a more specific one.
I have a goroutine running, call it server. It is already started, will execute for some amount of time, and do its thing. Then, it will exit since it is done.
During its execution a large number of other goroutines start. Call them "client" threads if you like. They run step A and step B. Then they must wait for the "server" goroutine to finish, for up to a specified amount of time: they exit with a status if the server is not finished, and, say, run step C if it finishes.
(Please do not tell me how to restructure this workflow. It is hypothetical and a given. It cannot be changed.)
A normal, sane way to do this is to have the server thread signal a condition variable with a signalAll or Broadcast function, and have the other threads monitor the condition variable in a timed wait.
func (s *Server) Join(timeMillis int) error {
    s.mux.Lock()
    defer s.mux.Unlock()
    var err error
    for !s.isFinished {
        err = s.cond.Wait(timeMillis) // hypothetical timed wait - sync.Cond has no such method
        if err != nil {
            stepC()
        }
    }
    return err
}
Here the server would set isFinished to true and broadcast-signal the condition variable atomically with respect to the mutex. Except this is impossible, since Go does not support timed condition waits. (There is a Broadcast(), though.)
So, what is the "Go-centric" way to do this? I've reall all of the Go blogs and documentation, and this pattern or its equivalent, despite its obviousness, never comes up, nor any equivalent "reframing" of the basic problem - which is that IPC style channels are between one routine and one other routine. Yes, there is fan-in/fan-out, but remember these threads are constantly appearing and vanishing. This should be simple - and crucially /not leave thousands of "wait-state" goroutines hanging around waiting for the server to die when the other "branch" of the mux channel (the timer) has signalled/.
Note that some of the "clients" above might be started before the server goroutine has started (which is when the channel would usually be created), some might appear during its execution, and some might appear after... In all cases they should run stepC if and only if the server has run and exited after timeMillis milliseconds have passed since entering the Join() function...
In general the channels facility seems sorely lacking when there's more than one consumer. "First build a registry of channels to which listeners are mapped" and "there's this really nifty recursive data structure which sends itself over a channel it holds as field" are so.not.ok as replacements to the nice, reliable, friendly, obvious: wait(forSomeTime)
I think what you want can be done by selecting on a single shared channel, and then having the server close it when it's done.
Say we create a global "Exit channel", that's shared across all goroutines. It can be created before the "server" goroutine is created. The important part is that the server goroutine never sends anything down the channel, but simply closes it.
Now the client goroutines simply do this:
select {
case <-ch:
    fmt.Println("Channel closed, server is done!")
case <-time.After(time.Second):
    fmt.Println("Timed out. do recovery stuff")
}
and the server goroutine just does:
close(ch)
More complete example:
package main

import (
    "fmt"
    "time"
)

func waiter(ch chan struct{}) {
    fmt.Println("Doing stuff")
    fmt.Println("Waiting...")
    select {
    case <-ch:
        fmt.Println("Channel closed")
    case <-time.After(time.Second):
        fmt.Println("Timed out. do recovery stuff")
    }
}

func main() {
    ch := make(chan struct{})
    go waiter(ch)
    go waiter(ch)
    time.Sleep(100 * time.Millisecond)
    fmt.Println("Closing channel")
    close(ch)
    time.Sleep(time.Second)
}
This can be abstracted as the following utility API:
type TimedCondition struct {
    ch chan struct{}
}

func NewTimedCondition() *TimedCondition {
    return &TimedCondition{
        ch: make(chan struct{}),
    }
}

func (c *TimedCondition) Broadcast() {
    close(c.ch)
}

func (c *TimedCondition) Wait(t time.Duration) error {
    select {
    // channel closed, meaning broadcast was called
    case <-c.ch:
        return nil
    case <-time.After(t):
        return errors.New("Time out")
    }
}
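For completeness, a small usage sketch of this API in the question's scenario (doServerWork and the 500ms timeout are stand-ins; the errors and time packages are assumed to be imported):

cond := NewTimedCondition()

// Server goroutine: do the work, then wake every current and future waiter.
go func() {
    doServerWork() // stand-in for the server's actual work
    cond.Broadcast()
}()

// Each client goroutine, after its steps A and B:
if err := cond.Wait(500 * time.Millisecond); err != nil {
    // Timed out waiting for the server; handle it as the workflow requires (e.g. stepC).
}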
I just started working with the Boost ASIO library, version 1.52.0. I am using TCP/SSL encryption with async sockets. From other questions asked here about ASIO, it seems that ASIO does not support receiving a variable length message and then passing the data for that message to a handler.
I'm guessing that ASIO puts the data into a cyclical buffer and loses all track of each separate message. If I have missed something and ASIO does provide a way to pass individual messages, then please advise as to how.
My question is: assuming I can't somehow obtain just the bytes associated with an individual message, can I use transfer_exactly in async_read to obtain just the first 4 bytes, in which our protocol always places the length of the message, and then call either read or async_read (if read won't work with async sockets) to read in the rest of the message? Will this work? Are there better ways to do it?
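For illustration, a sketch of the two-step read described above (header first, then body), assuming a stream member named socket_, a 4-byte length prefix in network byte order, and hypothetical header_/body_/HandleMessage members:

void Session::ReadHeader()
{
    // Read exactly the 4-byte length prefix, then exactly that many body bytes.
    boost::asio::async_read(socket_, boost::asio::buffer(header_, 4),
        boost::asio::transfer_exactly(4),
        [this](const boost::system::error_code& ec, std::size_t)
        {
            if (ec) return;
            std::uint32_t len = 0;
            std::memcpy(&len, header_, 4);
            len = ntohl(len); // assuming the peer sends the length in network byte order
            body_.resize(len);
            boost::asio::async_read(socket_, boost::asio::buffer(body_),
                boost::asio::transfer_exactly(len),
                [this](const boost::system::error_code& ec2, std::size_t)
                {
                    if (ec2) return;
                    HandleMessage(body_); // hypothetical handler for one complete message
                    ReadHeader();         // go back to waiting for the next header
                });
        });
}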
Typically I like to take the data I receive in an async_read and put it in a boost::circular_buffer and then let my message parser layer decide when a message is complete and pull the data out.
http://www.boost.org/doc/libs/1_52_0/libs/circular_buffer/doc/circular_buffer.html
Partial code snippets below
boost::circular_buffer<char> TCPSessionThread::m_CircularBuffer(BUFFSIZE);

void TCPSessionThread::handle_read(const boost::system::error_code& e, std::size_t bytes_transferred)
{
    // ignore aborts - they are a result of our actions like stopping
    if (e == boost::asio::error::operation_aborted)
        return;
    if (e == boost::asio::error::eof)
    {
        m_SerialPort.close();
        m_IoService.stop();
        return;
    }
    // if there is not room in the circular buffer to hold the new data then warn of overflow error
    if (m_CircularBuffer.reserve() < bytes_transferred)
    {
        ERROR_OCCURRED("Buffer Overflow");
        m_CircularBuffer.clear();
    }
    // now place the new data in the circular buffer (overwrite old data if needed)
    // note: if data copying is too expensive you could read directly into
    // the circular buffer with a little extra effort
    m_CircularBuffer.insert(m_CircularBuffer.end(), pData, pData + bytes_transferred);
    boost::shared_ptr<MessageParser> pParser = m_pParser.lock(); // lock the weak pointer
    if ((pParser) && (bytes_transferred))
        pParser->HandleInboundPacket(m_CircularBuffer); // takes a reference so that the parser can consume data from the circ buf
    // start the next read
    m_Socket.async_read_some(boost::asio::buffer(*m_pBuffer), boost::bind(&TCPSessionThread::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}
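As an illustration of the "parser decides when a message is complete" step, here is a sketch of what HandleInboundPacket might do for a protocol with a 4-byte length prefix (the parser internals are assumptions, not part of the original snippets):

void MessageParser::HandleInboundPacket(boost::circular_buffer<char>& buf)
{
    // Keep extracting messages while a complete one (4-byte length + body) is buffered.
    while (buf.size() >= 4)
    {
        std::uint32_t len = 0;
        std::copy(buf.begin(), buf.begin() + 4, reinterpret_cast<char*>(&len));
        len = ntohl(len); // assuming network byte order
        if (buf.size() < 4 + len)
            break; // message not complete yet; wait for more data
        std::string message(buf.begin() + 4, buf.begin() + 4 + len);
        ProcessMessage(message); // hypothetical downstream handler
        buf.erase(buf.begin(), buf.begin() + 4 + len); // consume the bytes we used
    }
}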
I want to write a server using a pool of worker threads and an IO completion port. The server should process and forward messages between multiple clients. The 'per client' data is held in a class ClientContext. Data between instances of this class is exchanged using the worker threads. I think this is a typical scenario.
However, I have two problems with those IO completion ports.
(1) The first problem is that the server basically receives data from clients, but I never know whether a complete message was received. In fact WSAGetLastError() always reports that WSARecv() is still pending. I tried waiting for the event OVERLAPPED.hEvent with WaitForMultipleObjects(). However, it blocks forever, i.e. WSARecv() never completes in my program.
My goal is to be absolutely sure that the whole message has been received before further processing starts. My message has a 'message length' field in its header, but I don't really see how to use it with the IOCP function parameters.
(2) If WSARecv() is commented out in the code snippet below, the program still receives data. What does that mean? Does it mean that I don't need to call WSARecv() at all? I am not able to get a deterministic behaviour with those IO completion ports.
Thanks for your help!
while (WaitForSingleObject(module_com->m_shutdown_event, 0) != WAIT_OBJECT_0)
{
    dequeue_result = GetQueuedCompletionStatus(module_com->m_h_io_completion_port,
                                               &transfered_bytes,
                                               (LPDWORD)&lp_completion_key,
                                               &p_ol,
                                               INFINITE);
    if (lp_completion_key == NULL)
    {
        // Shutting down
        break;
    }
    // Get client context
    current_context = (ClientContext *)lp_completion_key;
    // IOCP error
    if (dequeue_result == FALSE)
    {
        // ... do some error handling ...
    }
    else
    {
        // 'per client' data
        thread_state = current_context->GetState();
        wsa_recv_buf = current_context->GetWSABUFPtr();
        // 'per call' data
        this_overlapped = current_context->GetOVERLAPPEDPtr();
    }
    while (thread_state != STATE_DONE)
    {
        switch (thread_state)
        {
        case STATE_INIT:
            // Check if completion packet has been posted by internal function or by WSARecv(), WSASend()
            if (transfered_bytes > 0)
            {
                dwFlags = 0;
                transf_now = 0;
                transf_result = WSARecv(current_context->GetSocket(),
                                        wsa_recv_buf,
                                        1,
                                        &transf_now,
                                        &dwFlags,
                                        this_overlapped,
                                        NULL);
                if (SOCKET_ERROR == transf_result && WSAGetLastError() != WSA_IO_PENDING)
                {
                    // ... error handling ...
                    break;
                }
                // put received message into a message queue
            }
            else // (transfered_bytes == 0)
            {
                // Another context passed data to this context
                // and notified it via PostQueuedCompletionStatus().
            }
            break;
        }
    }
}
(1) The first problem is that the server basically receives data from clients but I never know if a complete message was received.
Your recv calls can return anywhere from 1 byte to the whole 'message'. You need to include logic that works out when you have enough data to determine the length of the complete 'message', and then works out when you actually have a complete 'message'. While you do NOT have enough data you can reissue a recv call using the same memory buffer, but with an updated WSABUF structure that points just past the data that you have already received. That way you can accumulate a full message in your buffer without needing to copy data after every recv call completes.
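A sketch of that accumulation step (the buffer/bytesReceived/bufferSize members of the context are assumptions, not the question's actual code; GetSocket() and GetOVERLAPPEDPtr() are from the question):

// Reissue the read so new bytes land right after what has already been received,
// accumulating a complete message in the same buffer without copying.
void PostNextRead(ClientContext* ctx)
{
    WSABUF wsaBuf;
    wsaBuf.buf = ctx->buffer + ctx->bytesReceived;     // append position
    wsaBuf.len = ctx->bufferSize - ctx->bytesReceived; // remaining space

    DWORD flags = 0;
    int rc = WSARecv(ctx->GetSocket(), &wsaBuf, 1, NULL, &flags,
                     ctx->GetOVERLAPPEDPtr(), NULL);
    if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
    {
        // ... error handling ...
    }
    // When the completion arrives, add bytes_transferred to ctx->bytesReceived and
    // check whether the length field in the header says the message is now complete.
}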
(2) If WSARecv() is commented out in the code snippet below, the program still receives data. What does that mean? Does it mean that I don't need to call WSARecv() at all?
I expect it just means you have a bug in your code...
Note that it's 'better' from a scalability point of view not to use the event in the overlapped structure and instead to associate the socket with the IOCP and allow the completions to be posted to a thread pool that deals with your completions.
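For reference, associating a newly accepted socket with the existing completion port looks roughly like this (clientSocket is a placeholder; m_h_io_completion_port and current_context are the names from the question's code):

// Route this socket's completions to the worker threads blocked in
// GetQueuedCompletionStatus(), using the ClientContext pointer as the completion key.
HANDLE result = CreateIoCompletionPort(reinterpret_cast<HANDLE>(clientSocket),
                                       module_com->m_h_io_completion_port,
                                       reinterpret_cast<ULONG_PTR>(current_context),
                                       0);
if (result == NULL)
{
    // ... error handling ...
}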
I have a free IOCP client/server framework available from here which may give you some hints; and a series of articles on CodeProject (first one is here: http://www.codeproject.com/KB/IP/jbsocketserver1.aspx) where I deal with the whole 'reading complete messages' problem (see "Chunking the byte stream").