boost::asio asynchronous operations and resources - C++

So I've made a socket class that uses boost::asio library to make asynchronous reads and writes. It works, but I have a few questions.
Here's a basic code example:
class Socket
{
public:
    void doRead()
    {
        m_sock->async_receive_from(boost::asio::buffer(m_recvBuffer), m_from,
            boost::bind(&Socket::handleRecv, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void handleRecv(const boost::system::error_code& e, std::size_t bytes)
    {
        if (e || bytes == 0)
        {
            handle_error();
            return;
        }
        // do something with the data read
        do_something(m_recvBuffer);
        doRead(); // read another packet
    }

protected:
    boost::shared_ptr<boost::asio::ip::udp::socket> m_sock; // the UDP socket used above
    boost::array<char, 1024> m_recvBuffer;
    boost::asio::ip::udp::endpoint m_from;
};
It seems that the program will read a packet, handle it, then prepare to read another. Simple.
But what if I set up a thread pool? Should the next call to doRead() be before or after handling the read data? It seems that if it is put before do_something(), the program can immediately begin reading another packet, and if it is put after, the thread is tied up doing whatever do_something() does, which could possibly take a while. If I put the doRead() before the handling, does that mean the data in m_recvBuffer might change while I'm handling it?
Also, if I'm using async_send_to(), should I copy the data to be sent into a temporary buffer, because the actual send might not happen until after the data has fallen out of scope? i.e.
void send()
{
    char data[] = {1, 2, 3, 4, 5};
    m_sock->async_send_to(boost::asio::buffer(&data[0], 5), someEndpoint, someHandler);
} // "data" gets deallocated, but the write might not have happened yet!
Additionally, when the socket is closed, handleRecv() will be called with an error indicating it was interrupted. If I do
Socket* mySocket = new Socket()...
...
mySocket->close();
delete mySocket;
could it cause an error, because there is a chance that mySocket will be deleted before handleRecv() gets called/finished?

Lots of questions here, I'll try to address them one at a time.
But what if I set up a thread pool?
The traditional way to use a thread pool with Boost.Asio is to invoke io_service::run() from multiple threads. Beware that this isn't a one-size-fits-all answer: there can be scalability or performance issues, but this approach is by far the easiest to implement. There are many similar questions on Stack Overflow with more information.
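A minimal sketch of that pattern, assuming a four-thread pool and the usual io_service::work idiom to keep run() from returning while nothing is queued:
#include <boost/asio.hpp>
#include <boost/thread.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::io_service::work work(io); // keeps run() from returning while idle

    boost::thread_group pool;
    for (std::size_t i = 0; i < 4; ++i)
        pool.create_thread([&io] { io.run(); }); // each thread becomes a worker

    // ... start asynchronous operations on sockets bound to `io` here ...

    io.stop();       // or destroy `work` and let outstanding handlers finish
    pool.join_all();
}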
Should the next call to doRead be before or after handling the read
data? It seems that if it is put before do_something(), the program
can immediately begin reading another packet, and if it is put after,
the thread is tied up doing whatever do_something does, which could
possibly take a while.
This really depends on what do_something() needs to do with m_recvBuffer. If you wish to invoke do_something() in parallel with doRead() using io_service::post() you will likely need to make a copy of m_recvBuffer.
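For example, a sketch of that hand-off. Copying into a std::vector, the m_io_service member, and do_something_copy (standing in for a do_something() overload that takes its own copy) are assumptions here, not part of your class:
void handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e || bytes == 0) { handle_error(); return; }

    // Copy the received bytes out before re-arming the read, so the next
    // async_receive_from can safely overwrite m_recvBuffer.
    std::vector<char> copy(m_recvBuffer.begin(), m_recvBuffer.begin() + bytes);

    doRead(); // start the next read immediately

    // Hand the copy to whichever pool thread is free.
    m_io_service.post(boost::bind(&Socket::do_something_copy, this, copy));
}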
If I put the doRead() before the handling, does that mean the data in m_recvBuffer might change while I'm handling it?
As I mentioned previously: yes, this can and will happen.
Also, if I'm using async_send_to(), should I copy the data to be sent
into a temporary buffer, because the actual send might not happen
until after the data has fallen out of scope?
As the documentation describes, it is up to the caller (you) to ensure the buffer remains in scope for the duration of the asynchronous operation. As you suspected, your current example invokes undefined behavior because data[] will go out of scope.
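One common way to guarantee that, sketched below, is to tie the buffer's lifetime to the completion handler with a shared_ptr. handleSend is a made-up handler name here:
void send()
{
    // The vector stays alive as long as the handler holds a shared_ptr to it,
    // so it cannot go out of scope before the send completes.
    boost::shared_ptr<std::vector<char> > data(new std::vector<char>(5));
    // ... fill *data ...

    m_sock->async_send_to(boost::asio::buffer(*data), someEndpoint,
        boost::bind(&Socket::handleSend, this, data,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void handleSend(boost::shared_ptr<std::vector<char> > /*keeps buffer alive*/,
                const boost::system::error_code& e, std::size_t /*bytes*/)
{
    // The buffer is released once the last copy of the shared_ptr goes away.
}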
Additionally, when the socket is closed, the handleRecv() will be called
with an error indicating it was interrupted.
If you wish to continue to use the socket, use cancel() to interrupt outstanding asynchronous operations. Otherwise, close() will work. The error passed to outstanding asynchronous operations in either scenario is boost::asio::error::operation_aborted.
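So a typical guard at the top of the handler looks something like this (a sketch):
void handleRecv(const boost::system::error_code& e, std::size_t bytes)
{
    if (e == boost::asio::error::operation_aborted)
        return; // cancel() or close() was called; do not re-arm the read

    // ... normal processing ...
}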

Related

asio underlying behavior in async_receive

I have worked with the asio library on a few projects and have always managed to get it to work, but I feel there are some things about it that I have not entirely or clearly understood so far.
I am wondering how async_receive works.
I googled around a bit and had a look at the implementation but didn't understand it quite well. This is the way that I often use async communication:
socket.async_receive(receive_buffer, receiveHandler);
where receiveHandler is the function that will be called upon the arrival of data on the socket.
I know that the async_receive call returns immediately. So here are my questions:
Does async_receive create a new thread each time it is called?
If not, does it mean that there is a thread responsible to waiting for data and when it arrives, it calls the handler function? When does this thread get created?
If I were to turn this call into a recursive call by using a lambda function like this:
void cyclicReceive() {
    // Imagine the whole thing is in a class, so "this" means something here and
    // "receiveHandler" is still a valid function that gets called.
    socket.async_receive(receive_buffer,
        [this](const asio::error_code& error_code, const std::size_t num_bytes)
        {
            receiveHandler(error_code, num_bytes);
            cyclicReceive();
        });
}
Is there any danger of stack overflow? Why or why not?
I tried to show a minimal example by removing unnecessary details, so the exact syntax might be a bit wrong.
Asio does not implicitly create any new threads. In general, it is based on a queue of commands: when you call io.run(), the framework takes commands from the queue and executes them until the queue is empty. All of the async_ operations in Asio push new commands onto that internal queue.
Therefore there is no risk of stack overflow. The worst possible (but not really probable) scenario is an out-of-memory exception (std::bad_alloc) when there is no space left for commands in the queue, which is very unlikely.
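A small sketch of what that queue model means in practice: handlers only run inside run(), one after another on the thread that called it, so the handler that schedules the next receive has already returned (and unwound its stack frame) before the next completion is executed.
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;

    // Both handlers are queued; neither runs until io.run() is called,
    // and no extra thread is created to run them.
    io.post([] { std::cout << "first handler\n"; });
    io.post([] { std::cout << "second handler\n"; });

    io.run(); // drains the queue on this thread, then returns
}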

Asio async_read blocking when called inside its handler

I'm testing a little protocol design of mine and having trouble getting a continuous async_read to work. My idea was to create a generic read handler that outputs the received data (testing) and then checks it to perform protocol-defined actions.
This means that I am calling a new async_read from within this generic handler, which for some reason blocks and my handler never returns, blocking further execution of my program.
The relevant code
void handle_read_server(const asio::error_code& error_code, size_t bytes_transferred, void* address)
{
    // [...]
    char HELO[4] = {'H','E','L','O'};
    if (*received.data() == *HELO)
    {
        cout << "[Protocol] got HELO";
        got_helo = true;
        short code_data;
        // This read is blocking my program from continuing its execution.
        asio::async_read(socket_server, asio::buffer(&code_data, 2), asio::transfer_all(),
            std::bind(handle_read_server, placeholders::_1, placeholders::_2, &code_data));
    }
}
What I'm asking
What is causing the function to block here? Is there anything I can do apart from having an async_read thread running all the time, passing any received values to a stream in the server?
The async_* call does, in fact, not block.
You have Undefined Behaviour, by passing the address of a local variable into the async operation/completion handler.
You have to ensure the buffer's lifetime extends till after the completion. The natural way to achieve that would be to make the buffer a member of the enclosing class.
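A sketch of that change; the class and its member names are made up here, only socket_server and code_data come from the question:
#include <asio.hpp>
#include <utility>

class ProtocolSession
{
public:
    explicit ProtocolSession(asio::ip::tcp::socket socket)
        : socket_server(std::move(socket)) {}

    void readCode()
    {
        // code_data is a member, so it is still valid when the handler runs.
        asio::async_read(socket_server, asio::buffer(&code_data, sizeof(code_data)),
            asio::transfer_all(),
            [this](const asio::error_code& ec, std::size_t bytes)
            {
                if (!ec)
                    handleCode(bytes);
            });
    }

private:
    void handleCode(std::size_t /*bytes*/)
    {
        // use code_data here
    }

    asio::ip::tcp::socket socket_server;
    short code_data = 0; // a member, not a stack variable
};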

Will async_receive_from write into the buffer if the io_service is busy handling a callback?

So, suppose I have the following callback for async_receive_from:
void recv_callback(const boost::system::error_code& ec, std::size_t len)
{
    socket.async_receive_from(boost::asio::buffer(buffer), endpoint, recv_callback);
    handle(buffer);
}
So, the first thing I do in the callback is request another receive, but since the io_service is busy handling the callback, I thought that maybe my buffer would not be overwritten before the callback finishes. Is that correct?
This depends.
It depends on the way the underlying I/O operations are actually implemented; I think some OSes might actually write directly into user-space memory.
I'd always hand off the actual buffer.
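i.e. something along these lines (a sketch; it assumes `buffer` is a boost::array/std::array the callback can see, and that handle() is happy to work on a private copy):
void recv_callback(const boost::system::error_code& ec, std::size_t len)
{
    if (ec)
        return;

    // Copy the bytes that belong to this datagram first...
    std::vector<char> datagram(buffer.begin(), buffer.begin() + len);

    // ...then re-arm the receive; the next datagram may now overwrite `buffer`.
    socket.async_receive_from(boost::asio::buffer(buffer), endpoint, recv_callback);

    handle(datagram); // works on the private copy
}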

Boost mutex usage for multipart processes

I have a C++ program with a socket communications class. Each socket has a large dedicated buffer for assembling an output message, so usage would be like:
class CSocketClass {
public:
    void SetMsgHeader(int n) { Mutex_.lock(); DoWhateverIsNeededToSetHeaderInBuffer(n); } // n is the message type
    void SetMsgField(double a) { DoWhateverIsNeededToSetDataInBuffer(a); }                // a is some arbitrary content
    void SendMsg() { DoWhateverIsNeededToSendBuffer(); Mutex_.unlock(); }                 // sends the bytes added since the header was set
private:
    char buffer[reallylarge];
    // MiscSocketApparatus...
    boost::mutex Mutex_;
};
Multiple threads could be trying to send messages, each consisting of three or more calls to set the header, set the content, and finally send the message on its way. To keep them from conflicting, I've tried to allow only a single writer at a time by using the mutex. The desired behavior would be for a second-to-arrive writer to be blocked until the first-to-arrive writer unlocked the mutex; then the blocked writer would be able to proceed.
This seems to work most of the time, but on rare occasions (not every day), deadlocks still seem to occur.
I'm much more familiar with simpler lock issues using scoped locks, but those concepts may not translate perfectly to this problem, where the lock needs to be persistent across a number of calls to the object owning the lock.
From reading the Boost synchronization tutorial, I think there are better ways to do this, but it's not clear what would be best.
Any recommendations would be greatly appreciated.
Give each thread its own buffer: have each thread build the complete message in its own buffer, then lock the mutex only to send the finished message.
Better still, have one thread to actually dispatch messages, and N threads to create them. Put a thread-safe queue in between, so a thread creates a message, puts it in the queue, then (if needed) goes back to creating another message. The message sender just constantly waits for a message in the queue, retrieves it, sends it, and repeats.
You probably also want a thread-safe collection of buffers, so when a message has been sent, the sending thread can put the buffer where a message-builder thread can use it again when needed.
As an aside: for the buffer I'd use a std::string or a std::vector instead of a raw array.
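A sketch of the queue half of that design; the types and names here are illustrative rather than taken from your code:
#include <boost/thread.hpp>
#include <deque>
#include <string>

// A very small thread-safe queue of fully assembled messages.
class MessageQueue
{
public:
    void push(const std::string& msg)
    {
        boost::lock_guard<boost::mutex> lock(mutex_);
        queue_.push_back(msg);
        cond_.notify_one();
    }

    std::string pop() // blocks until a message is available
    {
        boost::unique_lock<boost::mutex> lock(mutex_);
        while (queue_.empty())
            cond_.wait(lock);
        std::string msg = queue_.front();
        queue_.pop_front();
        return msg;
    }

private:
    std::deque<std::string> queue_;
    boost::mutex mutex_;
    boost::condition_variable cond_;
};
Builder threads assemble whole messages locally and push() them; a single sender thread pop()s and writes them to the socket, so no lock ever has to be held across the header/field/send sequence.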

About write buffer in general network programming

I'm writing a server using boost.asio. I have a read and a write buffer for each connection and use the asynchronous read/write functions (async_write_some / async_read_some).
With the read buffer and async_read_some there's no problem: just invoking async_read_some is okay, because the read buffer is only touched in the read handler (which usually means in the same thread).
But the write buffer needs to be accessed from several threads, so it needs to be locked for modification.
FIRST QUESTION!
Is there any way to avoid a LOCK for the write buffer?
I write my own packet into a stack buffer and copy it into the write buffer, then call async_write_some to send the packet. If I send two packets in series this way, is it okay to invoke async_write_some twice?
SECOND QUESTION!
What is the common way to do asynchronous writing in socket programming?
Thanks for reading.
Sorry, but you have two choices:
1. Serialise the writes, either with locks or, better, by starting a separate writer thread that reads requests from a queue; other threads can then stack up requests on the queue without too much contention (some mutexing would still be required).
2. Give each writing thread its own socket! This is actually the better solution if the program at the other end of the wire can support it.
Answer #1:
You are correct that locking is a viable approach, but there is a much simpler way to do all of this. Boost has a nice little construct in ASIO called a strand. Any callback that has been wrapped using the strand will be serialized, guaranteed, no matter which thread executes the callback. Basically, it handles any locking for you.
This means that you can have as many writers as you want, and if they are all wrapped by the same strand (so, share your single strand among all of your writers) they will execute serially. One thing to watch out for is to make sure that you aren't trying to use the same actual buffer in memory for doing all of the writes. For example, this is what to avoid:
char buffer_to_write[256]; // shared among threads
/* ... in thread 1 ... */
memcpy(buffer_to_write, packet_1, std::min(sizeof(packet_1), sizeof(buffer_to_write)));
my_socket.async_write_some(boost::asio::buffer(buffer_to_write, sizeof(buffer_to_write)), &my_callback);
/* ... in thread 2 ... */
memcpy(buffer_to_write, packet_2, std::min(sizeof(packet_2), sizeof(buffer_to_write)));
my_socket.async_write_some(boost::asio::buffer(buffer_to_write, sizeof(buffer_to_write)), &my_callback);
There, you're sharing your actual write buffer (buffer_to_write). If you did something like this instead, you'll be okay:
/* A utility class that you can use */
class PacketWriter
{
private:
    typedef std::vector<char> buffer_type;

    static void WriteIsComplete(boost::shared_ptr<buffer_type> op_buffer,
                                const boost::system::error_code& error,
                                std::size_t bytes_transferred)
    {
        // Handle your write completion here
    }

public:
    template<class IO>
    static bool WritePacket(const std::vector<char>& packet_data, IO& asio_object)
    {
        boost::shared_ptr<buffer_type> op_buffer(new buffer_type(packet_data));
        if (!op_buffer)
        {
            return (false);
        }

        asio_object.async_write_some(boost::asio::buffer(*op_buffer),
            boost::bind(&PacketWriter::WriteIsComplete, op_buffer,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
        return (true);
    }
};
/* ... in thread 1 ... */
PacketWriter::WritePacket(packet_1, my_socket);
/* ... in thread 2 ... */
PacketWriter::WritePacket(packet_2, my_socket);
Here, it would help if you passed your strand into WritePacket as well. You get the idea, though.
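For reference, a sketch of what the wrapping itself looks like; io, my_socket, data_to_write and my_callback are stand-ins here, and the strand would normally live alongside the socket it guards:
boost::asio::io_service io;
boost::asio::io_service::strand strand(io);

// Handlers wrapped by the same strand never run concurrently, even when
// io.run() is being called from several threads.
my_socket.async_write_some(boost::asio::buffer(data_to_write),
                           strand.wrap(&my_callback));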
Answer #2:
I think you are already taking a very good approach. One suggestion I would offer is to use async_write instead of async_write_some so that you are guaranteed the whole buffer is written before your callback gets called.
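Inside WritePacket, that substitution would look roughly like this (same op_buffer and WriteIsComplete names as the sketch above):
// async_write keeps issuing writes internally until the whole buffer has been
// sent (or an error occurs), and only then invokes the completion handler once.
boost::asio::async_write(asio_object, boost::asio::buffer(*op_buffer),
    boost::bind(&PacketWriter::WriteIsComplete, op_buffer,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));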
You could queue your modifications and perform them on the data in the write handler.
The network will most probably be the slowest part of the pipe (assuming your modifications are not computationally expensive), so you can perform the modifications while the socket layer is sending the previous data.
In case you are handling a large number of clients with frequent connects/disconnects, take a look at I/O completion ports or a similar mechanism.