A slightly convoluted C++ asio program
This is a question about writing consumer-producer software with asio.
In a C++ program using asio, I have the following architecture:
The main thread. It listens (asynchronously) for network events. When it gets an event, its completion handler writes the message to a sqlite3 database table of tasks needing processing and then goes on listening (calls the Receive() function again).
This same thread also listens (asynchronously) on an eventfd file descriptor, which seems a bit of a kludge. The eventfd receive function's completion handler looks up the next record to process in the sqlite3 db, calls io_service_worker.post() (see below) to process and delete it, repeats in case there are more, and when there are no more, re-calls Receive().
A worker thread. Processes the task posted to it, deletes it from the task db, and writes to the eventfd to let its partner know it's done.
There's a bit of extra synchronisation going on, but that's the gist of it.
...that's all rather embarrassing...
This has worked for us, but it is inflexible. As long as there was only one type of task to do, it was fine. I know the story of how this all came to be, but it's clear it is no longer fit for purpose: modifications are too hard and require too much thought.
The question
The primary constraints are at-least-once semantics and monitoring (queue lengths and latency). But we also need a simpler way to add new message types, without so many nested callbacks that the code is hard to reason about.
This seems like a rather simple producer-consumer problem, so treating it with a single-producer/single-consumer queue (which lets the worker wait and be signalled when to start up again) should be enough.
Write the message to sqlite3 (in case we don't live long enough to complete it), then signal on the queue that there's work to do. On the other end: get a signal on the queue, loop over the work to do, then wait on the queue again.
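Concretely, I'm imagining something like this minimal sketch, pairing boost::lockfree::spsc_queue with a condition variable for the wake-up (persist_to_sqlite() and process() are placeholders, not real functions of ours):

#include <boost/lockfree/spsc_queue.hpp>
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>

boost::lockfree::spsc_queue<uint64_t, boost::lockfree::capacity<1024>> tasks;
std::condition_variable cv;
std::mutex cv_mutex;                  // guards only the wake-up, not the queue
std::atomic<bool> stopping{false};

void producer(uint64_t task_id) {     // the network completion handler
    // persist_to_sqlite(task_id);    // durable copy first, in case we die
    tasks.push(task_id);
    cv.notify_one();                  // wake the worker
}

void worker() {
    while (!stopping) {
        uint64_t task_id;
        while (tasks.pop(task_id)) {
            // process(task_id);      // then delete the row from sqlite3
        }
        std::unique_lock<std::mutex> lk(cv_mutex);
        cv.wait_for(lk, std::chrono::milliseconds(100)); // timed wait avoids a missed notify
    }
}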
Reading the spsc_queue docs has me thinking I've missed something, that there's some part of asio I don't understand properly.
...or maybe it's me
It's also possible I'm too close to our code base and too used to this IPC system + consumer-producer paradigm that we have and I should step back further. I'm obviously open to being told that I've gone too far down this rabbit hole, that we should really just switch to something like RabbitMQ or another broker / relay that handles it all for us.
Related
I've got a ROUTER/DEALER setup where both ends need to be able to receive and send data asynchronously, as soon as it's available. The model is pretty much 0MQ's async C++ server: http://zguide.zeromq.org/cpp:asyncsrv
Both the client and the server workers poll, when there's data available they call a callback. While this happens, from another thread (!) I'm putting data in a std::deque. In each poll-forever thread, I check the deque (under lock), and if there are items there, I send them out to the specified DEALER id (the id is placed in the queue).
But I can't help thinking that this is not idiomatic 0MQ. The mutex is possibly a design problem. Plus, memory consumption can probably get quite high if enough time passes between polls (and data accumulates in the deque).
The only alternative I can think of is having another DEALER thread connect to an inproc each time I want to send out data, and just have it send it and exit. However, this implies a connect per item of data sent + construction and destruction of a socket, and it's probably not ideal.
Is there an idiomatic 0MQ way to do this, and if so, what is it?
I don't fully understand your design, but I do understand your concern about using locks.
In most cases you can redesign your code to remove the use of locks, using ZeroMQ PAIR sockets and inproc.
Do you really need a std::deque? If not, you could just use a ZeroMQ socket as your queue: it is just a queue that you can read from and write to from different threads.
If you really need the deque, then encapsulate it in its own thread (a class would be nice) and make its API (push, etc.) accessible via inproc sockets.
So, as I said, I may be on the wrong track, but in 99% of the cases I have come across, you can remove the locks completely with some ZMQ_PAIR/inproc sockets if you need signalling.
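For example, a minimal sketch of two threads signalling over a ZMQ_PAIR/inproc socket instead of sharing a mutex (libzmq C API; error handling omitted, and the endpoint name is made up):

#include <zmq.h>
#include <thread>
#include <cstdio>

int main() {
    void *ctx = zmq_ctx_new();

    void *a = zmq_socket(ctx, ZMQ_PAIR);
    zmq_bind(a, "inproc://signal");              // bind before the peer connects

    std::thread t([ctx] {
        void *b = zmq_socket(ctx, ZMQ_PAIR);
        zmq_connect(b, "inproc://signal");
        char buf[16];
        int n = zmq_recv(b, buf, sizeof buf, 0); // blocks until signalled
        std::printf("got %d-byte signal\n", n);
        zmq_close(b);
    });

    zmq_send(a, "go", 2, 0);                     // no mutex involved
    t.join();
    zmq_close(a);
    zmq_ctx_term(ctx);
}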
A 0MQ queue has a limited buffer size, and it can be controlled, so memory use is bounded: once the limit is reached, messages start being dropped. For that reason you may consider using the conflate option, which keeps only the most recent message in the queue.
In the case of a single server communicating with many threads within a single machine, I suggest a publish/subscribe model with the conflate option: you will receive the newest data as soon as you read the buffer, you won't have to worry about memory, and it removes the blocking-queue problem.
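For example, a rough sketch of a subscriber with conflate enabled (libzmq C API; the option must be set before connecting, and the endpoint name is illustrative):

#include <zmq.h>

void *make_latest_only_subscriber(void *ctx) {
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    int conflate = 1;
    // Keep only the most recent message; set this before connect.
    zmq_setsockopt(sub, ZMQ_CONFLATE, &conflate, sizeof conflate);
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   // subscribe to everything
    zmq_connect(sub, "inproc://updates");
    return sub;                                  // zmq_recv() now yields the newest data
}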
As for your implementation, you are quite right that it is not the best design, but it is hard to avoid. I suggest checking the question "Access std::deque from 3 threads"; while it addresses your problem, it may not be the best approach.
I am working with POSIX threads for a multi-threaded socket programming project. I have run into a situation where I need to detach a thread from the main program using setdetachstate(); however, later on I cancel the thread (I know that cancelling is generally bad practice, but I know what I'm doing (hopefully)). I need a method to check whether the thread is still alive, and after doing a bit of research I found that waitpid() might work for my purposes even though I have a TID instead of a PID. However, after trying it out, both with and without ptrace, it didn't work. Another method seen everywhere on the Internet is pthread_join(). While I agree that it is the optimal way to do this, my thread is detached, so it can't be joined.
As a side note, my goal is to find a way to wait for the function call pthread_cancel() to finish before executing any subsequent code, i.e.
pthread_t tid;
// ...
pthread_cancel(tid);
// wait until pthread with ID tid is cancelled
// more code here...
Originally, the reason I needed to check whether the detached pthread was alive was that I was planning on doing something like while (!pthread_dead(tid)); or something of that manner; however, if there is a solution that directly waits for the cancel to finish, that would be even better. Please try not to criticize my use of detached threads or pthread cancellation; I have contemplated many plans of action, and this seems to be required no matter how I go about it (unless I write a multi-process application, which I don't want to do). Unless I'm doing something absolutely abominable syntactically or structurally, I would appreciate it if you just answered my question.
Thank you!
P.S. I'm coding in C++.
Have you thought about using Actor model programming, or even better Communicating Sequential Processes?
These are really quite a good model for when you have a separate thread that needs to go off and do its own thing, and you need to be able to tell it something and get an answer back.
Your apparent need is to know that something asynchronous has completed (the termination of a separate thread). There's nothing wrong with having that thread send you a direct acknowledgement of its termination, rather than trying to determine whether or not it's still alive through slightly iffy means such as waitpid(). So say you chose ZeroMQ as your Actor model library; to "kill" that detached thread you'd send it a command down a ZeroMQ "socket". The recipient thread would receive that message, understand that it means "die", and do whatever cleanup it needs before terminating itself. Just before it terminates, it sends you back an acknowledgement on another "socket" that yes, it is dead (or at least about to be, with all necessary cleanup already done).
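For instance, a rough sketch of that handshake with a single ZeroMQ PAIR socket over inproc (PAIR is bidirectional, so one socket per end suffices; libzmq C API, error handling omitted, names made up):

#include <zmq.h>
#include <pthread.h>
#include <cstring>
#include <cstdio>

static void *g_ctx;

void *worker(void *) {
    void *s = zmq_socket(g_ctx, ZMQ_PAIR);
    zmq_connect(s, "inproc://ctrl");
    char cmd[16];
    for (;;) {
        int n = zmq_recv(s, cmd, sizeof cmd - 1, 0);  // block for a command
        if (n < 0) break;
        cmd[n] = '\0';
        if (std::strcmp(cmd, "die") == 0) break;
        // ... otherwise do normal work ...
    }
    // clean up here, then acknowledge just before terminating
    zmq_send(s, "dead", 4, 0);
    zmq_close(s);
    return nullptr;
}

int main() {
    g_ctx = zmq_ctx_new();
    void *s = zmq_socket(g_ctx, ZMQ_PAIR);
    zmq_bind(s, "inproc://ctrl");             // bind before the thread connects
    pthread_t tid;
    pthread_create(&tid, nullptr, worker, nullptr);
    zmq_send(s, "die", 3, 0);                 // the "kill" command
    char ack[16];
    zmq_recv(s, ack, sizeof ack, 0);          // wait for the "dead" ack
    std::puts("worker acknowledged termination");
    pthread_join(tid, nullptr);               // this sketch doesn't detach; with the
                                              // ack you needn't poll liveness anyway
    zmq_close(s);
    zmq_ctx_term(g_ctx);
}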
Actor model / CSP programming places an emphasis on having a loop responding to messages from one or more sources. Well, your own code snippet hints at a loop, waiting for the pthread_cancel() to take effect.
I've put "socket" in quotes as underneath a ZeroMQ socket can be a tcp socket, ipc, some in-process memory transfer, etc; it all behaves the same. In-proc is, naturally, quite quick.
The difference between Actor model and Communicating Sequential Processes is that in the Actor model, the sender gets no information that a message has been received, whilst in Communicating Sequential Processes a successful send = a completed read. Personally speaking, I prefer the latter: your code then has complete knowledge of where a message recipient has got to; a send/receive pair is an execution rendezvous. So when you send the "terminate" message, you know for sure that the recipient thread has received it and is now acting on it. When the recipient sends its "I'm dead" acknowledgement, it knows that the commanding thread has received that ack.
FYI, CSP is very useful in real time systems, not because it's faster but because your program can have much better knowledge as to whether it's kept up with the real time demand or not. Actor model lets you "hide" real time inadequacies as latency in communications links.
I am working on designing a websocket server which receives a message and saves it to an embedded database. For reading the messages I am using boost asio. To save the messages to the embedded database I see a few options in front of me:
Save the messages synchronously, on the same thread, as soon as I receive them.
Save the messages asynchronously on a separate thread.
I am pretty sure the second option is what I want. However, I am not sure how to pass messages from the socket thread to the IO thread. I see the following options:
Use one io_service per thread and use the post function to communicate between threads. Here I have to worry about lock contention. Should I?
Use Unix domain sockets to pass messages between threads. No lock contention as far as I understand. Here I can probably use the BOOST_ASIO_DISABLE_THREADS macro to get some performance boost.
Also, I believe it would help to have multiple IO threads which receive messages in round-robin fashion to save to the embedded database.
Which architecture would be the most performant? Are there any other alternatives from the ones I mentioned?
A few things to note:
The messages are exactly 8 bytes in length.
Cannot use an external database; the database must be embedded in the running process.
I am thinking about using RocksDB as the embedded database.
I don't think you want to use a unix socket, which is always going to require a system call and pass data through the kernel. That is generally more suitable as an inter-process mechanism than an inter-thread mechanism.
Unless your database API requires that all calls be made from the same thread (which I doubt) you don't have to use a separate boost::asio::io_service for it. I would instead create an io_service::strand on your existing io_service instance and use the strand::dispatch() member function (instead of io_service::post()) for any blocking database tasks. Using a strand in this manner guarantees that at most one thread may be blocked accessing the database, leaving all the other threads in your io_service instance available to service non-database tasks.
Why might this be better than using a separate io_service instance? One advantage is that having a single instance with one set of threads is slightly simpler to code and maintain. Another minor advantage is that using strand::dispatch() will execute in the current thread if it can (i.e. if no task is already running in the strand), which may avoid a context switch.
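A minimal sketch of what I mean, using the pre-Boost-1.66 io_service API (save_to_db() stands in for your blocking embedded-database call):

#include <boost/asio.hpp>
#include <cstdint>

boost::asio::io_service io;
boost::asio::io_service::strand db_strand(io);

// Called from any of the threads running io.run().
void on_message(std::uint64_t msg) {
    db_strand.dispatch([msg] {
        // save_to_db(msg);  // blocking write; the strand guarantees at most
        //                   // one thread is ever inside it at a time
    });
}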
For the ultimate optimization I would agree that using a specialized queue whose enqueue operation cannot make a system call could be fastest. But given that you have network I/O on the producer side and disk I/O on the consumer side, I don't see how the implementation of the queue is going to be your bottleneck.
After benchmarking/profiling, I found the Facebook Folly implementation of an MPMC queue to be the fastest, by at least a 50% margin. If I use the non-blocking write method, the socket thread has almost no overhead and the IO threads remain busy. The number of system calls is also much lower than with other queue implementations.
The SPSC queue with a condition variable in Boost is slower. I am not sure why; it might have something to do with the adaptive spin that the Folly queue uses.
Also, message passing (Unix domain datagram sockets in this case) turned out to be orders of magnitude slower, especially for larger messages. This might have something to do with the data being copied twice.
You probably only need one io_service -- you can create additional threads which will process events occurring within the io_service by providing boost::asio::io_service::run as the thread function. This should scale well for receiving 8-byte messages from clients over the network socket.
For storing the messages in the database, it depends on the database & interface. If it's multi-threaded, then you might as well just send each message to the DB from the thread that received it. Otherwise, I'd probably set up a boost::lockfree::queue where a single reader thread pulls items off and sends them to the database, and the io_service threads append new messages to the queue when they arrive.
Is that the most efficient approach? I dunno. It's definitely simple, and gives you a baseline that you can profile if it's not fast enough for your situation. But I would recommend against designing something more complicated at first: you don't know whether you'll need it at all, and unless you know a lot about your system, it's practically impossible to say whether a complicated approach would perform any better than the simple one.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <boost/lockfree/queue.hpp>

std::atomic<bool> Finished{false};
std::condition_variable cond_var;
std::mutex cond_mutex; // guards only the condition-variable wait, not the queue

void Consumer( boost::lockfree::queue<uint64_t> &message_queue ) {
    // Connect to database...
    while (!Finished) {
        message_queue.consume_all( add_to_database ); // add_to_database is a Functor that takes a message
        std::unique_lock<std::mutex> lock(cond_mutex);
        cond_var.wait_for( lock, std::chrono::milliseconds(100) ); // Timed wait avoids missing a signal; it's OK to consume_all() even if the queue is empty.
    }
}

void Producer( boost::lockfree::queue<uint64_t> &message_queue ) {
    while (!Finished) {
        uint64_t m = receive_from_network( ); // blocking read of one 8-byte message
        message_queue.push( m );
        cond_var.notify_all( );
    }
}
Assuming that the constraint of using C++11 is not too hard in your situation, I would try to use std::async to make an asynchronous call to the embedded DB.
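For instance, a rough sketch of that idea (write_to_db() is a placeholder for the blocking embedded-DB call; note that the returned future must be kept somewhere, because a discarded std::async future blocks in its destructor):

#include <future>
#include <cstdint>
#include <vector>

std::vector<std::future<void>> pending; // keep futures alive; ~future would otherwise block

void on_message(std::uint64_t msg) {
    // Launch the blocking DB write on another thread.
    pending.push_back(std::async(std::launch::async, [msg] {
        // write_to_db(msg);  // hypothetical embedded-DB call
    }));
}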
I'm working on my own FTP client in C++, but I'm stuck at the recv() function. The data I get with recv() can be incomplete because I'm using TCP, so I have to call recv() in a loop. The problem is that when I call recv() after everything that should be received has been received, the call blocks and my program is stuck.
I don't know how many bytes I'm going to receive, so I can't control the loop and stop it when it's done. I've found two not-very-elegant solutions so far:
The first is to use string::substr() (or TR1 regex) to find the expected expression and stop calling recv() before it blocks.
The second is to set up a timeval structure and control the socket timeout through setsockopt(). The problem there is the long response time, and I can still get incomplete, corrupted data.
The question is: is there any clean, elegant solution for this?
The obvious thing to do is to transmit the length of the message ahead of it (many protocols, including HTTP, do that to address this exact issue). That way, you know that when you have received X bytes, no more will come.
This will work fine 99.9% of the time and will fail catastrophically in the 0.1% of cases where the server lies to you, the server crashes unexpectedly, or someone stumbles over the network cable (or something similar happens). Sadly, the "connection" established by TCP is an illusion, and you don't have much of a way to detect when the connection dies. The other end can go down, and you will not notice anything unless you try to send and get an error (or until several hours later).
Therefore, you also need a backup strategy for when things don't go quite as well as expected. You might use select or poll to know when data is available, so you don't block forever on a message that will never come.
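For illustration, a rough sketch combining both ideas: read a 4-byte length prefix, then read exactly that many bytes, with poll() as the backup timeout (POSIX API; error handling trimmed, and recv_exact() is a made-up helper):

#include <arpa/inet.h>
#include <poll.h>
#include <sys/socket.h>
#include <cstdint>
#include <vector>

// Read exactly n bytes; false on error, timeout, or peer close.
bool recv_exact(int fd, void *buf, size_t n, int timeout_ms) {
    char *p = static_cast<char *>(buf);
    while (n > 0) {
        pollfd pfd{fd, POLLIN, 0};
        if (poll(&pfd, 1, timeout_ms) <= 0) return false; // timeout or error
        ssize_t got = recv(fd, p, n, 0);
        if (got <= 0) return false;                       // error or connection closed
        p += got;
        n -= static_cast<size_t>(got);
    }
    return true;
}

bool recv_message(int fd, std::vector<char> &msg) {
    uint32_t len_be;                                      // length prefix, network byte order
    if (!recv_exact(fd, &len_be, sizeof len_be, 5000)) return false;
    msg.resize(ntohl(len_be));
    return recv_exact(fd, msg.data(), msg.size(), 5000);
}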
Using threads to solve the block-at-end problem (as proposed in other answers) is not a very good option since blocking isn't the actual problem. The actual problem is that you don't know when you have reached the end of the transmission. Having a worker thread block at the end of the transmission will "work", but will leave the worker thread blocked indefinitely, consuming resources and with an uncertain, system-dependent fate.
You cannot join the thread before exiting, since it is blocked (so trying to join it would deadlock your main thread). When your process exits and the socket is closed, the thread will unblock, but will (at least on some operating systems, e.g. Windows) be terminated immediately afterwards. This likely won't do much harm, but terminating a thread in an uncontrolled way is always less desirable than having it exit properly. On other operating systems, you may have a lingering thread remaining.
Since you are using C++, there are alternative libraries that greatly simplify network programming compared to stock C. My personal favourite is Boost.Asio, though others are available. These libraries not only save you the pain of coding in C, but also provide asynchronous capabilities to work around your blocking problem.
The typical approach is to use select()/pselect() or poll()/ppoll(). Both allow you to specify a timeout in order to exit if there is no incoming data.
However, I don't see why you would "call recv after everything that should be received": relying on the timeout even when there are no network problems would be extremely inefficient.
Either you send the size of the data before the data itself, and that is how much you read, or the data connection is terminated with an EOF, in which case read() returns 0 and you exit.
I can think of two options that will not require a major rewrite of your existing code and a third one which is more radical:
use non-blocking I/O and poll for data periodically. You can do other work while a message remains incomplete or no further data can be read from the socket.
use a separate worker thread to do the I/O. Even if it blocks on synchronous recv() calls, your main thread can continue to do work. The worker thread can transfer the data it receives to the main thread for processing once a complete message is received via TCP.
use an OS specific feature (I/O completion ports on Windows or aio on Linux), but these are far more complex and you should definitely consider Boost.Asio before going this route.
You can put the recv function in its own thread and do the processing in another thread.
Our team is implementing a VNC viewer (=VNC client) on Windows. The protocol (called RFB) is stateful, meaning that the viewer has to read 1 byte, see what it is, then read either 3 or 10 bytes more, parse them, and so on.
We've decided to use asynchronous sockets and a single (UI) thread. Consequently, there are 2 ways to go:
1) state machine -- if we get a block on socket reading, just remember the current state and quit. Later on, a socket notification will arrive and the interrupted logic will resume from the proper stage;
2) inner message loop -- once we determine that reading from the socket would block, we enter an inner message loop and spin there until all the necessary data is finally received.
The UI is thus not frozen when a read would block.
As experience showed, the second approach is bad, since any message can arrive while we're in the inner message loop. I cannot tell the full story here, but it simply is not reliable enough: crashes and kludges.
The first option seems quite acceptable, but it is not easy to program in such a style. One has to remember the state of the algorithm and the values of all the local variables required for further processing.
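To illustrate what "remembering the state" involves, a minimal sketch of such a state machine (the message types and body lengths here are placeholders, not real RFB values):

#include <cstddef>
#include <cstdint>
#include <vector>

enum class State { WaitType, WaitBody };

struct RfbReader {
    State state = State::WaitType;
    std::uint8_t msg_type = 0;
    std::size_t body_needed = 0;
    std::vector<std::uint8_t> buf;

    // Called from the socket notification; consumes whatever has arrived
    // and returns as soon as more data is needed.
    void on_data(const std::uint8_t *data, std::size_t len) {
        buf.insert(buf.end(), data, data + len);
        for (;;) {
            if (state == State::WaitType) {
                if (buf.empty()) return;
                msg_type = buf[0];
                body_needed = (msg_type == 0) ? 3 : 10; // per-type body length
                buf.erase(buf.begin());
                state = State::WaitBody;
            } else {
                if (buf.size() < body_needed) return;
                // handle_message(msg_type, buf.data(), body_needed);
                buf.erase(buf.begin(), buf.begin() + static_cast<std::ptrdiff_t>(body_needed));
                state = State::WaitType;
            }
        }
    }
};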
It is quite possible to use multiple threads, but we thought that the problems in that case would be even harder: synchronization of frame-buffer access, general multi-threading issues, and so on. Moreover, even in that variant it seems necessary to use asynchronous sockets as well.
So, which way is, in your opinion, the best?
The problem is quite a general one: organizing asynchronous communication over a stateful protocol.
Edit 1: We use C++ and MFC as UI framework.
I've done a few parallel computing projects, and it seems that MPI (Message Passing Interface) might be helpful for your VNC project. You're probably not so interested in the parallel computing power MPI provides, but you may want its simplified, socket-like interface for asynchronous communication over a network.
http://www.open-mpi.org/
You can find other implementations of MPI and plenty of usage examples via Google.
Don't bother with CSocket; you'll move to CAsyncSocket in the end because of the extra control you get (interrupting, shutting down, etc.). I'd also recommend using a separate thread to manage the communication. It adds complexity, but keeping the UI responsive should be a top priority.
I think you will find that your design will be simplified greatly by using a separate thread to handle a blocking socket.
The main reason is that you don't need to spin and wait. The UI remains responsive while the network thread blocks when it has nothing to do and wakes up when it does. You are effectively offloading a large portion of your overhead to the OS.
Remember, RFB does not require a lot of state information to work, because client-to-server messages are short; nothing requires you to receive a frame-buffer update before you send your next pointer input.
My point is that RFB messages can be interleaved; the server will work on your schedule.
Now, Windows provides easy-to-use synchronization APIs that, while not always the most efficient, are more than enough for your purposes and will make it easy to get a proof of concept up and running.
Take a look at Windows Synchronization and specifically Critical Sections
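For instance, a minimal sketch of guarding a shared frame buffer with a critical section (g_framebuffer and the two functions are illustrative, not from any real code base):

#include <windows.h>

CRITICAL_SECTION g_fb_lock;
// unsigned char g_framebuffer[WIDTH * HEIGHT * 4]; // shared between threads

void init_locks() {
    InitializeCriticalSection(&g_fb_lock);
}

void update_framebuffer(/* decoded rectangle */) {
    EnterCriticalSection(&g_fb_lock);   // network thread writes...
    // ...copy decoded pixels into g_framebuffer...
    LeaveCriticalSection(&g_fb_lock);
}

void paint_window() {
    EnterCriticalSection(&g_fb_lock);   // ...UI thread reads
    // ...blit g_framebuffer to the window...
    LeaveCriticalSection(&g_fb_lock);
}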
Just my 2 cents; I've implemented both a VNC server and a client on Windows, and these were my impressions.