waiting for 2 different events in a single thread - c++

REMOVED - reason: not really needed.
my questions are:
Can I use a Linux UDP socket from two different threads? (answer was here)
I have two different events I would like to wait for using just one thread. One such event is the addition of an element to a stack, and the other is the availability of data on a socket.
I can use a boost::condition_variable.wait(lock) for the stack and boost::asio::io_service for the socket. But there is no mechanism (that I am aware of) that allows me to wait for both events at the same time (polling is out of the question). Or is it?
Is there any other alternative solution for this problem that I'm not aware of? - I'll figure this one out by myself.

New Answer
But there is no mechanism (that I am aware of) that allows me to wait for both events at the same time (polling is out of the question). Or is it?
Not that I'm aware of, and not without polling... you'll need a thread to wait for each asynchronous event. You can use a blocking stack or, like you said, use a boost::condition_variable, which blocks until there is something on the stack. The boost::asio::io_service will be very useful for managing the UDP sockets, but it doesn't actually give you any advantage when it comes to the event handling.
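For illustration, a minimal sketch of such a blocking stack built on boost::mutex and boost::condition_variable (the class name and element type are arbitrary); one thread calls pop() and sleeps until a producer pushes, while the socket still gets its own thread or io_service to block on:

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>
#include <stack>
#include <utility>

template <typename T>
class BlockingStack {
public:
    void push(T value) {
        boost::mutex::scoped_lock lock(mutex_);
        stack_.push(std::move(value));
        cond_.notify_one();              // wake up a thread blocked in pop()
    }
    T pop() {                            // blocks until an element is available
        boost::mutex::scoped_lock lock(mutex_);
        while (stack_.empty())
            cond_.wait(lock);
        T value = std::move(stack_.top());
        stack_.pop();
        return value;
    }
private:
    std::stack<T> stack_;
    boost::mutex mutex_;
    boost::condition_variable cond_;
};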
Old Answer
I'm REALLY not sure what you're trying to do... what you're saying doesn't make much sense. I'll do my best to guess what you're trying to do, but I would suggest clarifying the question.
Question:
Do I really need to use the main thread to send the data over component A socket or can I do it from the new-thread? (I think the answer is no, but I'm not sure about race conditions on sockets)
Answer:
You don't have to use the main thread to send data over the given component's socket. Now, depending on the socket library you're using, there might be different restrictions: you may only be able to send data on the same thread that the socket was created on, or you might be able to send data from any thread... it really depends on the implementation of your socket.
Question:
how do I wait for both events?
Answer:
You can't do two things at the same time in the same thread... with that said you have two options:
Constantly poll to see if either event has occurred (on the same thread).
Have two threads that are blocking until a desired event occurs (usually when you read from a socket it blocks if there is no data).
Given the description of your problem it's unclear what you would achieve by using boost::condition_variable and/or boost::asio::io_service. Perhaps you should give us a very simple example of code that we can follow.
Question:
Is there any other alternative solution for this problem that I'm not aware of?
Answer:
There are always alternative solutions out there, but it's really difficult to tell what the alternatives might be given the current description of the "problem." I think that you should edit the problem again and focus on providing very concrete examples, perhaps some pseudo code, etc.

Switch to Windows and use WaitForMultipleObjects, or get this function implemented in Linux. It's quite handy, and then you can do two things on the same thread.
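For illustration, a rough Windows-only sketch (both handles are placeholders: hDataReady would be an event set by the producer thread, hSocketEvent would come from WSAEventSelect on the socket):

#include <windows.h>

void wait_loop(HANDLE hDataReady, HANDLE hSocketEvent) {
    HANDLE handles[2] = { hDataReady, hSocketEvent };
    for (;;) {
        DWORD r = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (r == WAIT_OBJECT_0)          { /* pop the stack */ }
        else if (r == WAIT_OBJECT_0 + 1) { /* read the socket */ }
        else break;                      // WAIT_FAILED or WAIT_ABANDONED
    }
}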

Related

What's the most efficient way to async send data while async receiving with 0MQ?

I've got a ROUTER/DEALER setup where both ends need to be able to receive and send data asynchronously, as soon as it's available. The model is pretty much 0MQ's async C++ server: http://zguide.zeromq.org/cpp:asyncsrv
Both the client and the server workers poll; when there's data available, they call a callback. While this happens, from another thread (!), I'm putting data in a std::deque. In each poll-forever thread, I check the deque (under lock), and if there are items there, I send them out to the specified DEALER id (the id is placed in the queue).
But I can't help thinking that this is not idiomatic 0MQ. The mutex is possibly a design problem. Plus, memory consumption can probably get quite high if enough time passes between polls (and data accumulates in the deque).
The only alternative I can think of is having another DEALER thread connect to an inproc each time I want to send out data, and just have it send it and exit. However, this implies a connect per item of data sent + construction and destruction of a socket, and it's probably not ideal.
Is there an idiomatic 0MQ way to do this, and if so, what is it?
I don't fully understand your design, but I do understand your concern about using locks.
In most cases you can redesign your code to remove the use of locks using zeromq PAIR sockets and inproc.
Do you really need a std::deque? If not, you could just use a zeromq queue, as it's just a queue that you can read/write from different threads using sockets.
If you really need the deque, then encapsulate it into its own thread (a class would be nice) and make its API (push, etc.) accessible via inproc sockets.
So, like I said before, I may be on the wrong track, but in 99% of the cases I have come across you can always remove the locks completely with some ZMQ_PAIR/inproc if you need signalling.
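As a rough sketch of that PAIR/inproc signalling, using the classic cppzmq API (the endpoint name and payload are made up; note that an inproc endpoint must be bound before it is connected):

#include <zmq.hpp>
#include <thread>

int main() {
    zmq::context_t ctx(1);

    zmq::socket_t tx(ctx, ZMQ_PAIR);
    tx.bind("inproc://to-dealer");            // bind before any connect on inproc

    std::thread sender([&ctx] {
        zmq::socket_t rx(ctx, ZMQ_PAIR);
        rx.connect("inproc://to-dealer");
        zmq::message_t msg;
        rx.recv(&msg);                        // blocks until the other thread sends
        // forward msg out over the DEALER socket here
    });

    const char data[] = "hello";
    zmq::message_t msg(data, sizeof data - 1);
    tx.send(msg);                             // no mutex, no shared deque

    sender.join();
}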
A 0MQ queue has a limited buffer size, and that limit can be controlled. So memory usage will grow up to some point, and then data will start being dropped. For that reason you may consider using the conflate option, which keeps only the most recent data in the queue.
In the case of a single server communicating within a single machine with many threads, I suggest using a publish/subscribe model where, with the conflate option, you receive the newest data as soon as you read the buffer and don't have to worry about memory. It also removes the blocking-queue problem.
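A minimal sketch of the subscriber side with the conflate option (the endpoint is a placeholder; the option is set before connecting so that only the newest message is kept in the queue):

#include <zmq.hpp>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, ZMQ_SUB);

    int conflate = 1;
    sub.setsockopt(ZMQ_CONFLATE, &conflate, sizeof conflate);
    sub.setsockopt(ZMQ_SUBSCRIBE, "", 0);     // subscribe to everything
    sub.connect("tcp://127.0.0.1:5556");

    zmq::message_t msg;
    sub.recv(&msg);                           // always the most recently published message
}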
As for your implementation, you are quite right: it is not the best design, but it is quite unavoidable. I suggest checking the question "Access std::deque from 3 threads"; while it answers your problem, it may not be the best approach.

Efficiency and timeout for network servers

I'm in a situation where I have to "ping" [not ICMP] (by ping I mean use an application protocol I made in order to signal multiple sockets to see if they have died or not [similar to a watchdog timer]).
Since I was limited to asynchronous selecting in my library (bound to a window message loop), I decided to improve its efficiency: instead of receiving the data directly via the GUI messages, I forward it to a thread pool via a data structure and let a queue of threads handle it.
Right, my initial idea was to use two semaphores: one to handle a blocking queue (of IO requests) and another to handle all the timeout pings.
Does this seem like a reasonable idea? Is there a better solution, perhaps a timer, a mutex or something else?
A second question that I might ask would be - apart from a synchronization object, is there any other way I can create a blocking container? I'm not accepting Sleep(1) solutions by the way.
Thank you.

catching socket and signal events with pselect

I'm making a messaging service that needs to use both socket io and shared memory. The routine will be the same regardless of where the input comes from, with the only difference being local messages will be passed via shared memory and non-local messages over a socket. Both events will have to unblock the same pselect call.
At this point I think the best option might be to send a signal whenever a message is written to shared memory and use it to interrupt a pselect call but I'm not quite sure how this would be done or even if it's the best route.
I'm not used to using signals. What's the best way to accomplish this?
I would consider using a pipe (see pipe(2)) or an AF_UNIX local unix(7) socket(2) (as commented by caf), at least to transmit control information (for synchronization) about the shared memory, i.e. to tell when it has changed, that is, when a message has been sent through shared memory, etc. Then you can still multiplex with e.g. poll(2) (or ppoll(2), pselect(2), etc.).
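A minimal sketch of that arrangement (sock_fd and pipe_read_fd are assumed to already exist; the writer pushes one byte down the pipe after each message placed in shared memory):

#include <poll.h>
#include <unistd.h>

void event_loop(int sock_fd, int pipe_read_fd) {
    struct pollfd fds[2];
    fds[0].fd = sock_fd;      fds[0].events = POLLIN;
    fds[1].fd = pipe_read_fd; fds[1].events = POLLIN;
    for (;;) {
        if (poll(fds, 2, -1) < 0)             // wait forever for either event
            break;
        if (fds[0].revents & POLLIN) { /* read the socket message */ }
        if (fds[1].revents & POLLIN) {
            char byte;
            read(pipe_read_fd, &byte, 1);     // drain the notification
            /* read the new message from shared memory */
        }
    }
}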
I don't think that synchronization using signals is the right approach: signals are difficult to get right (so coding is tricky) and they are not more efficient than exchanging a few bytes on some pipe.
Did you consider using MPI?
If you only want to signal between processes rather than pass data, then an eventfd (see eventfd(2)) will allow you to use select() with less overhead than a pipe. As with a pipe solution, the processes will require a parent/child relationship.
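For illustration, a small eventfd/select() sketch (both sides live in one process here; with fork() the child inherits the descriptor, which is where the parent/child requirement comes from):

#include <sys/eventfd.h>
#include <sys/select.h>
#include <unistd.h>
#include <cstdint>

int main() {
    int efd = eventfd(0, 0);

    // notifier side: a single 8-byte write signals the waiter
    uint64_t one = 1;
    write(efd, &one, sizeof one);

    // waiter side: the eventfd goes into the same select() set as the sockets
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(efd, &readfds);
    select(efd + 1, &readfds, nullptr, nullptr, nullptr);

    uint64_t value;
    read(efd, &value, sizeof value);          // resets the counter; value = pending count
    close(efd);
}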
If you want to use signals, use sigqueue to send them - you can send an integer payload with this, for example an offset into your shared memory.
Make sure to register your signal handler with sigaction and use the sa_sigaction callback: the siginfo_t->si_int member will contain that payload.
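Roughly, the two halves look like this (SIGRTMIN and the payload value are arbitrary choices, and the process signals itself here purely for the demo):

#include <signal.h>
#include <unistd.h>

static void on_message(int, siginfo_t *info, void *) {
    int offset = info->si_value.sival_int;    // the sigqueue() payload (si_int on Linux)
    (void)offset;                             // e.g. an offset into the shared memory
}

int main() {
    struct sigaction sa = {};
    sa.sa_flags = SA_SIGINFO;                 // ask for the three-argument handler
    sa.sa_sigaction = on_message;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, nullptr);

    union sigval payload;
    payload.sival_int = 42;                   // hypothetical shared-memory offset
    sigqueue(getpid(), SIGRTMIN, payload);    // normally sent from the other process

    sleep(1);                                 // give the signal time to arrive (demo only)
}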
In general, I'm not sure I can recommend using this mechanism instead of a unix pipe or eventfd, because I'm not sure whether signal delivery is really tuned for speed as you hope: benchmark to be sure.
PS. performance aside, one reason signals feel a bit icky is that you lose the opportunity to have a "well-known" rendezvous like an inet or unix port, and instead have to go looking for a PID. Also, you have to be very careful about masking to make sure the signal is delivered where you want.
PPS. You raise or send a signal - you don't throw it. That's for exceptions.
I did some additional looking and came across signalfd(2). I believe this will be the best solution - very similar to Basile Starynkevitch's suggestion but without the overhead of standard pipes and done within the kernel rather than userspace.
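A minimal signalfd sketch along those lines (SIGRTMIN is again an arbitrary choice; in a real program the descriptor would sit in the same select()/pselect() set as the socket):

#include <sys/signalfd.h>
#include <signal.h>
#include <unistd.h>

int main() {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &mask, nullptr);   // stop normal delivery so signalfd sees it

    int sfd = signalfd(-1, &mask, 0);         // becomes readable when SIGRTMIN arrives

    struct signalfd_siginfo si;
    read(sfd, &si, sizeof si);                // blocks here; in practice select() on sfd
    int offset = si.ssi_int;                  // the sigqueue() payload, as before
    (void)offset;
    close(sfd);
}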
pipe+select+queue+lock, nothing else.

C++ Fastest Way to Hit a URL

I'm trying to ping a URL on a server in the middle of my high-performance C++ application, where every millisecond is critical. I don't care about the return data from the query... I just need to send an HTTP request to a specific URL (to cause it to load), and I'm trying to find the most effective, non-blocking method to accomplish this.
My application uses Boost::ASIO, but most methods to do this seem to involve building and tearing down sockets each time (which might unfortunately be necessary), but I'm hoping there's a basic C/C++ socket one-liner that won't cause any overhead, memory leaks, blocking, etc. Just quickly open a socket, shoot the HTTP request off, and move along.
And this will need to happen thousands of times per second, so socket overhead is important (I don't want to flood the OS).
Anyone have any advice on the most efficient way to accomplish this?
Thanks so much!
With thousands of notifications sent per second, I can't imagine opening a socket connection for each one. That would probably be too inefficient due to the overhead. So, as Casey suggested, try using a dedicated connection.
Since it sounds like you are doing quite a bit of processing on your main thread, you might consider creating a worker thread for the socket work. You will probably need to use thread synchronization objects like a mutex or critical section to single-thread the code - at least when updating a container (probably a queue) from your main thread and reading it from the worker thread.
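A rough sketch of that worker-thread arrangement, using a condition variable so the worker sleeps instead of spinning (the host, port and request path are made-up placeholders, and the older io_service/resolver::query style is used):

#include <boost/asio.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

int main() {
    std::queue<std::string> paths;            // filled by the main thread
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    std::thread worker([&] {
        using boost::asio::ip::tcp;
        boost::asio::io_service io;
        tcp::resolver resolver(io);
        tcp::socket socket(io);
        boost::asio::connect(socket, resolver.resolve(tcp::resolver::query("example.com", "80")));
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return done || !paths.empty(); });
            if (paths.empty()) break;                      // done and fully drained
            std::string path = paths.front();
            paths.pop();
            lock.unlock();                                 // don't hold the lock while writing
            std::string req = "GET " + path + " HTTP/1.1\r\n"
                              "Host: example.com\r\nConnection: keep-alive\r\n\r\n";
            boost::asio::write(socket, boost::asio::buffer(req));
        }
    });

    {   // main thread: enqueue a request and notify, then carry on with its own work
        std::lock_guard<std::mutex> lock(m);
        paths.push("/ping");
    }
    cv.notify_one();

    {   std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_one();
    worker.join();
}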

Reading information from a worker thread efficiently

I'm writing some computer vision software, here's a brief description to help clarify the problem:
I have 3 cameras, each running at 60fps
Each camera has its own thread, to utilise multiple cores
Each thread waits for a new frame to arrive, does some processing on the image, saves the result and waits for the next frame
My main program creates these threads, using boost, following this tutorial: http://blog.emptycrate.com/node/282
I am currently polling the threads in a tight loop to retrieve the data, e.g.:
while (1) {
    for (i = 0; i < numCams; i++) {
        result[i] = cam[i]->getResult();
    }
    // do some stuff
}
This seems silly. Is there a standard way of letting the main program know that there is a new result and that it needs to be retrieved?
Thanks!
Yes, you need to use condition variables (AKA events).
Yes, you need to use synchronization. There are many forms depending on what you're using as a threading API, however the simplest is probably a condition variable.
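For example, a minimal boost-based sketch (Result is a stand-in for whatever getResult() returns): each camera thread pushes into a shared queue and notifies, and the main loop blocks until something arrives instead of spinning.

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>
#include <queue>

struct Result { int camId; /* ... */ };

std::queue<Result> results;
boost::mutex mtx;
boost::condition_variable cond;

// called from a camera thread once a frame has been processed
void publishResult(const Result& r) {
    boost::mutex::scoped_lock lock(mtx);
    results.push(r);
    cond.notify_one();
}

// called from the main loop; sleeps until at least one result is available
Result waitForResult() {
    boost::mutex::scoped_lock lock(mtx);
    while (results.empty())
        cond.wait(lock);
    Result r = results.front();
    results.pop();
    return r;
}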
What you need is a thread pool. The number of cameras isn't necessarily the same as the optimal number of threads. A thread pool is optimized for performance. Then you don't need to wait on a condition or poll for jobs; you enqueue the jobs (most often a std::function<void()>) in the thread pool, and that job object should perform all the required work. Use binders (std::bind) or lambda functions to create a job object.
In your case you are talking to hardware, so you may need to use whatever facilities your camera API provides for asynchronous notification of incoming data. Usually that will be some kind of callback you provide, or occasionally something like a Windows Event handle or Unix signal.
In general, if you meant "standard" as in "part of the C++ standard", no. You need to use your OS's facilities for interprocess (or thread) condition signalling.
Note that if we were talking Ada (or Modula-2, or many other modern systems programming languages) the answer would have been "yes". I understand there is some talk of putting concurrency support of some kind into a future C++ standard.
In the meantime, there is the boost::thread library for doing this kind of thing. That isn't exactly "standard", but for C++ it is pretty close. I think for what you are trying to do, condition variables might be what you want. However, if you read over the whole facility, other simpler designs may occur to you.
I know this sounds a little odd; however, consider using a boost::asio::io_service. It's as close to a thread pool as you currently get. When you've captured an image, you can post to this service, and the service can then execute a handler asynchronously to handle your image data.
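A rough sketch of that idea (the thread count and processFrame are placeholders; a long-running capture loop would also keep a boost::asio::io_service::work object alive so run() doesn't return between frames):

#include <boost/asio.hpp>
#include <boost/thread.hpp>

void processFrame(int camId) {
    (void)camId;                              // image processing for one frame goes here
}

int main() {
    boost::asio::io_service io;

    // in real code these posts would come from the camera capture callbacks
    io.post([] { processFrame(0); });
    io.post([] { processFrame(1); });

    boost::thread_group pool;
    for (int i = 0; i < 3; ++i)
        pool.create_thread([&io] { io.run(); });   // each run() returns once the queue drains
    pool.join_all();
}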