SOCKET sock = generate_socket("fileWizard");
notifier = new QSocketNotifier(sock, QSocketNotifier::Read, this);
connect(notifier, SIGNAL(activated(int)), this, SLOT(some_slot(int)));
The SOCKET is a win32 SOCKET; generate_socket() creates a socket connected to a local executable called "fileWizard" (I don't know the implementation details of generate_socket).
With Qt we just create the socket and connect the signal to a slot, but I can't find a similar example for asio.
I'm not familiar with sockets or asio yet, so please tell me what information you need. Thanks
Edit :
The purpose of the code is to monitor the SOCKET; whenever something changes on it (anything becomes available to read), a callback should be invoked.
This is similar to the asio example "Daytime.3 - An asynchronous TCP daytime server".
The parts that confuse me are:
1: How can I wrap the SOCKET in one of the boost::asio socket types?
2: How can I monitor the socket (our seniors call it a file descriptor) for a "change" (anything available to read)? By async_read?
Boost.Asio sockets support being created on top of an existing native socket through an overloaded constructor. For example, this constructor could be used to build a basic_stream_socket on top of an existing native socket, such as a Windows SOCKET.
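For instance, a minimal sketch of adopting the questioner's existing win32 SOCKET into an Asio TCP socket might look like this (it assumes the SOCKET is an already-connected IPv4 TCP socket; generate_socket is the questioner's own function):

#include <boost/asio.hpp>
#include <winsock2.h>

SOCKET generate_socket(const char* name); // the questioner's existing function

int main()
{
    boost::asio::io_service io_service;
    SOCKET native_sock = generate_socket("fileWizard");

    // Adopt the already-connected native socket into an Asio socket object.
    boost::asio::ip::tcp::socket socket(io_service);
    socket.assign(boost::asio::ip::tcp::v4(), native_sock);
}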
While Boost.Asio does not provide a direct equivalent of Qt's QSocketNotifier class, it does support reactor-style operations through null_buffers(). Both approaches allow the application to be notified when an event occurs, such as when data is ready to be read from a file descriptor. This event-notification capability allows the event loop to be integrated with other event loops or third-party libraries. For a complete example that uses null_buffers(), see the official Boost.Asio non-blocking example.
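As a rough sketch, a reactor-style read on the adopted socket could look like this, playing the role of QSocketNotifier::Read plus the connected slot (io_service.run() must be running for the handler to fire):

void start_waiting(boost::asio::ip::tcp::socket& socket)
{
    // Ask only to be told when the socket becomes readable; no data is transferred here.
    socket.async_read_some(boost::asio::null_buffers(),
        [&socket](const boost::system::error_code& ec, std::size_t /*bytes*/)
        {
            if (!ec)
            {
                // Equivalent of the Qt slot: the descriptor is readable now,
                // so a non-blocking read will succeed.
                char data[1024];
                std::size_t n = socket.read_some(boost::asio::buffer(data));
                // ... process n bytes, then call start_waiting(socket) again ...
            }
        });
}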
Related
I've got a C++ application that is using ZeroMQ for some messaging. But it also has to provide an SCGI connection for an AJAX / Comet based web service.
For this I need a normal TCP socket. I could do that with normal POSIX sockets, but to stay cross-platform portable and make my life easier (I hope...) I was thinking of using Boost.Asio.
But now I have the clash of ZMQ wanting to use its own zmq_poll() and Asio its io_service.run()...
Is there a way to get ASIO to work together with the 0MQ zmq_poll()?
Or is there an other recommended way to achieve such a setup?
Note: I could solve that by using multiple threads - but it's only a little single core / CPU box that'll run that program with a very low amount of SCGI traffic, so multithreading would be a waste of resources...
After reading the documentation here and here, specifically this paragraph
ZMQ_FD: Retrieve file descriptor associated with the socket
The ZMQ_FD
option shall retrieve the file descriptor associated with the
specified socket. The returned file descriptor can be used to
integrate the socket into an existing event loop; the ØMQ library
shall signal any pending events on the socket in an edge-triggered
fashion by making the file descriptor become ready for reading.
I think you can use null_buffers for every zmq_pollitem_t and defer the event loop to an io_service, completely bypassing zmq_poll() altogether. There appear to be some caveats in the aforementioned documentation however, notably
The ability to read from the returned file descriptor does not
necessarily indicate that messages are available to be read from, or
can be written to, the underlying socket; applications must retrieve
the actual event state with a subsequent retrieval of the ZMQ_EVENTS
option.
So when the handler for one of your zmq sockets is fired, you'll have to do a little more work before handling the event I think. Uncompiled pseudo-code is below
const int fd = getZmqDescriptorSomehow();
boost::asio::posix::stream_descriptor socket(_io_service, fd);
socket.async_read_some(
    boost::asio::null_buffers(),
    [=](const boost::system::error_code& error, std::size_t /*bytes_transferred*/)
    {
        if (!error) {
            // The descriptor is readable; check ZMQ_EVENTS and then
            // receive the pending messages.
        }
    });
Note that you don't have to use a lambda here; boost::bind to a member function would be sufficient.
In the end I figured out there are two possible solutions:
Sam Miller's, where we use the event loop of ASIO
The ZeroMQ event loop, by getting the ASIO file descriptors through the .native() methods of the acceptor and the socket and inserting them into the array of zmq_pollitem_t
I have accepted the answer of Sam Miller as that's for me the best solution in the SCGI case, where new connections are constantly created and ended. Handling the thus ever-changing zmq_pollitem_t array is a big hassle that can be avoided by using the ASIO event loop.
Obtaining the socket to ZeroMQ is the smallest part of the battle. ZeroMQ is based on a protocol which is layered over TCP, so you will have to reimplement ZeroMQ within a custom Boost.Asio io_service if you go this route. I ran into the same problem when creating an asynchronous ENet service using Boost.Asio by first simply trying to catch traffic from an ENet client using a Boost.Asio UDP service. ENet is a TCP like protocol layered over UDP, so all I achieved at that point was catching packets in a virtually useless state.
Boost.Asio is template based, and the built-in io_services use templates to basically wrap the system socket library to create TCP and UDP services. My final solution was to create a custom io_service that wrapped the ENet library rather than the system's socket library, allowing it to use ENet's transport functions rather than having to reimplement them using the built-in UDP transport.
The same can be done for ZeroMQ, but ZeroMQ is already a very high performance network library in its own right that already provides async I/O. I think you can create a viable solution by receiving messages using ZeroMQ's existing API and passing the messages into an io_service thread pool. That way messages/tasks will still be handled asynchronously using Boost.Asio's reactor pattern without having to re-write anything. ZeroMQ will provide the async I/O, Boost.Asio will provide the async task handlers/workers.
The existing io_service can still be coupled to an existing TCP socket as well, allowing the thread pool to handle both TCP (HTTP in your case) and ZeroMQ. It's entirely possible in such a setup for the ZeroMQ task handlers to access the TCP service's session objects, allowing you to send the results of the ZeroMQ message/task back to a TCP client.
The following is just to illustrate the concept.
// Create a pool of threads to run all of the io_services.
std::vector<boost::shared_ptr<boost::thread> > threads;
for (std::size_t i = 0; i < thread_pool_size_; ++i) {
    boost::shared_ptr<boost::thread> thread(
        new boost::thread(boost::bind(&boost::asio::io_service::run, &io_service_)));
    threads.push_back(thread);
}

while (true) {
    char buffer[10];
    zmq_recv(responder_, buffer, 10, 0);
    // Copy the data so it outlives this iteration; note the object pointer
    // comes first when binding a member function.
    io_service_.post(boost::bind(&server::handle_zeromq_message, this,
                                 std::string(buffer, 10)));
}
Two years after this question was asked, someone posted a project which does exactly this. The project is here: https://github.com/zeromq/azmq. The blog post discussing the design is here: https://rodgert.github.io/2014/12/24/boost-asio-and-zeromq-pt1/.
Here is the sample code copied from the readme:
#include <azmq/socket.hpp>
#include <boost/asio.hpp>
#include <array>

namespace asio = boost::asio;

int main(int argc, char** argv) {
    asio::io_service ios;
    azmq::sub_socket subscriber(ios);
    subscriber.connect("tcp://192.168.55.112:5556");
    subscriber.connect("tcp://192.168.55.201:7721");
    subscriber.set_option(azmq::socket::subscribe("NASDAQ"));

    azmq::pub_socket publisher(ios);
    publisher.bind("ipc://nasdaq-feed");

    std::array<char, 256> buf;
    for (;;) {
        auto size = subscriber.receive(asio::buffer(buf));
        publisher.send(asio::buffer(buf));
    }
    return 0;
}
Looks nice. If you try, let me know in the comments if it still works in 2019 [I will probably try in a couple of months and then update this answer] (the repo is stale, last commit was a year ago)
The solution is to poll() your io_service instead of calling run().
Check out this solution for some poll() info.
Using poll instead of run will allow you to poll zmq's connections without any blocking issues.
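A minimal sketch of such a combined loop, where zmq_items_, num_items_ and handle_ready_zmq_items() stand in for whatever your application already has:

while (running)
{
    // Run any Asio handlers that are ready, without blocking.
    io_service.poll();

    // Wait briefly on the ZeroMQ sockets so the loop keeps turning.
    int rc = zmq_poll(zmq_items_, num_items_, 10 /* short timeout */);
    if (rc > 0)
        handle_ready_zmq_items();
}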
I have my C++ program that forks into two processes, 1 (the original) and 2 (the forked process).
In the forked process (2), it execs program A that does a lot of computation.
The original process (1) communicates with that program A through standard input and output redirected to pipes.
I am trying to add a websocket connection to my code in the original process (1). I would like my original process to effectively select or epoll on whether there is data to be read from the pipe to program A or there is data to be read from the websocket connection.
Given that a beast websocket is not a file descriptor how can I do the effect of select or epoll?
Which version of Boost are you using? If it is relatively recent it should include support for boost::process::async_pipe which allows you to use I/O Objects besides sockets asynchronously with Asio. Examples are provided in the tutorials for the boost::process library. Since Beast uses the Asio library to perform I/O under the hood, you can combine the two quite easily.
Given that a beast websocket is not a file descriptor...
The Beast WebSocket is not a file descriptor, but it does use TCP sockets to perform I/O (see the linked examples above), and Asio is very good at using select/epoll with TCP sockets. Just make sure you are doing the async_read, async_write and io_service::run operations as usual.
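As a rough sketch of how the two can share one event loop (the pipe and the WebSocket are assumed to be connected elsewhere via boost::process and a normal accept/handshake; buffers and handlers are only illustrative):

#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <boost/process/async_pipe.hpp>
#include <array>
#include <iostream>

namespace asio  = boost::asio;
namespace beast = boost::beast;

int main()
{
    asio::io_context ioc;

    // Pipe attached to program A's stdout (set up via boost::process elsewhere).
    boost::process::async_pipe pipe(ioc);

    // WebSocket stream, assumed to be already accepted and handshaken.
    beast::websocket::stream<asio::ip::tcp::socket> ws(ioc);

    std::array<char, 1024> pipe_buf;
    beast::flat_buffer ws_buf;

    // Both reads are outstanding at once; whichever source has data first
    // fires its handler, which gives the select()/epoll() effect being asked for.
    pipe.async_read_some(asio::buffer(pipe_buf),
        [&](const boost::system::error_code& ec, std::size_t n)
        {
            if (!ec) std::cout << "pipe: " << n << " bytes\n";
        });

    ws.async_read(ws_buf,
        [&](const boost::system::error_code& ec, std::size_t n)
        {
            if (!ec) std::cout << "websocket: " << n << " bytes\n";
        });

    ioc.run();  // one event loop multiplexes both sources
}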
You can make a small change in your code: replace the pipe with two message queues, for example out_q and response_q. Your child process A will continuously read out_q, and whenever your main process drops a message into out_q it will not wait for any response from the child; the child will simply consume that message. Communication through a message queue is asynchronous. But if you still need some kind of reply, like a success or failure message from the child, you can get it through response_q, which will be read by your parent process. To match a response from the child against the specific message originally sent by the parent, you can use a correlation id (read a little about correlation ids).
Now, in the parent process, implement two threads: one will continuously read the web connection and the other will read standard input, plus one method (probably static) that drops messages into out_q. Use a mutex so that only one thread at a time can call it and drop a message into out_q. Your main thread or process will read response_q. In this way you can make everything parallel and asynchronous. If you don't want to use threads, you still have the option to fork() and create two child processes for the same purpose. Hope this helps.
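As a small sketch of the shared "drop a message" method described above, using a boost::interprocess message queue (out_q, the mutex and the correlation-id framing are only illustrative):

#include <boost/interprocess/ipc/message_queue.hpp>
#include <mutex>
#include <string>

namespace ipc = boost::interprocess;

// Shared by the two reader threads; the mutex serialises sends to out_q.
void send_to_child(ipc::message_queue& out_q, std::mutex& send_mutex,
                   const std::string& correlation_id, const std::string& payload)
{
    std::lock_guard<std::mutex> lock(send_mutex);
    std::string msg = correlation_id + '|' + payload;   // tag the message so the reply can be matched
    out_q.send(msg.data(), msg.size(), 0 /* priority */);
}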
The ZeroMQ FAQ states in the Why can't I use standard I/O multiplexing functions such as select() or poll() on ZeroMQ sockets? question:
Note that there's a way to retrieve a file descriptor from ZeroMQ socket (ZMQ_FD socket option) that you can poll on from version 2.1 onwards, however, there are some serious caveats when using it. Check the documentation carefully before using this feature.
I've prototyped integrating ZeroMQ socket receiving into Qt's and custom select()-based event loops, and at first glance everything seems to work.
From the documentation I have identified two "caveats" that I handle in my code:
The ability to read from the returned file descriptor does not necessarily indicate that messages are available to be read from the socket
This I have solved by checking ZMQ_EVENTS before reading from the socket.
Events are signaled in edge-triggered fashion
This one I have solved by always receiving all the messages from the socket when the file descriptor signals.
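For reference, a rough sketch of how I combine both checks when the descriptor signals (handle_message is a placeholder for the application callback):

void on_zmq_fd_readable(void* zmq_socket)
{
    for (;;)
    {
        // Caveat 1: the fd being readable does not mean a message is waiting;
        // query ZMQ_EVENTS for the real state.
        int events = 0;
        size_t len = sizeof(events);
        zmq_getsockopt(zmq_socket, ZMQ_EVENTS, &events, &len);
        if (!(events & ZMQ_POLLIN))
            break;

        // Caveat 2: signalling is edge-triggered, so drain every pending
        // message before returning to the outer event loop.
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        zmq_msg_recv(&msg, zmq_socket, ZMQ_DONTWAIT);
        handle_message(&msg);
        zmq_msg_close(&msg);
    }
}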
Are there some caveats that I'm missing?
I'm trying to implement a simple networked game (client - server) which uses UDP to transfer game events over the network, and I have this working well, but now I would like to add chat over TCP to the same console application. I've tried to implement a multi-client chat using select() and a non-blocking master socket. The chat works as a standalone application, but I have problems putting it all together.
Basically my server loop looks like this:
while(true)
{
    sendUDPdata()
    ...
    while(true)
    {
        receiveUDPdata()
    }
}
Problem is that when I want to add chat to server's main loop (handling UDP) like this:
while(true)
{
    HandleTCPConnections();
    sendUDPdata();
    ...
    while(true)
    {
        receiveUDPdata();
    }
}
calling select() in HandleTCPConnections() blocks the whole server. Is there any way to handle this?
There are two good ways to do this:
Use threads. Have a thread to handle your TCP sockets and a thread to handle your UDP sockets.
Use a reactor. Both the UDP code and the TCP code register their sockets with the reactor. The reactor blocks on all the sockets (typically using poll) and calls into the appropriate code when activity occurs on that socket.
There are lots of libraries out there for both of these options (such as libevent and boost.asio) that you can use if you don't want to reinvent the wheel.
In your case, select is a blocking call when there's no data available on the sockets.
Your chat can either run along with the server or in parallel with it: you've already got the first case; for the second, you'd better go for a separate thread that handles the chat. C++ has <thread>, which you may want to look into.
A separate thread is easier to implement in this case because you have a separate connection, and therefore separate sockets, that would otherwise need to be guarded for concurrent access.
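A minimal sketch of that approach, reusing the question's function names (any state shared between the two loops would still need its own synchronisation):

#include <thread>

// The question's existing functions, declared here only for the sketch.
void HandleTCPConnections();
void sendUDPdata();
void receiveUDPdata();

int main()
{
    // The chat runs on its own thread, so its blocking select() cannot stall the game loop.
    std::thread chat_thread([]
    {
        for (;;)
            HandleTCPConnections();
    });

    // The game loop keeps its original structure.
    for (;;)
    {
        sendUDPdata();
        receiveUDPdata();
    }

    chat_thread.join();  // never reached in this sketch
}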
Hi, I am working on an assignment writing a multi-threaded client-server program.
What I have done so far is open a socket on a port and fork two threads for listening and writing to the client. But I need to connect two types of clients to the server and service them differently. My question is: what would be my best approach?
I am handling connections in a class which has an infinite loop to accept connections. Whenever a connection is accepted, the class creates two threads to read from and write to the client. Now, if I want to handle another client of a different type, what should I do?
Do I need to open another port, or is it possible to serve both through the same port? Maybe if it is possible to identify the type of client on the socket, then I can handle the messages differently.
Or do you suggest something like this?
Fork two threads for the two types of client and monitor inbound connections in each thread on a different port.
When a connection is accepted, each thread spawns another two threads for listening and writing.
Please make a suggestion.
Perhaps you'll get a better answer from a Unix user, but I'll provide what I know.
Your server needs a thread that opens a 'listening' socket that waits for incoming connections. This thread can be the main thread for simplicity, but can be an alternate thread if you are concerned about UI interaction, for example (in Windows, this would be a concern, not sure about Unix). It sounds like you are at least this far.
When the 'listening' socket accepts a connection, you get a 'connected' socket that is connected to the 'client' socket. You would pass this 'connected' socket to a new thread that manages the reading from and writing to the 'connected' socket. Thus, one change I would suggest is managing the 'connected' socket in a single thread, not two separate threads (one for reading, one for writing) as you have done. Reading and writing against the same socket can be accomplished using the select() system call, as shown here.
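A small sketch of that loop, assuming the 'connected' socket is a plain POSIX descriptor:

#include <sys/select.h>

void service_connection(int connected_fd)
{
    bool have_output = false;   // set when there is queued data to send

    for (;;)
    {
        fd_set read_fds, write_fds;
        FD_ZERO(&read_fds);
        FD_ZERO(&write_fds);
        FD_SET(connected_fd, &read_fds);
        if (have_output)                       // only watch for writability when needed,
            FD_SET(connected_fd, &write_fds);  // otherwise select() returns immediately

        if (select(connected_fd + 1, &read_fds, &write_fds, NULL, NULL) < 0)
            break;  // error: drop the connection

        if (FD_ISSET(connected_fd, &read_fds))
        {
            // read and process client data here
        }
        if (FD_ISSET(connected_fd, &write_fds))
        {
            // write queued responses here, clearing have_output when done
        }
    }
}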
When a new client connects, your 'listening' socket will provide a new 'connected' socket, which you will hand off to another thread. At this point, you have two threads - one that is managing the first connection and one that is managing the second connection. As far as the sockets are concerned, there is no distinction between the clients. You simply have two open connections, one to each of your two clients.
At this point, the question becomes what does it mean to "service them differently". If the clients are expected to interact with the server in unique ways, then this has to be determined somehow. The interactions could be determined based on the 'client' socket's IP address, which you can query, but this seems arbitrary and is subject to network changes. It could also be based on the initial block of data received from the 'client' socket which indicates the type of interaction required. In this case, the thread that is managing the 'connected' socket could read the socket for the expected type of interaction and then hand the socket off to a class object that manages that interaction type.
I hope this helps.
You can handle the read-write on a single client connection in one thread. The simplest solution based on multiple threads is this:
// C++-like pseudo-code
while (server_running)
{
    client = server.accept();
    ClientHandlingThread* cth = CreateNewClientHandlingThread(client);
    cth->start();
}

class ClientHandlingThread
{
    void start()
    {
        std::string header = client->read_protocol_header();
        // We get a specific implementation of the ProtocolHandler abstract class
        // from a factory, which creates objects by inspecting some protocol header info.
        ProtocolHandler* handler = ProtocolHandlerFactory.create(header);
        if (handler)
            handler->read_write(client);
        else
            log("unknown protocol");
    }
};
To scale better, you can use a thread pool, instead of spawning a new thread for each client. There are many free thread pool implementations for C++.
while (server_running)
{
    client = server.accept();
    thread_pool->submit(client);
}
The server could be improved further by using some framework that implements the reactor pattern. They use select or poll functions under the hood. You can use these functions directly. But for a production system it is better to use an existing reactor framework. ACE is one of the most widely known C++ toolkits for developing highly scalable concurrent applications.
Different protocols are generally serviced on different ports. However, you could service both types of clients over the same port by negotiating the protocol to be used. This can be as simple as the client sending either HELO or EHLO to request one or another kind of service.
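A sketch of that negotiation on a single port (Connection, read_line and the two handlers are placeholders for whatever the server already provides):

#include <string>

void dispatch_connection(Connection& client)
{
    // The client's first line announces which kind of service it wants.
    std::string greeting = read_line(client);

    if (greeting == "HELO")
        handle_basic_client(client);      // first type of client
    else if (greeting == "EHLO")
        handle_extended_client(client);   // second type of client
    else
        client.close();                   // unknown protocol
}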