I'm new to Wt 3 (version 3.3.9, because the whole project uses it). I've run into a problem and I'm looking for a solution.
I want to make a multithreaded Wt::Http::Client. From the documentation I gathered that using a Wt::WIOService with a configured thread count should do what I need, but I ran into the problem of recognizing which request a handled response belongs to.
Multithreading using Wt::WIOService.
Wt::WIOService io_service;
io_service.setThreadCount(10);
io_service.start();
//
MyClass my_http_client(io_service);
my_http_client.Work();
//
io_service.stop();
In Work() there is a loop that reads a queue of requests and sends them.
For a single thread I was using the following piece of code:
In the constructor of my class, which derives from Wt::Http::Client:
done().connect(boost::bind(&MyClass::HandleHttpResponse, this, _1, _2));
Handle method:
void MyClass::HandleHttpResponse(boost::system::error_code err, const Wt::Http::Message& response) {
std::unique_lock<std::mutex> lock(mutex_);
// response to inner format
// then all data goes to another class.
}
But when using multiple threads I need to match each response exactly to its request. I may be misunderstanding the Wt documentation.
Can you help me solve this problem?
The intended use of Wt::Http::Client is to create a new instance for each request.
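For illustration, a minimal sketch of that approach could look like the following. RequestJob and request_id are made-up names, not Wt API, and the sketch assumes the Wt::Http::Client constructor overload that takes a WIOService; the point is that each request owns its own client, so the handler bound to done() already knows which request the response belongs to.
#include <Wt/Http/Client>
#include <Wt/Http/Message>
#include <Wt/WIOService>
#include <boost/bind.hpp>
#include <iostream>
#include <string>

// Sketch only: one Wt::Http::Client per request.
class RequestJob
{
public:
    RequestJob(Wt::WIOService& io_service, int request_id)
        : client_(io_service), request_id_(request_id)
    {
        client_.done().connect(
            boost::bind(&RequestJob::handleDone, this, _1, _2));
    }

    void start(const std::string& url)
    {
        client_.get(url);
    }

private:
    void handleDone(boost::system::error_code err,
                    const Wt::Http::Message& response)
    {
        // request_id_ tells us exactly which request this response answers.
        if (!err)
            std::cerr << "request " << request_id_
                      << " finished with status " << response.status() << std::endl;
        // hand (request_id_, response) over to the rest of the program here
    }

    Wt::Http::Client client_;
    int request_id_;
};
Each RequestJob has to stay alive until its done() signal fires, so in practice Work() would keep the jobs in a container (or behind shared pointers) until their handlers have run.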
I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure I have only one instance of my "workflow" running at a time.
I am working in C++.
This is my workflow:
I read rows off of a Postgres database.
If the table has any records, I want to do these instructions:
Read the records and transform them to JSON
Send the JSON document to a remote Web service
Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
I delete the successfully saved records
I log the unsuccessful records (there's another process that consumes the logs and so my work is done).
I want to perform all of this using a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step 1 gets called multiple times, the additional calls basically get "dropped" if a run is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method, and the next line checks the flag.
// MyWorkflow.cpp
std::mutex myMutex;
int inFlight = 0;
void process() {
std::lock_guard<std::mutex> guard(myMutex);
if (inFlight) {
return;
}
inFlight = 1;
std::vector<Widget> widgets = readFromMyTable();
std::string json = getJson(&widgets);
... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use Curl to perform the HTTP request, but Curl works asynchronously. And so if I make the asynchronous HTTP call via Curl, my process() function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp:
void markCompletion() {
std::lock_guard<std::mutex> guard(myMutex);
inFlight = 0; // Reset the inflight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need to ensure I have proper exception handling and always call markCompletion().
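One way to guarantee the flag is always reset, even if an exception escapes before markCompletion() is reached, is an RAII "completion token": a shared_ptr whose deleter clears the flag, so whoever destroys the last copy (normal completion, early return, or stack unwinding) resets it. This is only a sketch; CompletionToken and tryBegin() are made-up names, and the Curl part is left as a comment.
#include <memory>
#include <mutex>

std::mutex myMutex;
bool inFlight = false;

// The token's only job is to run its deleter when the last copy dies.
typedef std::shared_ptr<void> CompletionToken;

CompletionToken tryBegin()
{
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight)
        return CompletionToken();            // empty token: a run is already in flight
    inFlight = true;
    // The stored pointer is irrelevant; only the custom deleter matters.
    return CompletionToken(static_cast<void*>(&inFlight), [](void*) {
        std::lock_guard<std::mutex> guard(myMutex);
        inFlight = false;                    // runs on every exit path
    });
}

void process()
{
    CompletionToken token = tryBegin();
    if (!token)
        return;                              // drop the extra call
    // Build the JSON here, then start the asynchronous Curl transfer and
    // capture `token` in its completion callback. The flag is cleared when
    // the last copy of the token is released -- even if an exception is
    // thrown before or inside the callback.
}
If you keep the explicit markCompletion() approach instead, the same idea applies: wrap the reset in an object with a destructor rather than relying on every code path remembering to call it.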
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.
I'm currently working on an async REST client using boost::asio::io_service.
I am trying to make the client a kind of service for a bigger program.
The idea is that the client will execute async HTTP requests to a REST API, independently of the thread running the main program. So inside the client there will be another thread waiting for requests to send.
To pass the requests to the client I am using an io_service and an io_service::work initialized with that io_service. I almost reused the example given in this tutorial: logger_service.hpp.
My problem is that in the example, when they post work to the service, the handler that gets called is a simple function. In my case I am making async calls like this
(I have done what is necessary to set up all the instances of the following objects, and some more, so that the network connection can be established):
boost::asio::io_service io_service_;
boost::asio::io_service::work work_(io_service_); // prevents io_service::run() from returning when there is no more work to do
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_(io_service_);
In the main program I am doing the following calls:
client.Connect();
...
client.Send();
client.Send();
...
Some of the client's pseudo code:
void MyClass::Send()
{
...
io_service_.post(boost::bind(&MyClass::AsyncSend, this));
...
}
void MyClass::AsyncSend()
{
...
boost::asio::async_write(socket_, streamOutBuffer, boost::bind(&MyClass::handle_send, this));
...
}
void MyClass::handle_send()
{
boost::asio::async_read(socket_, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}
void MyClass::handle_read()
{
// ....treatment for the received data...
if(allDataIsReceived)
FireAnEvent(ReceivedData);
else
boost::asio::async_read(socket_, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}
As described in the documentation, the post() method requests the io_service to invoke the given handler and returns immediately. My question is: will the nested handlers, for example handle_send inside AsyncSend, be called just afterwards (when the HTTP response is ready) when post() is used? Or will the handlers be called in an order different from the one defined by the order of the post() calls?
I am asking because when I call client->Send() only once, the client seems to work fine. But when I make two consecutive calls, as in the example above, the client cannot finish the first call before it starts executing the second one, and after some chaotic execution both operations fail in the end.
Is there any way to do what I'm describing, i.e. execute the whole async chain before the execution of another one starts?
I hope I am clear enough with my description :)
Hello Blacktempel,
Thank you for the comment and the idea; however, I am working on a project which demands using asynchronous calls.
In fact, as I am a newbie with Boost, my question and the example I gave weren't right in the part about the handle_read function. I have now added a few lines to the example to make clearer what situation I am (was) in.
In fact, many examples, maybe all of them, that treat the theme of how to create an async client are very basic... They just show how to chain the different handlers, and the data treatment when handle_read is called is always something like "print some data on the screen" inside that same read handler. Which, I think, is completely wrong when compared to real-world problems!
No one will just print data and finish the execution of their program...! Usually, once the data is received, there is further treatment that has to start, for example FireAnEvent(). Influenced by the bad examples, I was calling FireAnEvent inside the read handler, which is obviously completely wrong! It is bad because, done like that, handle_read might never exit, or exit too late. If this handler does not finish, the io_service loop will not finish either. And if your further treatment asks the async client to do something again, this will start/restart (I am not sure about the details) the io_service loop. In my case I was making several calls to the async client in this way. In the end I saw that the io_service was always started but never ended. Even after the whole treatment was finished, I never saw the io_service stop.
So finally I let my async client fill a variable with the received data inside handle_read, instead of directly calling another function like FireAnEvent. And I moved the call to that function (FireAnEvent) to just after io_service.run(). It works because after run() returns I know that the loop is completely finished!
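A rough sketch of that arrangement (all names are illustrative, and there is deliberately no io_service::work object here, because run() is supposed to return once the chain is finished):
// The read handler only stores data and re-arms the read; it never starts
// further treatment itself.
void MyClass::handle_read(const boost::system::error_code& error)
{
    if (!error) {
        // append the bytes from streamInBuffer to received_data_ here
        if (!allDataIsReceived())
            boost::asio::async_read(socket_, streamInBuffer,
                boost::bind(&MyClass::handle_read, this,
                            boost::asio::placeholders::error));
        // note: no FireAnEvent() here -- the handler simply returns
    }
}

void MyClass::Execute()
{
    io_service_.reset();            // allow run() to be called again for a new chain
    Send();                         // queues the first async operation
    io_service_.run();              // blocks until the whole async chain is done
    FireAnEvent(received_data_);    // further treatment happens outside the loop
}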
I hope my answer will help people :)
I currently have a very simple boost::asio server that sends a status update upon connecting (using google proto buffers):
try
{
boost::asio::io_service io_service;
tcp::acceptor acceptor(io_service,tcp::endpoint(tcp::v4(), 13));
for (;;)
{
tcp::socket socket(io_service);
acceptor.accept(socket);
...
std::stringstream message;
protoMsg.SerializeToOstream(&message);
boost::system::error_code ignored_error;
boost::asio::write(socket, boost::asio::buffer(message.str()), ignored_error);
}
}
catch (std::exception& e) { }
I would like to extend it to first read after accepting a new connection, check what request was received, and send different messages back depending on this message. I'd also like to keep the TCP connection open so the client doesn't have to re-connect, and would like to handle multiple clients (not many, maybe 2 or 3).
I had a look at a few examples of Boost.Asio, namely the async TCP time server and the chat server, but both are a bit over my head, tbh. I don't even understand whether I need an async server. I guess I could just do a read after acceptor.accept(socket), but then I wouldn't keep listening for further requests. And if I go into a loop, I guess that would mean I could only handle one client. So I guess that means I have to go async? Is there maybe a simpler example that isn't 250 lines of code? Or do I just have to bite my way through those examples? Thanks
The examples you mention from the Boost.Asio documentation are actually pretty good for seeing how things work. You're right that at first it might look a bit difficult to understand, especially if you're new to these concepts. However, I would recommend that you start with the chat server example and get it built on your machine. That will allow you to look into things more closely and start changing things in order to learn how it works. Let me guide you through a few things I find important to get started.
From your description of what you want to do, it seems that the chat server gives you a good starting point, as it already has pieces similar to what you need. Having the server asynchronous is what you want, as you can then quite easily handle multiple clients with a single thread. Nothing too complicated from the start.
Simplified, asynchronous in this case means that your server works off a queue, taking a handler (task) and executing it. If there is nothing in the queue, it just waits for something to be put on it. In your case that means it could be a connect from a client, a new read of a message from a client, or something like this. In order for this to work, each handler (the function handling the reaction to a particular event) needs to be set up.
Let me explain a bit using code from the chat server example.
In the server source file, you see the chat_server class which calls start_accept in the constructor. Here the accept handler gets set up.
void start_accept()
{
chat_session_ptr new_session(new chat_session(io_service_, room_)); // 1
acceptor_.async_accept(new_session->socket(), // 2
boost::bind(&chat_server::handle_accept, this, new_session, // 3
boost::asio::placeholders::error)); // 4
}
Line 1: A chat_session object is created which represents a session between one client and the server. A session is created for the accept (no client has connected yet).
Line 2: An asynchronous accept for the socket...
Line 3: ...bound to call chat_server::handle_accept when it happens. The session is passed along to be used by the first client which connects.
Now, if we look at the handle_accept we see that upon client connect, start is called for the session (this just starts stuff between the server and this client). Lastly a new accept is put outstanding in case other clients want to connect as well.
void handle_accept(chat_session_ptr session,
const boost::system::error_code& error)
{
if (!error)
{
session->start();
}
start_accept();
}
This is what you want to have as well. An outstanding accept for incoming connections. And if multiple clients can connect, there should always be one of these outstanding so the server can handle the accept.
How the server and the client(s) interact is all in the session and you could follow the same design and modify this to do what you want. You mention that the server needs to look at what is sent and do different things. Take a look at chat_session and the start function which was called by the server in handle_accept.
void start()
{
room_.join(shared_from_this());
boost::asio::async_read(socket_,
boost::asio::buffer(read_msg_.data(), chat_message::header_length),
boost::bind(
&chat_session::handle_read_header, shared_from_this(),
boost::asio::placeholders::error));
}
What is important here is the call to boost::asio::async_read. This is what you want too. This puts an outstanding read on the socket, so the server can read what the client sends. There is a handler (function) which is bound to this event chat_session::handle_read_header. This will be called whenever the server reads something on the socket. In this handler function you could start putting your specific code to determine what to do if a specific message is sent and so on.
What is important to know is that whenever calling these asynchronous boost::asio functions things will not happen within that call (i.e. the socket is not read if you call the function read). This is the asynchronous aspect. You just kind of register a handler for something and your code is called back when this happens. Hence, when this read is called it will immediately return and you're back in the handle_accept for the server (if you follow how things get called). And if you remember there we also call start_accept to set up another asynchronous accept. At this point you have two outstanding handlers waiting for either another client to connect or the first client to send something. Depending on what happens first, that specific handler will be called.
Also, what is important to understand is that whenever a handler is run, it runs uninterrupted until everything it needs to do has been done. Other handlers have to wait, even if there are outstanding events which would trigger them.
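To make that run-to-completion behaviour concrete, here is a tiny standalone sketch (not taken from the chat example): both handlers are queued before run() is called, but the second one cannot start until the first one has returned.
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    boost::asio::io_service io_service;

    io_service.post([] {
        std::cout << "first handler starts" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(2)); // pretend to work
        std::cout << "first handler ends" << std::endl;
    });

    io_service.post([] {
        // Runs only after the first handler has returned, even though it was
        // queued long before that.
        std::cout << "second handler" << std::endl;
    });

    io_service.run();   // single thread: handlers execute strictly one at a time
    return 0;
}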
Finally, in order to run the server you'll need the io_service which is a central concept in Asio.
io_service.run();
This is one line you see in the main function. This just says that the thread (only one in the example) should run the io_service, which is the queue where handlers get enqueued when there is work to be done. When nothing, the io_service just waits (blocking the main thread there of course).
I hope this helps you get started with what you want to do. There is a lot of stuff you can do and things to learn. I find it a great piece of software! Good luck!
In case anyone else wants to do this, here is the minimum to get the above going (similar to the tutorials, but a bit shorter and a bit different):
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

using boost::asio::ip::tcp;

class Session : public boost::enable_shared_from_this<Session>
{
tcp::socket socket;
char buf[1000];
public:
Session(boost::asio::io_service& io_service)
: socket(io_service) { }
tcp::socket& SocketRef() { return socket; }
void Read() {
boost::asio::async_read(socket, boost::asio::buffer(buf),
    boost::asio::transfer_at_least(1),
    boost::bind(&Session::Handle_Read, shared_from_this(),
                boost::asio::placeholders::error));
}
void Handle_Read(const boost::system::error_code& error) {
if (!error)
{
//read from buffer and handle requests
//if you want to write sth, you can do it sync. here: e.g. boost::asio::write(socket, ..., ignored_error);
Read();
}
}
};
typedef boost::shared_ptr<Session> SessionPtr;
class Server
{
boost::asio::io_service io_service;
tcp::acceptor acceptor;
public:
Server() : acceptor(io_service,tcp::endpoint(tcp::v4(), 13)) { }
~Server() { }
void operator()() { StartAccept(); io_service.run(); }
void StartAccept() {
SessionPtr session_ptr(new Session(io_service));
acceptor.async_accept(session_ptr->SocketRef(),
    boost::bind(&Server::HandleAccept, this, session_ptr,
                boost::asio::placeholders::error));
}
void HandleAccept(SessionPtr session,const boost::system::error_code& error) {
if (!error)
session->Read();
StartAccept();
}
};
From what I gathered through trial and error and reading: I kick things off in operator()() so you can have it run in the background on an additional thread (see the usage sketch below). You run one Server instance. To handle multiple clients, you need an extra class; I called this the session class. For Asio to clean up dead sessions, you need a shared pointer, as pointed out above. Otherwise the code should get you started.
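A hypothetical way to drive it from main() is below; note that the Server above exposes no stop function, so a real program would probably add one that calls io_service.stop() before joining.
#include <boost/thread.hpp>

int main()
{
    Server server;
    boost::thread server_thread(boost::ref(server)); // invokes Server::operator()()

    // ... the rest of the program runs here while the server accepts clients ...

    server_thread.join(); // blocks until the io_service is stopped or runs out of work
    return 0;
}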
I'm using Boost.Asio for network operations; they have to (and actually can, since there are no complex data structures or anything) remain pretty low level, because I can't afford the luxury of serialization overhead (and the libraries I found that did offer good enough performance seemed badly suited to my case).
The problem is with an async write I'm doing from the client (in Qt, but that should probably be irrelevant here). The callback specified in the async_write doesn't get called, ever, and I'm at a complete loss as to why. The code is:
void SpikingMatrixClient::addMatrix() {
std::cout << "entered add matrix" << std::endl;
int action = protocol::Actions::AddMatrix;
int matrixSize = this->ui->editNetworkSize->text().toInt();
std::ostream out(&buf);
out.write(reinterpret_cast<const char*>(&action), sizeof(action));
out.write(reinterpret_cast<const char*>(&matrixSize), sizeof(matrixSize));
boost::asio::async_write(*connection.socket(), buf.data(),
boost::bind(&SpikingMatrixClient::onAddMatrix, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred));
}
which performs the first write. The callback is:
void SpikingMatrixClient::onAddMatrix(const boost::system::error_code& error, size_t bytes_transferred) {
std::cout << "entered onAddMatrix" << std::endl;
if (!error) {
buf.consume(bytes_transferred);
requestMatrixList();
} else {
QString message = QString::fromStdString(error.message());
this->ui->statusBar->showMessage(message, 15000);
}
}
The callback never gets called, even though the server receives all the data. Can anyone think of any reason why it might be doing that?
P.S. There was a wrapper for that connection, and yes there will probably be one again. Ditched it a day or two ago because I couldn't find the problem with this callback.
As suggested, posting a solution I found to be the most suitable (at least for now).
The client application is being written in Qt, and I need the IO to be async. For the most part, the client receives calculation data from the server application and has to render various graphical representations of it.
Now, there's some key aspects to consider:
The GUI has to be responsive, it should not be blocked by the IO.
The client can be connected / disconnected.
The traffic is pretty intense, data gets sent / refreshed to the client every few secs and it has to remain responsive (as per item 1.).
As per the Boost.Asio documentation,
Multiple threads may call io_service::run() to set up a pool of threads from which completion handlers may be invoked.
Note that all threads that have joined an io_service's pool are considered equivalent, and the io_service may distribute work across them in an arbitrary fashion.
Note that io_service.run() blocks until the io_service runs out of work.
With this in mind, the clear solution is to run io_service.run() from another thread. The relevant code snippets are
void SpikingMatrixClient::connect() {
Ui::ConnectDialog ui;
QDialog *dialog = new QDialog;
ui.setupUi(dialog);
if (dialog->exec()) {
QString host = ui.lineEditHost->text();
QString port = ui.lineEditPort->text();
io = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
connection = TcpConnection::create(io);
boost::system::error_code error = connection->connect(host, port);
if (!error) {
work = boost::shared_ptr<boost::asio::io_service::work>(new boost::asio::io_service::work(*io));
io_threads.create_thread(boost::bind(&SpikingMatrixClient::runIo, this, io));
}
QString message = QString::fromStdString(error.message());
this->ui->statusBar->showMessage(message, 15000);
}
}
for connecting & starting IO, where:
work is a private boost::shared_ptr to the boost::asio::io_service::work object it was passed,
io is a private boost::shared_ptr to a boost::asio::io_service,
connection is a boost::shared_ptr to my connection wrapper class, and the connect() call uses a resolver etc. to connect the socket; there are plenty of examples of that around,
and io_threads is a private boost::thread_group.
Surely it could be shortened with some typedefs if needed.
TcpConnection is my own connection wrapper implementation, which sort of lacks functionality for now, and I suppose I could move the whole threading thing into it when it gets reinstated. This snippet should be enough to get the idea anyway...
The disconnecting part goes like this:
void SpikingMatrixClient::disconnect() {
work.reset();
io_threads.join_all();
boost::system::error_code error = connection->disconnect();
if (!error) {
connection.reset();
}
QString message = QString::fromStdString(error.message());
this->ui->statusBar->showMessage(message, 15000);
}
the work object is destroyed, so that the io_service can run out of work eventually,
the threads are joined, meaning that all work gets finished before disconnecting, thus data shouldn't get corrupted,
the disconnect() calls shutdown() and close() on the socket behind the scenes, and if there's no error, destroys the connection pointer.
Note that there's no error handling for a failure while disconnecting in this snippet, but it could easily be added, either by checking the error code (which feels more C-like) or by throwing from disconnect() if the error code it gets back represents an error.
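For example, the throwing variant can be as small as wrapping the error code in a boost::system::system_error. The following is only a sketch of what such a member could look like, not the actual TcpConnection code:
// Hypothetical sketch, not the real TcpConnection implementation.
void TcpConnection::disconnectOrThrow()
{
    boost::system::error_code error;
    socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error);
    if (!error)
        socket_.close(error);
    if (error)
        throw boost::system::system_error(error); // derives from std::runtime_error
}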
I encountered a similar problem (callbacks not fired), but the circumstances are different from this question (the io_service had jobs but still would not fire the handlers). I will post this anyway and maybe it will help someone.
In my program, I set up an async_connect() then followed by io_service.run(), which blocks as expected.
async_connect() goes to on_connect_handler() as expected, which in turn fires async_write().
on_write_complete_handler() does not fire, even though the other end of the connection has received all the data and has even sent back a response.
I discovered that it was caused by me placing program logic in on_connect_handler(). Specifically, after the connection was established and after I had called async_write(), I entered an infinite loop to perform arbitrary logic, never allowing on_connect_handler() to exit. I assume this prevents the io_service from executing other handlers, even when their conditions are met, because it is stuck there. (I had many misconceptions, and thought that the io_service would automagically spawn threads for each async_x() call.)
Hope that helps.
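For anyone hitting the same symptom, the fix boils down to keeping the asio handlers short and putting the long-running program logic somewhere else, for example on the main thread while a dedicated thread runs io_service::run(). A rough sketch, with made-up names:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

// Small wrapper so the worker thread has an unambiguous function to run.
void run_io(boost::asio::io_service* io)
{
    io->run(); // returns when the io_service is stopped or runs out of work
}

void on_connect(const boost::system::error_code& error)
{
    if (!error) {
        // start async_write()/async_read() here, then return immediately;
        // no loops or blocking work inside the handler
    }
}

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service); // keeps run() from returning early

    boost::thread io_thread(boost::bind(&run_io, &io_service));

    // ... the arbitrary long-running logic lives here, not inside a handler ...

    io_service.stop();  // tell run() to return
    io_thread.join();
    return 0;
}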
I am building an HTTP client based on the HTTP server example given on the Boost website. Now, the difference between that code and mine is that the example uses the server constructor to start the asynchronous operations. This makes sense, since a server is supposed to listen all the time. In my client, on the other hand, I want to first construct the object and then have a send() function that starts off by connecting to the endpoint, later sends a request, and finally listens for the reply. That makes sense too, doesn't it?
When I create my object (client) I do it in the same manner as in the server example (winmain.cpp). It looks like this:
client c("www.boost.org);
c.start(); // starts the io_service in a thread
c.send(msg_);
The relevant parts of the code are these:
void enabler::send(common::geomessage& msg_)
{
new_connection_.reset(new connection(io_service_,
connection_manager_,
message_manager_, msg_
));
boost::asio::ip::tcp::resolver resolver(io_service_);
boost::asio::ip::tcp::resolver::query query(host_address, "http");
resolver.async_resolve(query, boost::bind(
&enabler::handle_resolve,
boost::ref(*this),
boost::asio::placeholders::error,
boost::asio::placeholders::iterator
));
}
void enabler::run()
{
io_service_.run();
}
The problem with this is that the program gets stuck somewhere here. The last thing that prints is "Resolving host"; after that the program ends. I don't know why, because the io_service should block until all async operations have returned to their callbacks. If, however, I change the order in which I call the functions, it works: if I call run() just after the call to async_resolve(), and also omit calling start() in my main program, it works!
In this scenario, io_service blocks as it should and I can see that I get a response from the server.
It has something to do with the fact that I call run() from inside the same class where I call async_resolve(). Could this be true? Then I suppose I need to pass a reference from the main program when I call run(); is that how it should be done?
I have struggled with getting io_service::work to work, but the program just gets stuck, and yeah, problems similar to the one above occur. So it does not really help.
So, what can I do to get this right? As I said earlier, what I want is to be able to create the client object and have the io_service running all the time in a separate thread inside the client class, and secondly to have a function, send(), that sends requests to the server.
You need to start at least some work before calling run(), as it returns when there is no more work to do.
If you call it before you start the async resolve, it won't have any work so it returns.
If you don't expect to have some work at all times, to keep the io_service busy, you should construct an io_service::work object in some scope which can be exited without io_service::run() having to return first. If you're running the io_service in a separate thread, I would imagine you wouldn't have a problem with that.
It's sort of hard to know what you're trying to do with those snippets of code. I imagine that you'd want to do something along these lines:
struct client
{
boost::asio::io_service io_service_;
boost::asio::io_service::work* w_;
pthread_t main_thread_;
client() : w_(new boost::asio::io_service::work(io_service_)) { /* ... */ }
void start() { pthread_create(&main_thread_, 0, main_thread, this); }
static void* main_thread(void* arg) { ((client*)arg)->io_service_.run(); return 0; }
// release the work object and allow run() to return
void stop() { delete w_; w_ = 0; pthread_join(main_thread_, 0); }
};
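Putting the sketch to use might then look like this; whatever send-style members the real client exposes are not shown above, so they are only hinted at in a comment.
int main()
{
    client c;      // the work object keeps io_service_.run() from returning
    c.start();     // io_service_.run() now spins on main_thread_

    // ... post asynchronous operations against c.io_service_ from here ...

    c.stop();      // deletes the work object, lets run() return, joins the thread
    return 0;
}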