In the async unary RPC example provided in the gRPC GitHub repository (client and server), is there any way to detect that the client has closed the connection?
For example, in server.cc file:
std::string prefix("Hello ");
reply_.set_message(prefix + request_.name());

// And we are done! Let the gRPC runtime know we've finished, using the
// memory address of this instance as the uniquely identifying tag for
// the event.
status_ = FINISH;

int p = 0, i = 0;
while (i++ < 1000000000) { // some dummy work
    p = p + 10;
}

responder_.Finish(reply_, Status::OK, this);
With this dummy work before sending the response back, the server takes a few seconds to reply. If we kill the client in the meantime (for example with Ctrl+C), the server does not report any error. It simply calls Finish and then deallocates the object as if the Finish had succeeded.
Is there any async feature (handler function) on the server side that notifies us when the client has closed the connection or has terminated?
Thank You!
Unfortunately, no.
But the gRPC team is now working hard on adding a callback mechanism to the C++ implementation. As I understand it, it will work the same way as in the Java implementation ( https://youtu.be/5tmPvSe7xXQ?t=1843 ).
You can see how to work with the upcoming API in these examples: client_callback.cc and server_callback.cc.
The point of interest for the server side is the ServerBidiReactor class from the ::grpc::experimental namespace. It has OnDone and OnCancel notification methods that may help you.
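For illustration, here is a minimal sketch of such a reactor (EchoRequest and EchoResponse are placeholder message types here, and the experimental API may still change):

class EchoReactor
    : public grpc::experimental::ServerBidiReactor<EchoRequest, EchoResponse> {
public:
    EchoReactor() { StartRead(&request_); }

    void OnReadDone(bool ok) override {
        if (!ok) {  // client finished writing, or the stream broke
            Finish(grpc::Status::OK);
            return;
        }
        response_.set_message(request_.message());
        StartWrite(&response_);
    }

    void OnWriteDone(bool ok) override {
        if (ok) StartRead(&request_);
    }

    void OnCancel() override {
        // Called when the client cancels the RPC, disconnects, or the
        // deadline expires - the notification the question asks for.
    }

    void OnDone() override { delete this; }  // all activity on the RPC has ended

private:
    EchoRequest request_;
    EchoResponse response_;
};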
Another interesting point is that you can store a pointer to the connection object and send notifications to the client at any time.
But it still has many issues, and I don't recommend using this API in production code.
You can track the current progress of the C++ callback implementation here: https://github.com/grpc/grpc/projects/12#card-12554506
I am using libwidevinecdm.so from Chrome to handle DRM-protected data. I am currently successfully setting the Widevine server certificate I get from the license server. I can also create a session with the PSSH box of the media I'm trying to decode. So far everything is successful (all promises resolve fine).
(session is created like this: _cdm->CreateSessionAndGenerateRequest(promise_id, cdm::SessionType::kTemporary, cdm::InitDataType::kCenc, pssh_box.data(), static_cast<uint32_t>(pssh_box.size()));)
I am then getting a session message of type kLicenseRequest which I am forwarding to the respective license server. The license server responds with a valid response and the same amount of data as I can see in the browser when using Chrome. I am then passing this to my session like this:
_cdm->UpdateSession(promise_id, session_id.data(), static_cast<uint32_t>(session_id.size()),
                    license_response.data(), static_cast<uint32_t>(license_response.size()));
The problem now is that this promise never resolves. It keeps posting the kLicenseRequest message over and over again to my session without ever returning. Does this mean my response is wrong? Or is this something else?
Br
Yanick
The issue is caused by the fact that everything in CreateSessionAndGenerateRequest is done synchronously - that means by the time CreateSessionAndGenerateRequest returns, your promise will already be resolved.
The CDM emits the kLicenseRequest inside CreateSessionAndGenerateRequest, and it doesn't do so in a "fire & forget" fashion; the function waits until you have returned from cdm::Host_10::OnSessionMessage. Since my implementation of OnSessionMessage made a synchronous HTTP request to the license server before - also synchronously - calling UpdateSession, the entire chain ended up blocking.
So ultimately I was calling UpdateSession while still inside CreateSessionAndGenerateRequest, and I assume the CDM cannot handle this and reacts by creating a new session with the given ID and generating a request again, which of course triggered another UpdateSession, and so on.
Ultimately, the simplest way to break the cycle was to make something asynchronous. I decided to launch a separate thread when receiving kLicenseRequest, wait a few milliseconds to make sure CreateSessionAndGenerateRequest has time to finish (not sure whether that is really required), and then issue the request to the license server.
The only change I had to make was adding the surrounding std::thread:
void WidevineSession::forward_license_request(const std::vector<uint8_t> &data) {
    std::thread{
        [=]() {
            // give CreateSessionAndGenerateRequest time to return first
            std::this_thread::sleep_for(std::chrono::milliseconds{100});

            net::HttpRequest request{"POST", _license_server_url};
            request.add_header("Authorization", fmt::format("Bearer {}", _access_token))
                   .byte_body(data);
            const auto response = _client.execute(request);
            if (response.status_code() != 200) {
                log->error("Widevine license request not accepted by license server: {} {} ({})",
                           response.status_code(), response.status_text(),
                           utils::bytes_to_utf8(response.body()));
                throw std::runtime_error{"Error requesting widevine license"};
            }
            log->info("Successfully requested widevine license from license server");
            _adapter->update_session(this, _session_id, response.body());
        }
    }.detach();
}
I am developing a server and client project with Qt. I want to print the connection status for the users. For this purpose I use state(), like:
socketState = mySocket.state();
if (socketState == 3) { // 3 == QAbstractSocket::ConnectedState
    Print("we have connected");
}
However, it does not work when the server queues new connections. To make it clear: my client's state is 3 even when the server has paused accepting new connections:
// server side:
myServer->pauseAccepting();

// client side:
connectToHost();
socketState = mySocket.state();
Now socketState is 3 instead of 0 or some special value for a queued state.
To sum up: how can I inform the client that it is in the queue? Is there anything like state() that returns a value for the queued state?
Finally, I found the answer.
A refused client that goes into the queue (managed by the OS) is no different from any other client. Therefore, we can create a dedicated socket for it (which means it is NOT in the queue anymore) and start communicating (inform and close).
for example:
QTcpSocket clientSocket;
QTcpSocket queueSocket;
In my project, the server first sends a message to the queued client.
The message is:
"We can NOT accept a new client because we can NOT handle more than one", so the client knows what the problem is.
Then the server closes the queued client's socket.
We SHOULD close it because we do NOT want to handle lots of clients.
However, the main point is that we can work with clients in the queue and decide how to deal with them. I prefer to accept them just to inform them and then close the connection:
if (we have NOT a client) {
    work with clientSocket
}
else {
    queueSocket.write("We can NOT accept a new client because we can NOT handle more than one")
    close queueSocket
}
I hope it helps those who want to inform rejected clients.
I have multiple processes working together as a system. One of the processes acts as the main process. When the system is shutting down, every process needs to send a notification (via RabbitMQ) to the main process and then exit. The program is written in C++ and I am using the AMQP-CPP library.
The problem is that sometimes the notification is not published successfully. I suspect exiting too soon is the cause of the problem, as the AMQP-CPP library has no chance to send the message out before its connection is closed.
The documentation of AMQP-CPP says:
Published messages are normally not confirmed by the server, and RabbitMQ will not send a report back to inform you whether the message was successfully published or not. Therefore the publish method does not return a Deferred object.
As long as no error is reported via the Channel::onError() method, you can safely assume that your messages were delivered.
This can of course be a problem when you are publishing many messages. If you get an error halfway through there is no way to know for sure how many messages made it to the broker and how many should be republished. If this is important, you can wrap the publish commands inside a transaction. In this case, if an error occurs, the transaction is automatically rolled back by RabbitMQ and none of the messages are actually published.
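For concreteness, the transaction approach described above would look roughly like this (a sketch; channel and connection stand for an existing AMQP::TcpChannel and AMQP::TcpConnection, and the exchange name and routing key are placeholders):

// Wrap the final publish in a transaction so the broker confirms the commit;
// only close the connection once the message has definitely arrived.
channel.startTransaction();
channel.publish("my-exchange", "shutdown.notifications", "terminating");
channel.commitTransaction()
    .onSuccess([&]() {
        connection.close();  // message is on the broker; safe to shut down
    })
    .onError([&](const char *message) {
        std::cerr << "commit failed: " << message << std::endl;
        connection.close();
    });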
Without a confirmation from the RabbitMQ server, it is hard to decide when it is safe to exit the process. Furthermore, using a transaction sounds like overkill for a single notification.
Could anyone suggest a simple solution for shutting down gracefully without losing the last notification?
It turns out that I can set up a callback when closing a channel, so I can safely close the connection once all channels are closed successfully. I am not entirely sure whether this guarantees that all outgoing messages are really published, but from my test results the problem seems to be solved.
class MyClass
{
    ...
    AMQP::TcpConnection m_tcpConnection;
    AMQP::TcpChannel m_channelA;
    AMQP::TcpChannel m_channelB;
    ...
};

void MyClass::stop(void)
{
    sendTerminateNotification();

    int remainChannel = 2;
    auto closeConnection = [&]() {
        --remainChannel;
        if (remainChannel == 0) {
            // close the connection when all channels are closed
            m_tcpConnection.close();
            ev::get_default_loop().break_loop();
        }
    };

    auto closeChannel = [&](AMQP::TcpChannel &channel) {
        channel.close()
            .onSuccess([&]() { closeConnection(); })
            .onError([&](const char *msg) {
                std::cout << "cannot close channel: " << msg << std::endl;
                // close the connection anyway
                closeConnection();
            });
    };

    closeChannel(m_channelA);
    closeChannel(m_channelB);
}
I am facing a problem using the Poco::HTTPServer. As described in the documentation of TCPServer:
After calling stop(), no new connections will be accepted and all
queued connections will be discarded. Already served connections,
however, will continue being served.
Every connection is executed in its own thread.
Although the destructor seems to be called successfully, the connection threads still exist and serve connections, which leads to segmentation faults.
I want to cancel all connections, so I use Poco::ThreadPool::defaultPool().stopAll(); in the destructor of my server class, which leads to the behaviour also described in the documentation of ThreadPool (it takes 10 seconds and objects are not deleted):
If a thread fails to stop within 10 seconds (due to a programming
error, for example), the underlying thread object will not be deleted
and this method will return anyway. This allows for a more or less
graceful shutdown in case of a misbehaving thread.
My question is: how do I accomplish the more graceful way? Is the programming error within the Poco library?
EDIT: I am using GNU/Linux (Ubuntu 10.04) with Eclipse + CDT as IDE; the target system is embedded Linux (kernel 2.6.9). On both systems I experienced the described behaviour.
The application I am working on is configured via a web interface. On upload of a new configuration, the server sends an event to main to restart.
Here's the outline:
main {
    while (true) {
        server = new Server(...);
        server->start();
        // wait for termination request
        server->stop();
        delete server;
    }
}
class Server {
    Poco::HTTPServer m_Server;

    Server(...)
        : m_Server(requestHandlerFactory, socket, params)
    {
    }

    ~Server() {
        [...]
        Poco::ThreadPool::defaultPool().stopAll(); // This takes 10 seconds!
        // Without the above line I get segmentation faults,
        // because connections are still being served.
    }

    start() { m_Server.start(); }
    stop() { m_Server.stop(); }
};
This is actually a bug in the implementation of the stopAll() method. The listening socket is shut down after the currently active connections are closed, which allows the server to accept new connections in between; these, in turn, will not be closed and keep running. A workaround is to call HTTPServer::stop() and then HTTPServer::stopAll(). I reported the bug upstream, including a proposed fix:
https://github.com/pocoproject/poco/issues/436
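In code, the workaround amounts to something like this (a sketch; m_Server is the Poco::HTTPServer member from the question):

// Close the listening socket first so no new connections can sneak in,
// then abort the connections that are currently being served.
m_Server.stop();        // stop accepting new connections
m_Server.stopAll(true); // abort currently served connections as well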
You should avoid using Poco::ThreadPool::defaultPool().stopAll(); since it doesn't give you control over which threads are stopped.
I suggest creating a Poco::ThreadPool specifically for your Poco::HTTPServer instance and stopping that pool's threads when your server is stopped.
With this, your code should look like this:
class Server {
    // The pool is declared before the server so it is constructed first
    // and destroyed last, since m_Server keeps a reference to it.
    Poco::ThreadPool m_threadPool;
    Poco::HTTPServer m_Server;

    Server(...)
        : m_Server(requestHandlerFactory, m_threadPool, socket, params)
    {
    }

    ~Server() {
    }

    start() { m_Server.start(); }
    stop() {
        m_Server.stop();        // stop accepting new connections
        m_threadPool.stopAll(); // stop and wait for the serving threads
    }
};
This answer may be too late for the poster, but since the question helped me solve my issue, I think it is good to post a solution here!
I have been using this code to display IMAP4 messages:
void DisplayMessageL( const TMsvId &aId )
{
    // 1. construct the client MTM
    TMsvEntry indexEntry;
    TMsvId serviceId;
    User::LeaveIfError( iMsvSession->GetEntry(aId, serviceId, indexEntry) );
    CBaseMtm* mtm = iClientReg->NewMtmL(indexEntry.iMtm);
    CleanupStack::PushL(mtm);

    // 2. construct the user interface MTM
    CBaseMtmUi* uiMtm = iUiReg->NewMtmUiL(*mtm);
    CleanupStack::PushL(uiMtm);

    // 3. display the message
    uiMtm->BaseMtm().SwitchCurrentEntryL(indexEntry.Id());
    CMsvOperationWait* waiter = CMsvOperationWait::NewLC();
    waiter->Start(); // we use a synchronous waiter
    CMsvOperation* op = uiMtm->OpenL(waiter->iStatus);
    CleanupStack::PushL(op);
    CActiveScheduler::Start();

    // 4. cleanup
    CleanupStack::PopAndDestroy(4); // op, waiter, uiMtm, mtm
}
However, when the user attempts to download a remote message (i.e. one of the emails previously not retrieved from the mail server) and then cancels the request, my code remains blocked and never receives the information that the action was cancelled.
My question is:
1. What is the workaround for the above, so the application is not stuck?
2. Can anyone provide a working example of an asynchronous call for opening remote messages that does not panic and crash the application?
Asynchronous calls for POP3, SMTP, and local IMAP4 messages work perfectly, but remote IMAP4 messages cause this issue.
I am testing these examples for S60 5th edition.
Thank you all in advance.
First of all, I would try removing CMsvOperationWait and dealing with the open request asynchronously - i.e. have an active object waiting for the CMsvOperation to complete.
CMsvOperationWait is nothing more than a convenience to make an asynchronous operation appear synchronous, and my suspicion is that this is the culprit - in the case of download-then-show-message, there are two asynchronous operations chained.
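For illustration, a minimal sketch of such an active object (COpenOperationWatcher is a hypothetical name; error handling and two-phase construction are elided):

// An active object that owns the CMsvOperation and is signalled when the
// open completes or is cancelled, instead of blocking in CMsvOperationWait.
class COpenOperationWatcher : public CActive
{
public:
    COpenOperationWatcher() : CActive(EPriorityStandard)
    {
        CActiveScheduler::Add(this);
    }

    ~COpenOperationWatcher()
    {
        Cancel();
        delete iOperation;
    }

    void StartL(CBaseMtmUi& aUiMtm)
    {
        iOperation = aUiMtm.OpenL(iStatus); // asynchronous open
        SetActive();
    }

private:
    void RunL()
    {
        // iStatus.Int() holds the result; KErrCancel means the user
        // cancelled the download, so the UI can simply return to the list.
    }

    void DoCancel()
    {
        if (iOperation)
        {
            iOperation->Cancel();
        }
    }

    CMsvOperation* iOperation; // zero-initialised by CBase::operator new
};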