I am trying to build an asynchronous gRPC C++ client that sends and receives streaming messages to and from a server using a ClientAsyncReaderWriter instance. The client and the server send messages to each other whenever they want. How can I check whether there is any message from the server?
The ClientAsyncReaderWriter instance has a bound completion queue. I tried checking the completion queue by calling the Next() and AsyncNext() functions to see if there is any event indicating a message from the server. However, the completion queue has no events even when there is a message from the server.
class AsyncClient {
public:
    AsyncClient(std::shared_ptr<grpc::Channel> channel) :
        stub_(MyService::NewStub(channel)),
        stream(stub_->AsyncStreamingRPC(&context, &cq, (void *)1))
    {}

    ~AsyncClient()
    {}

    void receiveFromServer()
    {
        StreamingResponse response;
        // 1. Check if there is any message
        // 2. Read the message
    }

private:
    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::shared_ptr<MyService::Stub> stub_;
    std::shared_ptr<grpc::ClientAsyncReaderWriter<StreamingRequest, StreamingResponse>> stream;
};
I need to implement steps 1 and 2 in the receiveFromServer() function.
Fortunately, I found a solution to my problem and the asynchronous bi-directional streaming in my project works as expected now.
It turned out that my understanding of the "completion queue" concept was incorrect.
This example was a great help for me!
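For anyone landing here later, below is a minimal sketch of how a read is typically driven through the completion queue, reusing the AsyncClient members from the question. This is an illustration of the concept, not the exact code from the linked example; the tag value is arbitrary.

void AsyncClient::receiveFromServer()
{
    StreamingResponse response;

    // 1. The completion queue only yields an event for an operation that was
    //    explicitly started, so initiate a read first and tag it.
    stream->Read(&response, (void *)2);

    // 2. Block until an outstanding operation completes. AsyncNext() with a
    //    deadline can be used instead to poll without blocking indefinitely.
    void *tag = nullptr;
    bool ok = false;
    if (cq.Next(&tag, &ok) && ok && tag == (void *)2) {
        // 'response' now holds the message sent by the server.
    }
}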
I am writing an application to consume messages from an SQS queue. I am able to successfully bind the queue and receive the messages. However, when I want to requeue a message, I use the following:

message.getHeaders().get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
        .acknowledge();

I also use this to requeue:

StaticMessageHeaderAccessor.getAcknowledgmentCallback(message).acknowledge(AcknowledgmentCallback.Status.REQUEUE);

But it is not successful.
I also tried PollableMessage, but I am unclear on how to implement it.
https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_overview_2
I have a Consumer like this:
public class DefaultChannel implements Channel, Consumer<Message<String>> {

    @Override
    public void accept(Message<String> message) {
        if ("success".equals(message.getPayload())) {
            message.getHeaders().get(AwsHeaders.ACKNOWLEDGMENT, QueueMessageAcknowledgment.class)
                    .acknowledge();
        } else {
            StaticMessageHeaderAccessor.getAcknowledgmentCallback(message).acknowledge(AcknowledgmentCallback.Status.REQUEUE);
        }
    }
}
I was able to requeue successfully by using the messageDeletionPolicy: ON_SUCCESS property and throwing an exception from the code.
I am trying to design a WebSocket server using the POCO libraries. I have implemented a simple class WebSocketRequestHandler: public HTTPRequestHandler which accepts a connection, performs tasks, and so on.
The code snippet is below:
class WebSocketRequestHandler: public HTTPRequestHandler
    // Handle a WebSocket connection.
{
public:
    WebSocketRequestHandler()
    {
    }

    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        Application& app = Application::instance();
        try
        {
            std::cout << "Waiting for connection.." << std::endl;
            WebSocket ws(request, response);
            ws.setReceiveTimeout(Poco::Timespan(10, 0, 0, 0, 0));
            app.logger().information("WebSocket connection established!");
            int flags;
            int n;
            do
            {
                std::cout << "Waiting for incoming frame" << std::endl;
                n = ws.receiveFrame(buffer, sizeof(buffer), flags);
                std::cout << "Frame received" << std::endl;
                // Parse the frame sequentially and so on..
            }
            while (n > 0 && (flags & WebSocket::FRAME_OP_BITMASK) != WebSocket::FRAME_OP_CLOSE);
        }
        catch (WebSocketException& exc)
        {
            app.logger().log(exc);
        }
    }

private:
    char buffer[1024];
};
I don't want sequential operation for the server; it should be asynchronous. I tried looking through the POCO resources but couldn't find anything related to it. So does POCO provide an async API? For example, pushing the incoming messages into a queue and having a separate thread handle them, while the main thread keeps receiving messages from clients?
Or maybe something like the boost asio API, registering a function via async_handle(socket, func*) and handling each message in a separate thread?
Or any other better solution?
There is SocketReactor for TCP, but there is no async HTTPServer; every HTTP request is handled in its own thread, which is obtained from a ThreadPool. While this works fine at a low request frequency, it will not scale. To scale it, you can use NotificationQueue for async notifications, or PollSet to react to sockets' readable/writeable states - just make sure the queue or pollset is long-lived; handlers are short-lived and created for every request, so you should create your processing facility in the HTTP request handler factory.
Note that PollSet has a Windows bug, which was fixed for the next release.
EDIT: to clarify further, strictly speaking, HTTPServer is not sequential, because every handler uses a (pre-created) thread; it is best to do as little work as possible in the handling function and "offload" the bulk of the workload elsewhere.
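As a rough sketch of the NotificationQueue idea (the class names FrameNotification and FrameProcessor are made up for the example): the handler only enqueues received frames, and a single long-lived worker thread drains the queue.

#include <Poco/Notification.h>
#include <Poco/NotificationQueue.h>
#include <Poco/Runnable.h>
#include <Poco/AutoPtr.h>
#include <string>

// Carries one received WebSocket frame from the handler thread to the worker.
class FrameNotification : public Poco::Notification
{
public:
    explicit FrameNotification(std::string frame) : _frame(std::move(frame)) {}
    const std::string& frame() const { return _frame; }
private:
    std::string _frame;
};

// Long-lived worker; start it once (e.g. from the request handler factory)
// with Poco::Thread::start() so it outlives the short-lived handlers.
class FrameProcessor : public Poco::Runnable
{
public:
    explicit FrameProcessor(Poco::NotificationQueue& queue) : _queue(queue) {}

    void run()
    {
        for (;;)
        {
            Poco::AutoPtr<Poco::Notification> pNf(_queue.waitDequeueNotification());
            if (!pNf) break; // wakeUpAll() was called, stop the worker
            if (FrameNotification* fn = dynamic_cast<FrameNotification*>(pNf.get()))
            {
                // Process fn->frame() here, off the receiving thread.
            }
        }
    }
private:
    Poco::NotificationQueue& _queue;
};

// Inside handleRequest(), after receiveFrame():
//     queue.enqueueNotification(new FrameNotification(std::string(buffer, n)));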
I have multiple processes working together as a system. One of the processes acts as the main process. When the system is shutting down, every process needs to send a notification (via RabbitMQ) to the main process and then exit. The program is written in C++ and I am using the AMQPCPP library.
The problem is that sometimes the notification is not published successfully. I suspect exiting too soon is the cause, as the AMQPCPP library has no chance to send the message out before its connection is closed.
The documentation of AMQPCPP says:
Published messages are normally not confirmed by the server, and the RabbitMQ will not send a report back to inform you whether the message was succesfully published or not. Therefore the publish method does not return a Deferred object.
As long as no error is reported via the Channel::onError() method, you can safely assume that your messages were delivered.
This can of course be a problem when you are publishing many messages. If you get an error halfway through there is no way to know for sure how many messages made it to the broker and how many should be republished. If this is important, you can wrap the publish commands inside a transaction. In this case, if an error occurs, the transaction is automatically rolled back by RabbitMQ and none of the messages are actually published.
Without a confirmation from the RabbitMQ server, it is hard to decide when it is safe to exit the process. Furthermore, using a transaction sounds like overkill for a notification.
Could anyone suggest a simple solution for a graceful shutdown without losing the last notification?
It turns out that I can set up a callback when closing the channel, so that I can safely close the connection once all channels are closed successfully. I am not entirely sure whether this guarantees that all outgoing messages are really published, but from the test results it seems that the problem is solved.
class MyClass
{
    ...
    AMQP::TcpConnection m_tcpConnection;
    AMQP::TcpChannel m_channelA;
    AMQP::TcpChannel m_channelB;
    ...
};
void MyClass::stop(void)
{
    sendTerminateNotification();

    int remainChannel = 2;

    auto closeConnection = [&]() {
        --remainChannel;
        if (remainChannel == 0) {
            // Close the connection when all channels are closed.
            m_tcpConnection.close();
            ev::get_default_loop().break_loop();
        }
    };

    auto closeChannel = [&](AMQP::TcpChannel & channel) {
        channel.close()
            .onSuccess([&](void) { closeConnection(); })
            .onError([&](const char * msg)
            {
                std::cout << "cannot close channel: "
                          << msg << std::endl;
                // Close the connection anyway.
                closeConnection();
            });
    };

    closeChannel(m_channelA);
    closeChannel(m_channelB);
}
I am using activemq-cpp 3.7.0 with VS 2010 to build a client; the server is ActiveMQ 5.8. I have created a message consumer using code similar to the following, based on the CMS configurations mentioned here. ConnClass is an ExceptionListener and a MessageListener. I only want to consume a single message before calling cms::Session::commit().
void ConnClass::setup()
{
    // Create a ConnectionFactory
    std::tr1::shared_ptr<ConnectionFactory> connectionFactory(
        ConnectionFactory::createCMSConnectionFactory(
            "tcp://localhost:61616?cms.PrefetchPolicy.queuePrefetch=1"));

    // Create a Connection
    m_connection = std::tr1::shared_ptr<cms::Connection>(
        connectionFactory->createConnection());
    m_connection->start();
    m_connection->setExceptionListener(this);

    // Create a Session
    m_session = std::tr1::shared_ptr<cms::Session>(
        m_connection->createSession(Session::SESSION_TRANSACTED));

    // Create the destination (Queue)
    m_destination = std::tr1::shared_ptr<cms::Destination>(
        m_session->createQueue("myqueue?consumer.prefetchSize=1"));

    // Create a MessageConsumer from the Session to the Queue
    m_consumer = std::tr1::shared_ptr<cms::MessageConsumer>(
        m_session->createConsumer( m_destination.get() ));
    m_consumer->setMessageListener( this );
}

void ConnClass::onMessage( const Message* message )
{
    // read message code ...
    // schedule a processing event for
    // another thread that calls m_session->commit() when done
}
The problem is that I am receiving multiple messages instead of one before calling m_session->commit() -- I know this because the commit() call is triggered by user input. How can I ensure onMessage() is called only once before each call to commit()?
It doesn't work that way. When using async consumers, the messages are delivered as fast as the onMessage method completes. If you want to consume one and only one message, then use a synchronous receive call.
For an async consumer, the prefetch allows the broker to buffer up work on the client instead of firing one message at a time, so you generally get better performance. In your case, as the async onMessage call completes, an ack is sent back to the broker and the next message is sent to the client.
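A minimal sketch of that synchronous alternative, reusing the m_consumer and m_session members from the question (the method name consumeOne is made up for the example; do not set a MessageListener if you go this route):

void ConnClass::consumeOne()
{
    // receive() blocks until exactly one message is delivered.
    std::tr1::shared_ptr<cms::Message> message(m_consumer->receive());
    if (message.get() != NULL)
    {
        // ... process the single message ...
        m_session->commit(); // commit the transaction for just this delivery
    }
}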
Yes, I found this too. However, when I use the destination URI option ("consumer.prefetchSize=1", http://activemq.apache.org/cms/configuring.html#Configuring-DestinationURIParameters) for the asynchronous consumer, it works well.
BTW, I just use the latest ActiveMQ-CPP v3.9.4 by Tim, and ActiveMQ v5.12.1 on CentOS 7.
Thanks!
I have been using this code to display IMAP4 messages:
void DisplayMessageL( const TMsvId &aId )
{
    // 1. construct the client MTM
    TMsvEntry indexEntry;
    TMsvId serviceId;
    User::LeaveIfError( iMsvSession->GetEntry(aId, serviceId, indexEntry) );
    CBaseMtm* mtm = iClientReg->NewMtmL(indexEntry.iMtm);
    CleanupStack::PushL(mtm);

    // 2. construct the user interface MTM
    CBaseMtmUi* uiMtm = iUiReg->NewMtmUiL(*mtm);
    CleanupStack::PushL(uiMtm);

    // 3. display the message
    uiMtm->BaseMtm().SwitchCurrentEntryL(indexEntry.Id());
    CMsvOperationWait* waiter = CMsvOperationWait::NewLC();
    waiter->Start(); // we use a synchronous waiter
    CMsvOperation* op = uiMtm->OpenL(waiter->iStatus);
    CleanupStack::PushL(op);
    CActiveScheduler::Start();

    // 4. cleanup
    CleanupStack::PopAndDestroy(4); // op, waiter, uiMtm, mtm
}
However, when the user attempts to download a remote message (i.e. one of the emails not previously retrieved from the mail server) and then cancels the request, my code remains blocked and never receives the information that the action was cancelled.
My questions are:
What is the workaround for the above, so the application is not stuck?
Can anyone provide a working example of an asynchronous call for opening remote messages that does not panic and crash the application?
Asynchronous calls for POP3, SMTP and local IMAP4 messages work perfectly, but remote IMAP4 messages create this issue.
I am testing these examples for S60 5th edition.
Thank you all in advance.
First of all, I would try removing CMsvOperationWait and dealing with the open request asynchronously - i.e. have an active object waiting for the CMsvOperation to complete.
CMsvOperationWait is nothing more than a convenience to make an asynchronous operation appear synchronous, and my suspicion is that this is the culprit - in the download-then-show-message case there are two asynchronous operations chained.
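A rough, untested sketch of that active-object approach (the class name COpenMessageWatcher is made up; error handling and observer callbacks are omitted):

class COpenMessageWatcher : public CActive
{
public:
    static COpenMessageWatcher* NewL()
    {
        COpenMessageWatcher* self = new (ELeave) COpenMessageWatcher();
        CActiveScheduler::Add(self);
        return self;
    }

    ~COpenMessageWatcher()
    {
        Cancel();
        delete iOperation;
    }

    void StartL(CBaseMtmUi& aUiMtm)
    {
        // OpenL() completes iStatus asynchronously; no CActiveScheduler::Start()
        // or CMsvOperationWait is needed, so the UI is never blocked.
        iOperation = aUiMtm.OpenL(iStatus);
        SetActive();
    }

private:
    COpenMessageWatcher() : CActive(EPriorityStandard), iOperation(NULL) {}

    void RunL()
    {
        // Called when the open completes, fails, or is cancelled by the user;
        // inspect iStatus.Int() and/or iOperation->FinalProgress() here.
    }

    void DoCancel()
    {
        if (iOperation)
            iOperation->Cancel();
    }

    CMsvOperation* iOperation;
};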