ActiveMQ-CPP connection.start() hangs - C++

I'm using ActiveMQ CPP 5.2.3 if it matters.
I have a JMS producer that connects to a JMS network of brokers using the failover transport.
When I call connection->start() it hangs (see AMQ-2114).
If I skip connection->start() and call connection->createSession(), then that call blocks too.
The requirement is that my application keeps trying forever to connect to the broker(s).
Any suggestions/workarounds?
NOTE:
This is not a duplicate of the question linked here, since I'm talking about C++, and solutions such as an embedded broker or Spring are not available in C++.

This is normal when the connection is waiting for the transport to connect to the broker. The start method must send the client's ID info to the broker before any other operation, so if no connection is present it must block. You can set options on the failover transport, such as startupMaxReconnectAttempts, to control how long it will try to connect before reporting a failure. See the URI configuration page:
http://activemq.apache.org/cms/configuring.html
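For example (a minimal sketch; the broker addresses are placeholders), the options are appended to the failover URI that you pass to the connection factory. Leaving the reconnect attempts unlimited matches your "try forever" requirement but means start() blocks until a broker is reachable; giving startupMaxReconnectAttempts a finite value makes start() fail with an exception instead, so you can retry in your own loop:

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <memory>

int main() {
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        // Placeholder broker addresses; adjust the options to taste.
        std::string uri =
            "failover:(tcp://broker1:61616,tcp://broker2:61616)"
            "?startupMaxReconnectAttempts=10&initialReconnectDelay=1000";

        activemq::core::ActiveMQConnectionFactory factory(uri);
        std::unique_ptr<cms::Connection> connection(factory.createConnection());

        // With a finite startupMaxReconnectAttempts this throws a CMSException
        // instead of blocking forever when no broker is reachable, so you can
        // catch it and retry as often as you like.
        connection->start();

        // ... create sessions, producers, etc. ...

        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}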

Related

implementation of ping/pong for tornado websocket

I have a websocket client in Python implemented using tornado.websocket.WebSocketClientConnection, which connects to a server at the remote end and communicates over a websocket. Earlier I had implemented a ping/pong-like feedback mechanism at the application layer to check whether the remote endpoint is still responsive.
I recently updated my tornado package and came across the ping_interval option in WebSocketClientConnection. I removed the old application-layer ping/pong mechanism and added ping_interval to my implementation.
After this update the websocket gets closed after the mentioned ping_interval timeout. The server at the remote end handles the ping at the transport layer and responds accordingly.
Currently I have not implemented the ping method, so do I have to implement a ping method for WebSocketClientConnection?
Do I have to send any data in the ping method?
Do I have to implement any method to handle the response sent by the remote server for the ping request?
No, it's implemented by default.
You may, but you don't have to.
I assume that by response you meant pong. If you're using ping_interval you don't have to process pongs, but if you're sending pings manually you have to control timeouts yourself, so you have to process pongs by implementing the tornado.websocket.WebSocketClientConnection.on_pong method.

Keep gRPC client in listen mode for message from server

I have a gRPC server written in C++ which is running on a host, say Gabroo:
Gabroo:~/grpc/examples/cpp/stream_server$ ./stream_server
DB parsed, loaded 1 features.
Server listening on 0.0.0.0:50051
The client runs on the same host and exits after receiving the message:
Gabroo:~/grpc/examples/cpp/stream_server$ ./stream_client
DB parsed, loaded 1 features.
-------------- GetFeature --------------
Found feature called PatriotsPath,Mendham,NJ07945,USA at 40.7838, -74.6144
Found no feature at 0, 0
Now, if the server wants to send a message to the client but the client is not listening for any message, is there some configuration so that the client stays in listen mode continuously for stream messages from the server?
If this is not available built in, would an infinite loop that checks for a message every second be a good approach? I personally don't like this approach.
Regards !!!
This can be solved using RPCs of different arity. Most generally, you could define a bidirectional stream between client and server. That way, as long as the stream is open, the client will be listening, ready to receive messages from the server.
If your use case is more specific, and you only need one client RPC per stream, then you could consider using a server-streaming RPC.
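A rough sketch of the client side of a server-streaming RPC in C++ (the service, method, and message names - MyService, Subscribe, ServerMessage - are hypothetical, not part of the stream_server example): the client blocks inside Read(), so no polling loop is needed.

// Assumes a proto definition along the lines of:
//   rpc Subscribe(Request) returns (stream ServerMessage);
#include <memory>
#include <grpcpp/grpcpp.h>
#include "myservice.grpc.pb.h"  // hypothetical generated header

void ListenForever(const std::shared_ptr<grpc::Channel>& channel) {
    std::unique_ptr<MyService::Stub> stub = MyService::NewStub(channel);
    grpc::ClientContext context;
    Request request;

    // Open the stream; every message the server writes is delivered to Read().
    std::unique_ptr<grpc::ClientReader<ServerMessage>> reader(
        stub->Subscribe(&context, request));

    ServerMessage msg;
    while (reader->Read(&msg)) {
        // handle msg ...
    }

    // The stream ended (server shutdown, network error, etc.); inspect the
    // status and decide whether to re-open the stream.
    grpc::Status status = reader->Finish();
}

For a bidirectional stream the idea is the same, except the stub method returns a ClientReaderWriter so the client can also Write() messages on the same stream.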

Apache Thrift: Terminate Connection from the Server

I am using Thrift to provide an interface between a device and a management console. It is possible for there to be up to 4 active connections to the device at one time, and I have this working using a TThreadPoolServer.
The issue arises around client disconnections; if a client disconnects correctly, there is no issue, however if one does not (i.e. the client crashes or doesn't call client->close()) then the server seems to keep that client's thread alive. This means that when the next connection attempt is made, the client hangs, as the server has used up its allocated thread pool and so cannot service the new request.
I haven't been able to find any standard, public mechanism by which the server can stop, and hence free up, a client's thread if that client has not used the interface for a set time period.
Is there a standard way to facilitate this in Thrift?
Setting the receive/send timeout on the server socket might help. The server will close the connection on timeout.
https://github.com/apache/thrift/blob/129f332d72facda5d06f87e2b4e5e08bea0b6b44/lib/cpp/src/thrift/transport/TServerSocket.h#L103
void setSendTimeout(int sendTimeout);
void setRecvTimeout(int recvTimeout);
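A minimal sketch of wiring this up (assuming a Thrift 0.13-style C++ API; MyServiceHandler and MyServiceProcessor stand in for your own generated service classes) - the timeouts are set on the TServerSocket before it is handed to the TThreadPoolServer:

#include <memory>
#include <thrift/concurrency/ThreadManager.h>
#include <thrift/concurrency/ThreadFactory.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TThreadPoolServer.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/transport/TServerSocket.h>

using namespace apache::thrift::concurrency;
using namespace apache::thrift::protocol;
using namespace apache::thrift::server;
using namespace apache::thrift::transport;

int main() {
    auto handler = std::make_shared<MyServiceHandler>();      // hypothetical
    auto processor = std::make_shared<MyServiceProcessor>(handler);

    // Close idle/dead client connections after 30 s of inactivity so their
    // pool threads are released for new clients.
    auto serverSocket = std::make_shared<TServerSocket>(9090);
    serverSocket->setRecvTimeout(30 * 1000);  // milliseconds
    serverSocket->setSendTimeout(30 * 1000);

    auto threadManager = ThreadManager::newSimpleThreadManager(4);  // 4 clients max
    threadManager->threadFactory(std::make_shared<ThreadFactory>());
    threadManager->start();

    TThreadPoolServer server(processor,
                             serverSocket,
                             std::make_shared<TBufferedTransportFactory>(),
                             std::make_shared<TBinaryProtocolFactory>(),
                             threadManager);
    server.serve();
    return 0;
}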

How to address RdKafka::ERR__TIMED_OUT and RdKafka::ERR__MSG_TIMED_OUT in librdkafka?

I am working with the C++ Kafka client librdkafka. Looking at the examples https://github.com/edenhill/librdkafka/blob/master/src-cpp/rdkafkacpp.h and https://github.com/edenhill/librdkafka/blob/master/examples/rdkafka_example.cpp, it seems there is no explicit step for connecting to the broker. How do I reconnect after these connection errors? How do I check the connection status?
librdkafka abstracts all broker connectivity from the application; it will always try to keep a connection to each known broker (learnt either from metadata.broker.list or from the broker list returned by the initial bootstrap brokers).
On connection error librdkafka will attempt to connect again, forever.
If none of the brokers can be connected to, the ALL_BROKERS_DOWN event is triggered, but there is currently no corresponding event for when brokers come back online.
The application doesn't need to worry, though, since librdkafka takes care of all reconnects and message retransmissions in the background, and it will keep trying to get the messages produced until either message.timeout.ms or message.send.max.retries is exceeded.
There's more information on this in the introduction guide:
https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md
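If you want to observe these errors anyway (for logging, or to re-produce messages that were ultimately dropped), you can register callbacks. A minimal sketch with the C++ API (the broker addresses are placeholders):

#include <iostream>
#include <librdkafka/rdkafkacpp.h>

class ExampleEventCb : public RdKafka::EventCb {
 public:
    void event_cb(RdKafka::Event &event) override {
        if (event.type() == RdKafka::Event::EVENT_ERROR) {
            // e.g. RdKafka::ERR__ALL_BROKERS_DOWN or RdKafka::ERR__TRANSPORT:
            // log it, but don't give up - librdkafka keeps retrying on its own.
            std::cerr << "ERROR (" << RdKafka::err2str(event.err()) << "): "
                      << event.str() << std::endl;
        }
    }
};

class ExampleDeliveryReportCb : public RdKafka::DeliveryReportCb {
 public:
    void dr_cb(RdKafka::Message &message) override {
        if (message.err() == RdKafka::ERR__MSG_TIMED_OUT) {
            // message.timeout.ms / message.send.max.retries exhausted:
            // this message was dropped; re-produce it here if you need to.
            std::cerr << "Delivery failed: " << message.errstr() << std::endl;
        }
    }
};

int main() {
    std::string errstr;
    ExampleEventCb event_cb;
    ExampleDeliveryReportCb dr_cb;

    RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    conf->set("metadata.broker.list", "broker1:9092,broker2:9092", errstr);
    conf->set("event_cb", &event_cb, errstr);
    conf->set("dr_cb", &dr_cb, errstr);

    RdKafka::Producer *producer = RdKafka::Producer::create(conf, errstr);
    if (!producer) {
        std::cerr << "Failed to create producer: " << errstr << std::endl;
        return 1;
    }

    // ... produce messages; call producer->poll(0) regularly so the
    // callbacks above actually fire ...

    delete producer;
    delete conf;
    return 0;
}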

Biztalk web service ports and what happens when the port/application is stopped

I have a question around BizTalk and what happens when certain conditions around web service ports are met.
Basically we have two applications - a main application (let's call it 'MainApplication'), containing the orchestration, and a web service application (let's call it 'MainApplicationWS'), where we expose a web service (created from BizTalk's web service tool) to take messages from wherever.
We have a testing tool which replays messages to the MainApplicationWS to simulate messages coming through from various external systems.
I have noticed that if we partial-stop the MainApplicationWS application and send messages through to the web service listed as a receive location, nothing happens (obviously!) (also, the web service is still running, even though it's been delisted as a receive location). However, if I start up the MainApplicationWS again and bounce the host instances, the messages are picked up from somewhere and played through to the orchestration and on through to our application.
I'm just a bit puzzled as to where these messages are stored while the MainApplicationWS is partially stopped. Is the web service somehow hanging on to them, or does it still post through to the BizTalk MessageBox?
Any clarification would be greatly appreciated :)
Cheers,
Adam
In short, I can't repeat your behaviour in BizTalk 2009. The closest to 'queueing' messages is if the orchestration is stopped but remains enlisted, such that messages are suspended resumable.
In more detail - I'm not quite sure what you mean by 'delisted as a receive location'. In BizTalk 2009:
Receive Locations can be enabled or disabled
Orchestrations can be stopped, and unenlisted
A Partial Stop on your BTS application disables receive ports and stops orchestrations (but doesn't unenlist them)
A full stop stops and unenlists orchestrations
The below is the observed behaviour on BizTalk 2009 for a simple orchestration with a WCF Request/Response port, which receives a message, maps it, and sends the response back on the same port.
The port is Direct Bound (MessageBox).
If the Isolated Host App Pool is disabled in IIS
A synchronous error is returned to the client - Standard IIS Error (503 Service Unavailable etc)
BizTalk receives no messages at all
If the BizTalk receive Location is disabled
WSDL: Synchronous error returned to the client - The Messaging Engine failed to register the adapter for "WCF-BasicHttp" for the receive location "xyz.svc". Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases
Service Call : The requested service, xyz.svc could not be activated. See the server's diagnostic trace logs for more information.
If the Orchestration is stopped, but not unenlisted
The received message is Suspended, resumable. The client times out (no response is issued).
If the orch is started and the message resumed, the message is then processed. The client will only get a successful reply if the orch start and the suspended message resume are done before the client's configured WS / WCF timeout.
If the Orchestration is unenlisted
The received message is Suspended, not resumable.
The client receives an error - The server was unable to process the request due to an internal error.
With the WCF CustomBinding it is also possible to listen directly on the relevant BizTalk receive host (i.e. there is no need for IIS at all to listen for BasicHTTP or WSHTTP), although we generally still use the wizard-generated .svc in IIS solely for hosting and publishing the WSDL. We then create a new WCF-Custom receive location directly in BizTalk and point the client to it.
Hope this helps?