How to set a zmq socket timeout - c++

I've got client and server applications using ZeroMQ with ZMQ_REQ sockets. What I'm experiencing is that when the server component goes down or is unavailable, the client will wait until it becomes available again to send its message. Is there a way to set a max wait time or timeout at the client level?

I would guess you can use zmq_poll(3) with a timeout greater than zero.
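For example, here is a minimal sketch using the libzmq C API (the endpoint and the 2-second timeout are assumptions): send the request, then poll the REQ socket for a reply with a bounded wait instead of blocking forever.

    #include <zmq.h>
    #include <cstdio>

    int main() {
        void *ctx = zmq_ctx_new();
        void *req = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(req, "tcp://localhost:5555");    // assumed endpoint

        zmq_send(req, "ping", 4, 0);

        zmq_pollitem_t items[] = { { req, 0, ZMQ_POLLIN, 0 } };
        int rc = zmq_poll(items, 1, 2000);           // wait at most 2000 ms for a reply
        if (rc > 0 && (items[0].revents & ZMQ_POLLIN)) {
            char buf[256];
            int n = zmq_recv(req, buf, sizeof(buf) - 1, 0);
            if (n >= 0) { buf[n] = '\0'; printf("reply: %s\n", buf); }
        } else {
            // Timed out (or poll was interrupted): a REQ socket that is still
            // waiting for a reply cannot send again, so close and recreate it
            // before retrying.
            printf("no reply within timeout\n");
        }

        zmq_close(req);
        zmq_ctx_destroy(ctx);
        return 0;
    }

If your libzmq version is recent enough, the ZMQ_RCVTIMEO / ZMQ_SNDTIMEO socket options are an alternative that makes zmq_recv()/zmq_send() themselves give up after a timeout.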

Related

How to detect a connection failure in Indy TCP Client

I have made a client and a server using Indy TIdTCPClient and TIdTCPServer in C++Builder 11 Alexandria.
I can start the server and connect the client to it correctly, but if I set the server MaxConnections to a value N and try to connect with an (N+1)-th client, the connection apparently does not fail.
For example: I set MaxConnections=1 in the server, the first client connects to it and the server OnConnect event is raised, while in the client OnStatus event I get two messages:
message 1: Connecting to 10.0.0.16.
message 2: Connected.
I try to connect the second client: the server OnConnect event is NOT raised (and this is what I expect) but in the client OnStatus event I get the same two messages (and this is not what I expect):
message 1: Connecting to 10.0.0.16.
message 2: Connected.
Then, the first client can exchange data with the server, and the second client can't (this seems right).
I don't understand why the second client's connection does not fail explicitly. Am I doing something wrong?
You are not doing anything wrong. This is normal behavior for TIdTCPServer.
There is no cross-platform socket API at the OS level [1] to limit the number of active/accepted connections on a TCP server socket, only to limit the number of pending connections in the server's backlog. That limit is handled by the TIdTCPServer::ListenQueue property, which is 15 by default (but this is more of a suggestion than a hard limit; the underlying socket stack can override it if it wants to).
As such, the TIdTCPServer::MaxConnections property is implemented by simply accepting any client from the backlog that attempts to connect, and then immediately disconnecting that client if the MaxConnections limit is exceeded.
So, if you try to connect more clients to TIdTCPServer than MaxConnections allows, those extra clients will not see any failure in connecting (unless the backlog fills up), but the server will not fire the OnConnect event for them. From the clients' perspective, they really did connect successfully: they were fully accepted by the server's underlying socket stack (the TCP 3-way handshake completed). However, they will not notice the disconnect until they actually try to communicate with the server; at that point they will detect it, usually in the form of an EIdConnClosedGracefully exception (but that is not guaranteed).
[1]: On Windows only, there is a WSAAccept() function which has a callback that can reject pending connections before they leave the backlog queue, but Indy does not make use of this callback at this time.
Different TCP stacks exhibit different behavior. Your description is consistent with a TCP stack that simply ignores SYNs to a socket that has reached the maximum configured limit of pending and/or accepted connections: the SYN packet is simply dropped on the floor and not acknowledged.
The nature of TCP is that it's supposed to handle network drops. The sender does not immediately bail out, but will keep trying to connect, for some period of time. This part is consistent with all TCP implementations.
If you want your client to quickly fail a connection that does not get established within some set period of time, you'll need to implement a manual timeout yourself.
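As a client-side sketch of that (assuming stock Indy 10 property names and a TIdTCPClient named IdTCPClient1 dropped on a form; the host, port, and timeout values are placeholders), you can bound the connect attempt with ConnectTimeout and later reads with ReadTimeout; the accepted-then-dropped case described above still only shows up once data is actually exchanged:

    #include <vcl.h>
    #include <IdTCPClient.hpp>

    void __fastcall TForm1::ConnectWithTimeout()
    {
        IdTCPClient1->Host = "10.0.0.16";
        IdTCPClient1->Port = 6000;             // assumed port
        IdTCPClient1->ConnectTimeout = 3000;   // give up on Connect() after ~3 seconds
        IdTCPClient1->ReadTimeout = 3000;      // bound subsequent reads as well

        try {
            IdTCPClient1->Connect();
            // Even after a successful Connect(), a server that accepted the
            // socket only to drop it (MaxConnections exceeded) is detected the
            // first time data is exchanged, typically as EIdConnClosedGracefully.
        }
        catch (Exception &e) {
            ShowMessage(String("Connect failed: ") + e.Message);
        }
    }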

gRPC timeout on server side

I'm working on bidirectional stream gRPC in C++. I want to set a timeout limit on server side, and kill the connection if it exceeds a threshold.
But the only timeout mechanism I've found is on the client side (https://grpc.io/blog/deadlines/#c), and I can't find any suitable API on ServerContext (https://grpc.github.io/grpc/cpp/classgrpc_1_1_server_context.html). Does someone know how to do that?
gRPC does not support a server-side timeout limit/setting, so you might need to implement your own mechanism and notify the client by aborting the call from the server (e.g. context.abort).
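One workaround in C++ (a hedged sketch, not a built-in gRPC feature; the ChatServiceImpl/Msg names are placeholders for your generated proto types) is to run your own watchdog next to the bidirectional-stream handler and cancel the call via ServerContext::TryCancel() once the threshold is exceeded; the client then sees the RPC fail with CANCELLED:

    #include <grpcpp/grpcpp.h>
    #include <atomic>
    #include <chrono>
    #include <thread>

    // Handler for a bidirectional-streaming RPC (placeholder service/message types).
    grpc::Status ChatServiceImpl::Talk(grpc::ServerContext* ctx,
                                       grpc::ServerReaderWriter<Msg, Msg>* stream) {
        std::atomic<bool> done{false};

        // Watchdog: after 30 seconds, ask gRPC to cancel this call. Pending and
        // subsequent Read()/Write() calls then fail, so the handler can return.
        std::thread watchdog([&] {
            const auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(30);
            while (!done && std::chrono::steady_clock::now() < deadline)
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            if (!done) ctx->TryCancel();
        });

        Msg in;
        while (stream->Read(&in)) {
            Msg out = in;                      // real per-message handling goes here
            if (!stream->Write(out)) break;
        }

        done = true;
        watchdog.join();
        return ctx->IsCancelled() ? grpc::Status::CANCELLED : grpc::Status::OK;
    }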

Detect half-open websockets with PING/PONG

I'm using Jetty 9.2.24 as a WebSocket server. I want to detect half-open connections, so that no more messages are sent over such a connection and they are buffered instead.
I know PING/PONG frames are used for this, so I tried sending PINGs periodically and set a low maxIdleTimeout. I modified my client to NOT return a PONG, to see if Jetty would regard this as a failed connection, since the RFC 6455 spec dictates that the remote endpoint MUST respond with a PONG. Apparently Jetty does not detect missing PONGs, or I am doing something wrong.
What is the best way to proceed? Should I implement the PING/PONG timeout myself by explicitly receiving all PONG messages and detecting a timeout? I would think this would be the responsibility of the underlying WebSocket framework.
Note that Jetty 9.2.x is EOL (End of Life); you should consider upgrading.
Setting Max Idle Timeout and then causing the connection to not be idle by sending ping/pong isn't ideal.
The spec says that when you receive a PING you must send a PONG, and Jetty indeed does that.
It does not say that receiving a PONG, failing to receive a PONG, or receiving an unsolicited PONG carries any particular meaning or mandates any behavior, as you seem to expect.
Jetty 9.4 websocket will only keep a half-open connection open long enough to complete the current message (no matter how many frames it takes) then respond to the CLOSE frame it received (that caused the half-open connection). So half-open is only for the duration of the active message, then CLOSED. If no message is active, then the CLOSE happens immediately.
On Jetty 9.4 you can also add a WebSocketFrameListener and respond accordingly based on the frames received (e.g. make the server end the conversation immediately, either via a CLOSE frame or a harsh disconnect).

Keep gRPC client in listen mode for message from server

I have a gRPC server written in C++ which is running on a host, say Gabroo:
Gabroo:~/grpc/examples/cpp/stream_server$ ./stream_server
DB parsed, loaded 1 features.
Server listening on 0.0.0.0:50051
The client is running on the same server and exits after receiving the message:
Gabroo:~/grpc/examples/cpp/stream_server$ ./stream_client
DB parsed, loaded 1 features.
-------------- GetFeature --------------
Found feature called PatriotsPath,Mendham,NJ07945,USA at 40.7838, -74.6144
Found no feature at 0, 0
Now, if the server wants to send a message to the client but the client is not listening for any message, is there some configuration needed so that the client stays in listen mode continuously for stream messages from the server?
If this is not available built in, would an infinite loop that checks for a message every 1 second be a good approach? I personally don't like this approach.
Regards !!!
This can be solved using RPCs of different arity. Most generally, you could define a bidirectional stream between client and server. That way, while the stream is open, the client will be listening, ready to receive messages from the server.
If your use case is more specific and you only need one client request per stream, then you could consider using a server-streaming RPC.
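A rough sketch of the server-streaming case in C++ (Notifier, Subscribe, SubscribeRequest, and Note are placeholders for whatever your .proto generates): the client stays blocked in Read() and handles each message as the server writes it, so no polling loop is needed.

    #include <grpcpp/grpcpp.h>
    #include <iostream>
    #include <memory>

    void Listen(const std::shared_ptr<grpc::Channel>& channel) {
        auto stub = Notifier::NewStub(channel);      // generated stub (assumed)

        grpc::ClientContext ctx;
        SubscribeRequest req;                        // assumed request message
        auto reader = stub->Subscribe(&ctx, req);    // assumed server-streaming RPC

        Note note;
        while (reader->Read(&note)) {                // blocks until the server sends something
            std::cout << "server says: " << note.text() << std::endl;   // assumed field
        }

        grpc::Status status = reader->Finish();
        if (!status.ok())
            std::cout << "stream ended: " << status.error_message() << std::endl;
    }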

Redis PUBSUB connection issue after idle period

I am using nelikelov/redisclient version 0.5.0 and my code is the same as the PUBSUB example provided with the library. My application subscribes to a channel and receives messages.
What I am facing is that every Monday the application is unable to receive messages from Redis.
Is there any timeout that I should handle in case the connection remains idle during the weekend? Shall I configure something extra in my application or in Redis to bypass this?
I'm not familiar with the client you're using, but Redis itself doesn't close idle connections (PubSub or not) by default and keeps them alive. You can verify that your Redis server is configured to maintain idle connections and keep them alive by examining the values of the timeout and tcp-keepalive directives (0 and 300 by default, respectively).
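If you want to verify those settings on your server (a quick check; the values in the comments are just the defaults mentioned above):

    redis-cli CONFIG GET timeout          # 0 means idle client connections are never closed
    redis-cli CONFIG GET tcp-keepalive    # 300 means TCP keepalives are sent every 300 seconds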
Other than the above and given the periodical aspect of the disconnects, I'd investigate the network settings of the client application server.