hiredis: How to check if more data is available to read - C++

I am trying to write a connection pool using hiredis.
The problem I am facing is: if a user fires a command and doesn't read the response from the connection, I need to clear that response from the connection before returning it to the pool.
Is there any way to check:
whether there is more data to read, so I can call redisGetReply until all data is cleared?
Or is there a way to clear all pending reads on the connection object?

The question is not entirely clear, as it does not state whether you are using sync or async operations.
Since you mention redisGetReply, I assume you are using sync operations. Sync calls are blocking: the response to a command is available in the same call. A scenario where you might want to check whether all data has been read is when the context is shared between threads and you check for pending data before returning the connection to the pool.
Yes, redisGetReply can be used to check if there is more data to read (see the sketch below for the sync case).
For async calls, use redisAsyncHandleRead to check if there is data to be read.
Internally, both redisGetReply and redisAsyncHandleRead call redisBufferRead.
For sync calls, use redisFree to free the context.
For async calls, use redisAsyncFree to free the context.
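For the sync case, a minimal sketch of the draining step might look like the following, assuming the pool tracks how many commands were sent on the context whose replies were never read (the pending counter and the drain_pending_replies helper are hypothetical, not part of hiredis):
    #include <hiredis/hiredis.h>

    // Hypothetical pool helper: read and discard the replies the previous user of
    // this connection never consumed. `pending` is assumed to be tracked by the
    // pool (incremented per command sent, decremented per reply read).
    bool drain_pending_replies(redisContext *ctx, int pending) {
        while (pending > 0) {
            void *reply = nullptr;
            // Blocking read of the next queued reply (sync API).
            if (redisGetReply(ctx, &reply) != REDIS_OK) {
                return false;       // connection is broken: redisFree it instead of pooling it
            }
            freeReplyObject(reply); // discard the stale reply
            --pending;
        }
        return true;                // context is clean and safe to return to the pool
    }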

Related

Corda check session.send/receive completeness

I am currently creating some custom flows, sending some data back and forth through the session. I noticed that in some cases (for example, if a responder flow has a session.receive still unanswered when the initiating flow finishes), no exception is thrown and everything works smoothly, without even a warning in the log. Is there a way to force a check of send/receive completeness?
It would be better if you could provide a log file to demonstrate your use case.
Send and receive are typically one-direction communication: one side sends and the other receives. If you are looking for a confirmed receive, you can try the method sendAndReceive, which
Serializes and queues the given payload object for sending to the counterparty.
Suspends until a response is received, which must be of the given R type.
The receive method itself is a blocking method, so if your flow finishes successfully, it means the receive call successfully received what it was waiting for.
But again, it would be much better if you could share your log and elaborate on your question a bit.

C++ server with recv/send commands & request/response design

I'm trying to create a server with blocking sockets (one new thread for each new client). This thread should be able to receive commands from the client (and send back the result) and periodically send commands to the client (and request back the result).
What I've thought of is creating two threads for each client, one for recv and one for send. However:
it's double the normal thread overhead;
due to the request/response design, the recv I do in the first thread (to wait for the client's commands) can pick up the response I'm waiting for in the second thread (the client's result for my send), and vice versa. Keeping it all properly synced would probably be a nightmare. So now I'm thinking of doing it from a single thread, this way (a rough code sketch follows the steps):
In a loop:
setsockopt(SO_RCVTIMEO, &small_timeout); // set the timeout for the recv (e.g. 1000 ms).
recv(); // check for the client's requests first. If it returns WSAETIMEDOUT, I assume no data is requested and do nothing. If I get a normal request, I handle it.
if (clientbufferToSend != nullptr) send(clientbufferToSend); // now that the client's request has been processed, we check the command list we have to send to the client. If there are commands in the queue, we send them. The SO_SNDTIMEO timeout can be set to a large value so we don't deadlock if the client loses the connection.
setsockopt(SO_RCVTIMEO, &large_timeout); // set the timeout for the recv (as large as SO_SNDTIMEO, just so we don't deadlock).
recv(); // now we wait for the response from the client.
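Roughly in code, the loop would look like this (handleRequest / handleResponse / popQueuedCommand are placeholders for my own logic, and I'm assuming Winsock, where SO_RCVTIMEO takes a DWORD in milliseconds):
    #include <winsock2.h>
    #include <string>
    #pragma comment(lib, "ws2_32.lib")

    // Placeholders for the server's own logic, stubbed out here.
    static void handleRequest(SOCKET, const char *, int) {}
    static void handleResponse(const char *, int) {}
    static bool popQueuedCommand(std::string &) { return false; }

    // One instance of this loop runs per client thread (WSAStartup is done elsewhere).
    void clientLoop(SOCKET s) {
        char buf[4096];
        for (;;) {
            DWORD shortTimeout = 1000; // ~1 s poll for client-initiated requests
            setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                       reinterpret_cast<const char *>(&shortTimeout), sizeof(shortTimeout));
            int n = recv(s, buf, static_cast<int>(sizeof(buf)), 0);
            if (n > 0) {
                handleRequest(s, buf, n);                        // client's own request
            } else if (n == 0 || WSAGetLastError() != WSAETIMEDOUT) {
                break;                                           // connection closed or real error
            }                                                    // WSAETIMEDOUT: nothing to read

            std::string cmd;
            if (popQueuedCommand(cmd)) {                         // server-initiated command queued?
                send(s, cmd.data(), static_cast<int>(cmd.size()), 0);

                DWORD longTimeout = 30000;                       // generous wait for the client's reply
                setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                           reinterpret_cast<const char *>(&longTimeout), sizeof(longTimeout));
                n = recv(s, buf, static_cast<int>(sizeof(buf)), 0);
                if (n > 0)
                    handleResponse(buf, n);
                else if (n == 0 || WSAGetLastError() != WSAETIMEDOUT)
                    break;
            }
        }
        closesocket(s);
    }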
Is this a legitimate way to do what I want? Or are there better alternatives (preferably with blocking sockets and threads)?
P.S. Does recv() with a timeout return WSAETIMEDOUT only if no data is available? Can it return this error if there is data, but recv() wasn't fast enough to read it all, thus returning partial data?
One approach is to create a background thread only for reading from that socket, and write from whatever thread your unsolicited events are raised on.
You'll need the following (a rough sketch follows below):
A critical section or mutex per socket to serialize writes, e.g. when the background thread is sending a response to a client-initiated message and another thread wants to send a message to the same client.
Another synchronization primitive, such as a condition variable, for a client thread to sleep on while waiting for responses.
The background thread which receives messages needs to distinguish client-initiated messages (which need to be responded to by the same background thread) from responses to server-initiated messages. If your network protocol doesn't carry that data, you'll have to change the protocol.
This will work OK if your server-initiated events only happen on a single thread, e.g. they come from some serialized source like a device or OS interface.
If, however, the event source is multithreaded as well and you want good performance, you'll need non-trivial complexity to dispatch responses to the correct server thread, e.g. one condition variable per client thread, maybe some queues, etc.
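Something like the following, roughly; Message, sendBytes, readMessage and handleClientRequest are placeholders (the real wire format just needs to carry a request id and a request/response flag), and error handling and shutdown are left out:
    #include <condition_variable>
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <string>

    // Hypothetical wire message. The protocol must carry enough information to
    // tell responses to server-initiated requests apart from client-initiated requests.
    struct Message {
        std::uint64_t id = 0;     // request id, echoed back in the matching response
        bool isResponse = false;  // true if this answers a server-initiated request
        std::string payload;
    };

    class Connection {
    public:
        // Called from any server thread: send a request, sleep until its response arrives.
        Message request(Message msg) {
            {
                std::lock_guard<std::mutex> writeLock(writeMutex_); // serialize writes on this socket
                sendBytes(msg);
            }
            std::unique_lock<std::mutex> lock(stateMutex_);
            responseReady_.wait(lock, [&] { return responses_.count(msg.id) > 0; });
            Message reply = std::move(responses_[msg.id]);
            responses_.erase(msg.id);
            return reply;
        }

        // The background reader thread: the only place this socket is read from.
        void readerLoop() {
            for (;;) {
                Message in = readMessage();
                if (in.isResponse) {
                    // Response to a server-initiated request: park it and wake the waiter.
                    std::uint64_t id = in.id;
                    std::lock_guard<std::mutex> lock(stateMutex_);
                    responses_[id] = std::move(in);
                    responseReady_.notify_all();
                } else {
                    // Client-initiated request: handle it and reply from this same thread.
                    Message out = handleClientRequest(in);
                    std::lock_guard<std::mutex> writeLock(writeMutex_);
                    sendBytes(out);
                }
            }
        }

    private:
        // Transport and application hooks, supplied elsewhere (placeholders).
        void sendBytes(const Message &msg);
        Message readMessage();
        Message handleClientRequest(const Message &msg);

        std::mutex writeMutex_;                      // per socket, serializes all sends
        std::mutex stateMutex_;                      // guards responses_
        std::condition_variable responseReady_;
        std::map<std::uint64_t, Message> responses_; // completed responses keyed by request id
    };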

gRPC C++ async server: how to differentiate between WritesDone and a broken connection

When developing an async C++ gRPC server, how can I differentiate between the client being done with writing and the connection being broken?
I am streaming data from the client to the server, and once the client is done it will call WritesDone to let the server know it should finish storing the file. With a sync server I can differentiate between the client calling WritesDone and the connection being broken by calling context->IsCancelled(), but in async mode you cannot call IsCancelled until you get the tag specified in AsyncNotifyWhenDone.
In both cases (WritesDone and call done) the Read tag gets returned with ok set to false. However, the AsyncNotifyWhenDone tag, which would allow me to differentiate, arrives after the Read tag.
I will know after I try to call Finish (it will also return false), but I need to know before I call Finish, because my final processing might fail and I can no longer return the error if I have already called Finish.
There's no way to distinguish until the AsyncNotifyWhenDone tag returns. It may come after the Read, in which case you may need to buffer things up. In the sync API you can check IsCancelled() at any time (and you can also do that in the Callback API, which should be available for general use soon).

How to implement long-running gRPC async streaming data updates in a C++ server

I'm creating an async gRPC server in C++. One of the methods streams data from the server to clients - it's used to send data updates to clients. The frequency of the data updates isn't predictable. They could be nearly continuous or as infrequent as once per hour. The model used in the gRPC example with the "CallData" class and the CREATE/PROCESS/FINISH states doesn't seem like it would work very well for that. I've seen an example that shows how to create a 'polling' loop that sleeps for some time and then wakes up to check for new data, but that doesn't seem very efficient.
Is there another way to do this? If I use the "CallData" method can it block in the 'PROCESS' state until there's data (which probably wouldn't be my first choice)? Or better, can I structure my code so I can notify a gRPC handler when data is available?
Any ideas or examples would be appreciated.
In a server-side streaming example, you probably need more states, because you need to track whether there is currently a write already in progress. I would add two states, one called WRITE_PENDING that is used when a write is in progress, and another called WRITABLE that is used when a new message can be sent immediately. When a new message is produced, if you are in state WRITABLE, you can send immediately and go into state WRITE_PENDING, but if you are in state WRITE_PENDING, then the newly produced message needs to go into a queue to be sent after the current write finishes. When a write finishes, if the queue is non-empty, you can grab the next message from the queue and immediately start a write for it; otherwise, you can just go into state WRITABLE and wait for another message to be produced.
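In code, those two extra states might look roughly like this (Update stands for the stream's message type, and the tag plumbing, locking and completion-queue loop are omitted; none of this is taken verbatim from the gRPC example):
    #include <queue>
    #include <grpcpp/grpcpp.h>

    // Sketch of the write state machine for one server-streaming call.
    class UpdateStream {
    public:
        // Called whenever the data source produces a new update.
        void OnNewUpdate(const Update &u) {
            if (state_ == WRITABLE) {
                writer_->Write(u, writeTag_); // no write in flight: start one immediately
                state_ = WRITE_PENDING;
            } else {
                pending_.push(u);             // a write is in flight: queue for later
            }
        }

        // Called when the completion queue returns writeTag_ (previous Write finished).
        void OnWriteDone() {
            if (!pending_.empty()) {
                writer_->Write(pending_.front(), writeTag_); // Write serializes before returning
                pending_.pop();                              // stay in WRITE_PENDING
            } else {
                state_ = WRITABLE;                           // nothing queued: wait for new data
            }
        }

    private:
        enum State { WRITABLE, WRITE_PENDING };
        State state_ = WRITABLE;
        std::queue<Update> pending_;                        // updates produced while a write was in flight
        grpc::ServerAsyncWriter<Update> *writer_ = nullptr; // owned by the surrounding CallData
        void *writeTag_ = nullptr;                          // tag that routes back to OnWriteDone()
    };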
There should be no need to block here, and you probably don't want to do that anyway, because it would tie up a thread that should otherwise be polling the completion queue. If all of your threads wind up blocked that way, you will be blind to new events (such as new calls coming in).
An alternative here would be to use the C++ sync API, which is much easier to use. In that case, you can simply write straight-line blocking code. But the cost is that it creates one thread on the server for each in-progress call, so it may not be feasible, depending on the amount of traffic you're handling.
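For comparison, the sync version of such a handler is just straight-line blocking code; Request, Update and waitForNextUpdate are placeholders here, and the generated service class is omitted:
    #include <grpcpp/grpcpp.h>

    // Shape of the generated sync method override (one server thread is tied up per call).
    grpc::Status StreamUpdates(grpc::ServerContext *ctx, const Request *req,
                               grpc::ServerWriter<Update> *writer) {
        while (!ctx->IsCancelled()) {
            Update u = waitForNextUpdate(); // blocks this call's thread until data is produced
            if (!writer->Write(u)) break;   // Write returns false once the client has gone away
        }
        return grpc::Status::OK;
    }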
I hope this information is helpful!

How to notify client of std::iostream that there is no more data

I have a client that passes an iostream to my API, which writes data to it as it arrives over TCP. I'm assuming that the client can continue to make blocking calls to the stream until the TCP data is complete. How do I signify to the client that there is no more data, preferably without a delimiter in the stream? Have I misunderstood streaming in this context? Is there a better way (e.g. calling back each time a new write occurs on the stream)?