gRPC C++ async server: how to differentiate between WritesDone and a broken connection

When developing an async C++ gRPC server, how can I differentiate between the client being done with writing and the connection being broken?
I am streaming data from the client to the server, and once the client is done it calls WritesDone to let the server know it should finish storing the file. With a sync server I can differentiate between the client calling WritesDone and the connection being broken by calling context->IsCancelled(), but in async mode you cannot call IsCancelled until you get the tag specified in AsyncNotifyWhenDone.
In both cases (WritesDone and call done) the Read tag comes back with ok set to false. However, the AsyncNotifyWhenDone tag, which would allow me to differentiate, arrives after the Read tag.
I will know after I call Finish (its tag will also come back with ok set to false), but I need to know before calling Finish, because my final processing might fail and I can no longer return that error once Finish has already been called.

There's no way to distinguish until the AsyncNotifyWhenDone tag returns. It may come after the Read, in which case you may need to buffer things up. In the sync API you can check IsCancelled() at any time (and you can also do that in the callback API, which should be available for general use soon).
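For what it's worth, here is a rough sketch of that buffering, assuming a hypothetical client-streaming RPC Upload(stream Chunk) returns (UploadStatus); FileService, Chunk, UploadStatus, StoreChunk and FinalizeUpload are placeholders, not real gRPC or generated names. The idea is to record the failed Read and only act on it once the AsyncNotifyWhenDone tag has also arrived, at which point IsCancelled() gives the answer:

```cpp
#include <grpcpp/grpcpp.h>
// A generated header would provide the FileService, Chunk and UploadStatus
// types assumed below; they are placeholders for your own proto types.

class UploadCall {
 public:
  UploadCall(FileService::AsyncService* service, grpc::ServerCompletionQueue* cq)
      : service_(service), cq_(cq), reader_(&ctx_) {
    // Ask for the "done" notification before the call starts; its tag is the
    // only reliable way to learn whether the call was cancelled.
    ctx_.AsyncNotifyWhenDone(&done_tag_);
    service_->RequestUpload(&ctx_, &reader_, cq_, cq_, &start_tag_);
  }

  // Completion of start_tag_: a client has started this RPC, ask for a chunk.
  void OnStarted(bool ok) {
    if (ok) reader_.Read(&chunk_, &read_tag_);
  }

  // Completion of read_tag_.
  void OnRead(bool ok) {
    if (ok) {
      StoreChunk(chunk_);                  // placeholder for your own logic
      reader_.Read(&chunk_, &read_tag_);   // ask for the next chunk
      return;
    }
    // ok == false: either the client called WritesDone or the call is broken.
    // We cannot tell yet, so just note it and wait for the done tag.
    reads_finished_ = true;
    MaybeFinish();
  }

  // Completion of done_tag_ (AsyncNotifyWhenDone); may arrive after the Read.
  void OnDone(bool /*ok*/) {
    done_received_ = true;
    MaybeFinish();
  }

 private:
  void MaybeFinish() {
    if (!reads_finished_ || !done_received_) return;  // buffer until both seen
    if (ctx_.IsCancelled()) {
      // Broken connection: just clean up this call object, no Finish needed.
      return;
    }
    // Genuine WritesDone: run the final processing, then report its outcome.
    grpc::Status status = FinalizeUpload();           // may legitimately fail
    reader_.Finish(UploadStatus(), status, &finish_tag_);
  }

  void StoreChunk(const Chunk& chunk);
  grpc::Status FinalizeUpload();

  FileService::AsyncService* service_;
  grpc::ServerCompletionQueue* cq_;
  grpc::ServerContext ctx_;
  grpc::ServerAsyncReader<UploadStatus, Chunk> reader_;
  Chunk chunk_;
  bool reads_finished_ = false;
  bool done_received_ = false;
  // Distinct addresses used as completion-queue tags; the cq loop maps each
  // tag back to this object and calls the matching On* method.
  int start_tag_ = 0, read_tag_ = 0, done_tag_ = 0, finish_tag_ = 0;
};
```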

Related

Hard abort in Flask to close or ignore the client connection without sending a response

I have some application-level security measures, and I'd like to simply kill the client connection when the current request looks suspicious rather than return a proper response. The goal is an ambiguous response that avoids an outright acknowledgement that the client has found a web server. Ideally I could call a function right when the incoming connection is accepted and the header bytes are first read. I've tried closing the request stream from a before_request function and closing the response stream from an after_request function, but the former has no effect and the latter only closes the socket after the status and headers have already been written.
I did a fair number of searches into the lifecycles of both Flask and Werkzeug but didn't turn up anything; it seems no one has asked the connection-abort question before.
It seems like I should be able to catch where Flask calls the start_response callback and either replace it or intercept it and return my own no-op write function, so that the client connection is effectively never acted on, but this needs more research. Before I ran out of time to look, I couldn't find anywhere in Flask or Werkzeug that actually calls start_response, or anything that might refer to it by another name.
Reference: https://github.com/pallets/werkzeug/blob/c7ae2fea4fb229ffd71187c2b665874c91b96277/src/werkzeug/serving.py#L250

How to implement HTTP streaming using libcurl

I am trying to use libcurl to implement Microsoft's EWS Streaming Notifications, i.e. HTTP streaming where the request is sent once and the server responds with a "Transfer-Encoding: chunked" header. The server will send multiple keepalive or notification chunks before the final packet. The chunks are terminated with CRLF.
If I create a standard curl client, then curl_easy_perform will not return until the final chunk is received, whereas I need curl_easy_perform to return upon receipt of each chunk, whereupon the application will process the received chunk and call curl_easy_perform again to wait for the next one.
I realize that I could process the chunk in the CURLOPT_WRITEFUNCTION callback, but the architecture of the application doesn't allow for that (this is a gSOAP plugin).
Any suggestions other than switching to CURLOPT_CONNECT_ONLY and handling the write and all subsequent reads with curl_easy_send and curl_easy_recv? That seems a shame, as I would have to duplicate libcurl's formatting and parsing.
Alan
curl_easy_perform is completely synchronous and will return only once the entire transfer is done. There's really no way around that with this API (you already mentioned CURLOPT_CONNECT_ONLY, and I wouldn't recommend that either).
If you want control back in the same thread before the entire transfer is done, which your question suggests, you probably want the multi interface instead.
Using that interface, curl_multi_perform will only do as much as it can right now without blocking and then return control back to your function. It does, however, put the responsibility on your code to wait for socket activity and call libcurl again when there is some.
(Sorry, but I don't know what restrictions a "GSOAP plugin" has and you didn't state them here, so maybe this is all crap)
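For reference, here is a rough sketch of that multi-interface loop; the URL, the one-second timeout and the process_chunk() handler are made up, and the loop simply splits the buffered body on the CRLF delimiter you described:

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>

// The write callback only appends to a buffer; the loop below decides when a
// complete CRLF-terminated chunk has arrived and processes it.
static size_t append_to_buffer(char* data, size_t size, size_t nmemb, void* userp) {
  static_cast<std::string*>(userp)->append(data, size * nmemb);
  return size * nmemb;
}

static void process_chunk(const std::string& chunk) {
  std::cout << "got chunk: " << chunk << "\n";  // placeholder for real handling
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* easy = curl_easy_init();
  std::string buffer;
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/streaming-endpoint");
  curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, append_to_buffer);
  curl_easy_setopt(easy, CURLOPT_WRITEDATA, &buffer);

  CURLM* multi = curl_multi_init();
  curl_multi_add_handle(multi, easy);

  int still_running = 1;
  while (still_running) {
    // Drive the transfer as far as possible without blocking, then return
    // control to this loop so the application can inspect the buffer.
    curl_multi_perform(multi, &still_running);

    std::string::size_type pos;
    while ((pos = buffer.find("\r\n")) != std::string::npos) {
      process_chunk(buffer.substr(0, pos));
      buffer.erase(0, pos + 2);
    }

    // Wait (up to 1 second) for socket activity before calling libcurl again.
    int numfds = 0;
    curl_multi_wait(multi, nullptr, 0, 1000, &numfds);
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```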

Boost asio - synchronous write / read - how to do it?

First, I want to say that I'm new to Boost.Asio; I've seen a lot of examples, but there are still things I don't understand.
I want to create a server that will accept two clients (it will use two sockets). The first client will send messages to the server and the server will send these messages to the other client (yes, it is pointless to use a server for this, but that's not the point here; I want to understand how all of this works). This will continue until one of the clients closes.
So I created the server, the server waits for the clients, and then it must wait for the first client to send some message. And this is my question: what must I do after that?
I thought I needed to read from the first socket and then write to the second, and so on, but how do I know whether the first client has written to the socket? Likewise, how do I know whether the second client has read from the second socket?
I don't need code, I just want to know the right way to do this.
Thanks a lot for reading!
When you perform async_read you specify a callback which will be called whenever data has been read into the buffer (you also provide the buffer; check async_read's documentation). Likewise, you provide a callback for async_write so you know when your data has actually been sent. So, from the server's perspective, for the client which 'writes' you should do an async_read, and for the second client which 'reads' you should do an async_write. With the proposed dataflow client1 -> server -> client2 it is hard to know which client the server should read from and which one it should write to; that's up to you. You can, for example, treat the first connected client as the writer and the second as the reader.
You might want to start with asio iostreams. It's a high-level iostream-like abstraction above asynchronous sockets.
P.S.: also, don't forget to run the io_service.run() loop somewhere, because all the asio callbacks are executed within that loop.
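To make that concrete, here is a rough sketch of such a relay, assuming both clients are already accepted on port 5555 (the port, buffer size and class name are made up). Each read from the first client triggers a write to the second, and each completed write triggers the next read; io_context is simply the newer name for io_service:

```cpp
#include <boost/asio.hpp>
#include <array>
#include <memory>

using boost::asio::ip::tcp;

class Relay : public std::enable_shared_from_this<Relay> {
 public:
  Relay(tcp::socket writer, tcp::socket reader)
      : writer_(std::move(writer)), reader_(std::move(reader)) {}

  void start() { do_read(); }

 private:
  void do_read() {
    auto self = shared_from_this();
    // This handler fires whenever client1 has written something to its socket.
    writer_.async_read_some(
        boost::asio::buffer(data_),
        [this, self](boost::system::error_code ec, std::size_t n) {
          if (ec) return;  // client1 closed or errored: stop relaying
          do_write(n);
        });
  }

  void do_write(std::size_t n) {
    auto self = shared_from_this();
    // Forward exactly the bytes we just read to client2, then read again.
    boost::asio::async_write(
        reader_, boost::asio::buffer(data_, n),
        [this, self](boost::system::error_code ec, std::size_t) {
          if (ec) return;  // client2 closed or errored: stop relaying
          do_read();
        });
  }

  tcp::socket writer_;  // the client that sends messages
  tcp::socket reader_;  // the client that receives them
  std::array<char, 1024> data_;
};

int main() {
  boost::asio::io_context io;
  tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));
  tcp::socket sock1(io), sock2(io);
  acceptor.accept(sock1);  // first client: the writer
  acceptor.accept(sock2);  // second client: the reader
  std::make_shared<Relay>(std::move(sock1), std::move(sock2))->start();
  io.run();                // all asio callbacks are executed inside this loop
}
```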

hiredis: how to check if more data is available to read

I am trying to write a connection pool using hiredis.
The problem I am facing is: if the user fires a command and doesn't read the response from the connection, I should clear that pending response from the connection before putting it back into the pool.
Is there any way to check whether there is more data to read, so that I can call redisGetReply until everything has been cleared? Or is there a way to discard all pending reads on the connection object?
The question is not entirely clear, as it doesn't state whether you are using sync or async operations.
Since you mention redisGetReply, I'll assume sync operations. Sync calls are blocking calls: the response to a command is available within the same call. A scenario where you might want to check whether all data has been read is when the context is shared between threads and you check for pending data before returning the connection to the pool.
Yes, redisGetReply can be used to check whether there is more data to read.
For async calls, use redisAsyncHandleRead to check whether there is data to be read.
Internally, both redisGetReply and redisAsyncHandleRead call redisBufferRead.
For sync calls, use redisFree to free the context.
For async calls, use redisAsyncFree to free the context.
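As a rough sketch (the PooledConnection wrapper and its pending counter are assumptions of mine; hiredis itself does not track how many replies you have left unread), the pool can drain a sync connection like this before reusing it:

```cpp
#include <hiredis/hiredis.h>

struct PooledConnection {
  redisContext* ctx;
  int pending;  // commands sent whose replies the user never read (tracked by the pool)
};

// Returns true if the connection is clean and safe to reuse,
// false if it should be closed with redisFree instead.
bool drain_connection(PooledConnection* conn) {
  while (conn->pending > 0) {
    void* reply = nullptr;
    // redisGetReply blocks until the next pending reply is available,
    // reading from the socket if the local buffer is empty.
    if (redisGetReply(conn->ctx, &reply) != REDIS_OK) {
      return false;  // protocol or I/O error: drop this connection
    }
    freeReplyObject(reply);
    conn->pending--;
  }
  return true;
}
```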

Boost ASIO Network Timing Issue

I am using boost::asio for my network programming and I am running into timing issues. The issue currently shows up mostly on the client.
The protocol begins with the server returning a date-time string, which the client reads. Up to that point it works fine. But what I also want is to be able to write commands to the server, which then processes them. To accomplish this I use the io_service.post() function as shown below.
io_service.post(boost::bind(...)); // the bound function calls the async_write() method
For some reason the write is attempted before the initial client/server communication, when the socket has not been created yet, and I get a bad socket descriptor error.
Now the io_service's run method is indeed called in another thread.
When I place a sleep(2) call before the post call, it works fine.
Is there a way to synchronize this, so that the socket is created before any posted calls are executed?
When creating the socket and establishing the connection with boost::asio, you can register a handler to be called when these operations have either completed or failed. So you should trigger your "posted call" from the success callback.
Relevant methods and classes are:
boost::asio::ip::tcp::resolver::async_resolve(...)
boost::asio::ip::tcp::socket::async_connect(...)
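A rough sketch of that, with a made-up host, port and command, and with run() called on the same thread just to keep the example short: the write is only queued from inside the async_connect success handler, so the socket is guaranteed to exist by then.

```cpp
#include <boost/asio.hpp>
#include <iostream>
#include <string>

using boost::asio::ip::tcp;

int main() {
  boost::asio::io_context io;  // io_service in older Boost versions
  tcp::resolver resolver(io);
  tcp::socket socket(io);
  std::string command = "hello\n";

  resolver.async_resolve(
      "example.com", "5555",
      [&](boost::system::error_code ec, tcp::resolver::results_type endpoints) {
        if (ec) { std::cerr << "resolve: " << ec.message() << "\n"; return; }
        boost::asio::async_connect(
            socket, endpoints,
            [&](boost::system::error_code ec, const tcp::endpoint&) {
              if (ec) { std::cerr << "connect: " << ec.message() << "\n"; return; }
              // The socket is connected now, so it is safe to queue writes.
              boost::asio::async_write(
                  socket, boost::asio::buffer(command),
                  [](boost::system::error_code ec, std::size_t) {
                    if (ec) std::cerr << "write: " << ec.message() << "\n";
                  });
            });
      });

  io.run();  // in the question this loop runs in another thread
  return 0;
}
```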
I think the link below will give you some help:
http://www.boost.org/doc/libs/1_42_0/doc/html/boost_asio/reference/io_service.html