Official gRPC documentation for client streaming states that:
The server sends back a single response, typically but not necessarily after it has received all the client’s requests...
What I'm trying to do is catch the server's response in the middle of the stream so I can stop sending more data.
In Go I can spin up a new goroutine that listens for the server's message using RecvMsg, but I can't find a way to do the same in C++. It looks like ClientWriter doesn't offer this kind of functionality.
One solution would be to use a bidirectional stream, but I was wondering whether there is any other way to achieve this in C++.
Once the response and status have been sent by the server and received back at the client (i.e., by the client-side gRPC stack), subsequent attempts to Write() will start failing. The first failed Write() is the signal to the client that it should stop writing and finish the RPC.
So the two options here are:
1. Wait for a Write() to fail, then call Finish() to receive the server's response and status (see the sketch after this list).
2. Switch to bidirectional streaming if the client really wants to read the response from the server before calling Finish().
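A minimal sketch of option 1, where `MyService`, `DataChunk` and `UploadSummary` are hypothetical generated types standing in for your own proto definitions: the write loop stops at the first failed Write(), and Finish() then retrieves the server's single response and status.

```cpp
#include <grpcpp/grpcpp.h>
#include <memory>
#include <vector>

void UploadAll(MyService::Stub& stub, const std::vector<DataChunk>& chunks) {
    grpc::ClientContext ctx;
    UploadSummary summary;  // filled in by Finish()
    std::unique_ptr<grpc::ClientWriter<DataChunk>> writer(
        stub.Upload(&ctx, &summary));

    for (const DataChunk& chunk : chunks) {
        if (!writer->Write(chunk)) {
            // The server has already sent its response and status;
            // further writes would keep failing, so stop sending.
            break;
        }
    }
    writer->WritesDone();
    grpc::Status status = writer->Finish();  // retrieves summary + status
    if (status.ok()) {
        // `summary` now holds the server's single response.
    }
}
```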
I'm looking into using the Boost.Beast websocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary; I don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is that this Session object will only write out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket: incoming would allow a client to drop configurations for the server, and outgoing would just monitor a message queue and do an async write if a message is available.
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary; a websocket stream is full-duplex, so you can read and write at the same time.
outgoing would just monitor a message queue and do an async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
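A minimal sketch of that approach, assuming Boost.Beast and a single-threaded io_context (with several I/O threads you would wrap the handlers in a strand): the Session reads continuously and exposes a send() method the server can call at any time; a queue ensures only one async_write is outstanding at a time.

```cpp
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <deque>
#include <memory>
#include <string>

namespace net       = boost::asio;
namespace beast     = boost::beast;
namespace websocket = beast::websocket;
using tcp = net::ip::tcp;

class Session : public std::enable_shared_from_this<Session> {
    websocket::stream<tcp::socket> ws_;
    beast::flat_buffer buffer_;
    std::deque<std::shared_ptr<const std::string>> queue_;

public:
    explicit Session(tcp::socket socket) : ws_(std::move(socket)) {}

    void run() {
        ws_.async_accept([self = shared_from_this()](beast::error_code ec) {
            if (!ec) self->do_read();
        });
    }

    // Callable from anywhere (e.g. a broadcast loop over all sessions):
    // hop onto the socket's executor before touching the queue.
    void send(std::shared_ptr<const std::string> msg) {
        net::post(ws_.get_executor(), [self = shared_from_this(), msg] {
            self->queue_.push_back(msg);
            if (self->queue_.size() == 1)   // no write currently in flight
                self->do_write();
        });
    }

private:
    void do_read() {
        ws_.async_read(buffer_,
            [self = shared_from_this()](beast::error_code ec, std::size_t) {
                if (ec) return;             // closed or failed
                // ... handle the incoming message in self->buffer_ ...
                self->buffer_.consume(self->buffer_.size());
                self->do_read();            // keep reading: full-duplex
            });
    }

    void do_write() {
        ws_.async_write(net::buffer(*queue_.front()),
            [self = shared_from_this()](beast::error_code ec, std::size_t) {
                if (ec) return;
                self->queue_.pop_front();
                if (!self->queue_.empty())  // drain anything queued meanwhile
                    self->do_write();
            });
    }
};
```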
I want to implement long polling in a web service. I can set a sufficiently long timeout on the client. Can I give a hint to intermediate networking components to keep the response open? By that I mean NATs, virus scanners, reverse proxies, or surrounding SSH tunnels that may sit between the client and the server and that are not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, not that the server has disconnected?
If so, how? I have been searching for around four hours now, but I can't find any information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond with 102 Processing instead of 200 OK, and everything is fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the headers? Do they make it into the transferred file, and might they break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it hasn't missed any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
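A minimal sketch of the server side of this design. `Request` is a hypothetical handle from whatever HTTP layer is used; respond() and park() are placeholders for "send one complete response" and "hold the request until the next event".

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <string>
#include <utility>
#include <vector>

struct Event { std::uint64_t serial; std::string payload; };

class EventBuffer {
    static constexpr std::size_t kMaxBuffered = 1000;  // arbitrary cap
    std::deque<Event> events_;                         // oldest first
public:
    void push(Event e) {
        events_.push_back(std::move(e));
        if (events_.size() > kMaxBuffered) events_.pop_front();
    }

    // All buffered events newer than the client's last-seen serial.
    std::vector<Event> since(std::uint64_t last_seen) const {
        std::vector<Event> out;
        for (const Event& e : events_)
            if (e.serial > last_seen) out.push_back(e);
        return out;
    }
};

template <class Request>
void handle_poll(EventBuffer& buf, std::uint64_t last_seen, Request& req) {
    auto pending = buf.since(last_seen);
    if (!pending.empty())
        req.respond(pending);  // client was behind: answer immediately
    else
        req.park();            // up to date: complete on the next event
}
```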
I have a C++ server where clients connect to the server socket and fetch search results. I am using the Boost library for my socket programming.
Each client that connects gets 5 search results in all. These search results are expensive for the server to compute, and the computation is done iteratively. Now, what happens many times is that clients disconnect after they have received 2 or 3 of the search results. I want to stop the search-processing thread as soon as the client that made the request disconnects. What is the best API call to confirm that? I am willing to write my own wrapper on top of Boost if this is even possible.
I am using HTTP only.
The only way you can detect a TCP disconnect is by doing I/O on the connection. After some sends to a peer which has disconnected you will get ECONNRESET. This won't happen on the first send, due to TCP buffering.
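One common way to arrange that I/O with Boost.Asio (a sketch, not the only option) is to keep a read pending on the socket while the search runs. With plain HTTP the client isn't expected to send anything after its request, so the pending read only completes when the connection dies (or if the client unexpectedly sends more data, e.g. a pipelined request, in which case you would adjust the handler).

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <memory>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

// Requires the io_service to be running on some thread.
void watch_for_disconnect(tcp::socket& socket,
                          std::atomic<bool>& disconnected) {
    auto buf = std::make_shared<std::array<char, 1>>();
    socket.async_read_some(boost::asio::buffer(*buf),
        [buf, &disconnected](const boost::system::error_code& ec,
                             std::size_t) {
            if (ec) disconnected = true;  // eof, connection_reset, or cancelled
        });
}

// In the search thread, between the expensive iterations (hypothetical):
//
//   for (int i = 0; i < 5 && !disconnected; ++i)
//       compute_and_send_result(i);
```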
I'm having trouble sending more than one request to my server.
I'm using the Boost.Asio async_client example.
The problem is that I always get: error asio.misc 2 (EOF reached, I think).
I don't know whether the right way to do this is to have a pool of threads, or whether I can reuse the same io_service, ...
I can't find good answers on how to do this on the web.
I only try to send another request after I have reached EOF on the first one.
The client class in the example wraps the whole process:
The name resolution process
The connection establishment
The sending of the request
The handling of the response
Once you have reached EOF while reading the response, the connection is closed by the server (because of the Connection: close HTTP header the example sends). Therefore, you have to restart part of the process: first re-establish a connection to the remote server, then send your request and read the response. It's probably not useful to redo the name resolution.
If you really want to go the simple way, then creating a new client would probably work.
You don't need a pool of threads, and you can certainly re-use your io_service object.
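A minimal sketch of that, assuming the `client` class from the Boost.Asio async HTTP client example (the header name is hypothetical): run the io_service to completion for the first request, reset() it, then construct a new client for the next one.

```cpp
#include <boost/asio.hpp>
#include "async_client.hpp"  // hypothetical: holds the example's client class

int main() {
    boost::asio::io_service io_service;

    // First request: resolve, connect, send, read until EOF.
    client c1(io_service, "www.example.com", "/index.html");
    io_service.run();    // returns once c1 has reached EOF

    io_service.reset();  // required before calling run() a second time

    // Second request: a new client re-establishes the connection.
    client c2(io_service, "www.example.com", "/about.html");
    io_service.run();
}
```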
I am writing a client-server application using sockets in C++.
The protocol for communications is essentially:
The client connects to the server.
The client "sends" an ASCII command to the server.
The server executes the command remotely, and gets the results, and sends the results back to the client.
The results can be multiple megabytes of data. Once all the results are sent to the client, I would like the server to signal the client that it's done.
Is the best way to call closesocket(), or should the server send a message indicating to the client that there are no more results, letting the client decide whether to close the socket? The drawback of closing the socket is that the client will need to establish a new connection if it wants to execute another command; the drawback of sending a message back from the server is that the client needs to scan every recv() to determine whether the results are done.
Which is the best practice?
I would take a slightly lateral approach:
The client sends a command to the server.
The server sends the size of the response, then the response itself.
The client can issue a new command or close the connection.
In this way the client knows how much to read and can decide whether to close the connection or not.
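A minimal sketch of that framing, assuming Boost.Asio with synchronous calls and a 64-bit big-endian length prefix; the function names are illustrative.

```cpp
#include <cstdint>
#include <string>
#include <boost/asio.hpp>
#include <boost/endian/conversion.hpp>

using boost::asio::ip::tcp;

// Server side: send the size first, then the payload.
void send_response(tcp::socket& socket, const std::string& results) {
    std::uint64_t size = boost::endian::native_to_big(
        static_cast<std::uint64_t>(results.size()));
    boost::asio::write(socket, boost::asio::buffer(&size, sizeof(size)));
    boost::asio::write(socket, boost::asio::buffer(results));
}

// Client side: read exactly the size header, then exactly that many bytes.
std::string read_response(tcp::socket& socket) {
    std::uint64_t size = 0;
    boost::asio::read(socket, boost::asio::buffer(&size, sizeof(size)));
    size = boost::endian::big_to_native(size);

    std::string results(size, '\0');
    boost::asio::read(socket, boost::asio::buffer(&results[0], size));
    return results;  // complete: no need to scan every recv for a terminator
}
```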