I'm having trouble sending more than one request to my server.
I'm using the Boost.Asio async_client example.
The problem is that I always get: error asio.misc 2 (EOF reached, I think).
I don't know whether the right way to do this is to have a pool of threads, or whether I can reuse the same io_service, ...
I can't find good answers on how to do this on the web.
I only try to send another request after I have reached EOF on the first one.
The client class in the example wraps the whole process:
The name resolution process
The connection establishment
The sending of the request
The handling of the response
Once you reach EOF when reading the response, the connection is closed by the server (because of the HTTP headers). Therefore, you have to restart part of the process: first re-establish a connection to the remote server, then send your request and read the response. It's probably not useful to redo name resolution.
If you really want to go the simple way, then creating a new client would probably work.
You don't need a pool of threads, and you can certainly re-use your io_service object.
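To make that concrete, here is a minimal sketch of the "new client per request" approach, assuming the `client` class from the Boost.Asio async HTTP client example; the host and paths below are just placeholders:

```cpp
// Minimal sketch, assuming the `client` class from the Boost.Asio async
// HTTP client example (resolve -> connect -> send -> read to EOF) is
// defined above. Host and paths are placeholders.
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    try
    {
        boost::asio::io_service io_service;

        {
            client first(io_service, "www.example.com", "/index.html");
            io_service.run();   // returns once the first response hits EOF
        }

        io_service.reset();     // must be called before run() can be used again

        {
            client second(io_service, "www.example.com", "/other.html");
            io_service.run();   // the second client reconnects from scratch
        }
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}
```

The important part is that each request gets a fresh connection (a fresh `client`), while the io_service itself is reused; only `reset()` is needed between the two `run()` calls.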
Related
Official gRPC documentation for client streaming states that:
The server sends back a single response, typically but not necessarily after it has received all the client’s requests...
What I'm trying to do is to catch server response in the middle of the stream to stop sending more data.
In Go I can spin up a new goroutine listening for the message from the server using RecvMsg, but I can't find a way to do the same in C++. It looks like ClientWriter doesn't offer this kind of functionality.
One solution would be to have a bidirectional stream but was wondering if there is any other way to achieve this in C++.
Once the response and status are sent by the server and received back at the client (i.e., the client-side gRPC stack), subsequent attempts to Write() will start failing. The first failing Write() is the signal to the client that it should stop writing and Finish() the RPC.
So the two options here are:
1. Wait for a Write() to fail, then call Finish() to receive the server's response and status (sketched below).
2. Switch to Bidirectional Streaming if the client really wants to read the response from the server before calling Finish.
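For option 1, a rough sketch of the synchronous client side could look like this (the service, stub method and message types are made-up placeholders, not anything from the question):

```cpp
// Rough sketch of option 1: keep writing until Write() fails, then Finish().
// Uploader, Chunk and UploadSummary are hypothetical generated types.
#include <grpcpp/grpcpp.h>
#include <memory>
#include <vector>

void UploadChunks(Uploader::Stub& stub, const std::vector<Chunk>& chunks)
{
    grpc::ClientContext context;
    UploadSummary response;

    std::unique_ptr<grpc::ClientWriter<Chunk>> writer(
        stub.Upload(&context, &response));

    for (const Chunk& chunk : chunks)
    {
        if (!writer->Write(chunk))
        {
            // The server has already sent its response and status,
            // so stop sending and fall through to Finish().
            break;
        }
    }

    writer->WritesDone();                    // half-close the client side
    grpc::Status status = writer->Finish();  // picks up the response + status

    if (status.ok())
    {
        // `response` is populated here, whether or not the server cut us short.
    }
}
```

Note that even when a Write() fails, the response and status only become visible to the application after Finish() returns.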
I want to implement long polling in a web service. I can set a sufficiently long time-out on the client. Can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies or surrounding SSH tunnels that may be in between of the client and the server and I have not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, and not idle because the server has disconnected?
If so, how? I have been searching for around four hours now, but I can't find information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond with 102 Processing instead of 200 OK, and then everything is fine?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the headers? Do they make it into the transferred file, and might they break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it hasn't missed any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it opens a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
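If it helps, here is a rough sketch of the server-side bookkeeping for that design (framework-agnostic; how the request is actually parked and how the response is written depend on your Boost-based server and are left out):

```cpp
// Rough sketch of the event buffer described above. Serial numbers are
// monotonically increasing; the HTTP layer itself is left out.
#include <cstdint>
#include <deque>
#include <string>
#include <vector>

struct Event
{
    std::uint64_t serial;    // monotonically increasing event number
    std::string   payload;   // e.g. a Turtle fragment
};

class EventBuffer
{
public:
    // Called when a GET arrives carrying the client's last-seen serial.
    // Fills `out` with every newer buffered event. Returns true if the
    // caller should respond immediately; false means the client is up to
    // date and the request should be parked until the next push().
    bool on_poll(std::uint64_t last_seen, std::vector<Event>& out) const
    {
        out.clear();
        for (const Event& e : recent_)
            if (e.serial > last_seen)
                out.push_back(e);
        return !out.empty();
    }

    // Called when a new event is generated. The caller should then complete
    // any parked requests with this event as one full HTTP response.
    void push(Event e)
    {
        recent_.push_back(std::move(e));
        if (recent_.size() > max_events_)
            recent_.pop_front();
    }

private:
    std::deque<Event> recent_;
    std::size_t       max_events_ = 256;
};
```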
I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem I'm getting is that I'm using WaitForSingleObject(handle, 10000 ms) to make the server wait a few seconds to interact with one client and then let the others access it, but then I start seeing the server answer clients with the wrong message, and it gets blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and replied to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You should use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations open simultaneously, one per client, and wait on them all in a single WaitForMultipleObjects(). To be specific, you will wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects; when it is exceeded, you will need to run another thread. The return value from WaitForMultipleObjects() will tell you which client has sent data, so you can reply to it.
Or, as suggested above, you could probably use select() to figure out which socket has data.
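Here is a rough sketch of the select() variant (Winsock flavour; WSAStartup(), accepting new clients and proper error handling are omitted for brevity):

```cpp
// Rough sketch: multiplex already-connected client sockets with select()
// and reply on whichever socket the data came in on. WSAStartup(),
// accepting new clients and error handling are omitted.
#include <winsock2.h>
#include <vector>

void serve(const std::vector<SOCKET>& clients)
{
    for (;;)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        for (SOCKET s : clients)
            FD_SET(s, &readfds);

        // Blocks until at least one client socket has data to read.
        if (select(0, &readfds, nullptr, nullptr, nullptr) == SOCKET_ERROR)
            break;

        for (SOCKET s : clients)
        {
            if (!FD_ISSET(s, &readfds))
                continue;

            char buf[1024];
            int n = recv(s, buf, sizeof(buf), 0);
            if (n > 0)
            {
                // Answering on the same socket guarantees the reply goes
                // back to the client that actually sent the message.
                send(s, buf, n, 0);
            }
        }
    }
}
```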
If I have a UDP server that handles incoming requests with recvfrom, processes the requests that come in (possibly time-consuming), possibly sends back a response, and then calls recvfrom again, is it better to create a new sock_fd, using the information in the sockaddr* from, to send the response back with, or to use the server's sock_fd to send the response?
Basically, the question is: do I want the overhead of having to create a new sock_fd, or do I want my server to be able to handle requests without having to wait to send the previous request a response?
I can't decide based on the application's needs, because this will be used in a library (hence I don't know whether there will need to be a response or not, and how long it will take to process the request).
There is no need to create a new sock_fd, as the existing one will already have had a bind() call made on it, since it's a server.
Also, you have to ensure that the clients are not waiting for a response in a blocking recvfrom.
Most servers send out an error code if they cannot give a proper response, and the clients repeat the request (or do something else) depending on that error code; maybe you need to design the protocol in a request-response way.
If processing is a problem, then you can always put the data plus the client's struct sockaddr in a queue and defer processing by signalling a thread to wake up. By doing so, your listening thread can get back to recvfrom quickly, and then you can send the response from the processing thread to the saved struct sockaddr of the client when you are finished.
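A rough sketch of that queue-based design (POSIX sockets assumed; process() is a placeholder for the slow part, not a real API):

```cpp
// Rough sketch: the listening thread stays in recvfrom() and hands each
// datagram plus the sender's address to a worker, which replies later on
// the very same socket. POSIX sockets assumed; process() is a placeholder.
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

struct Request
{
    std::vector<char> data;      // the datagram payload
    sockaddr_storage  from;      // who sent it
    socklen_t         from_len;
};

static std::queue<Request>     pending;
static std::mutex              mtx;
static std::condition_variable cv;

std::string process(const std::vector<char>& data);   // placeholder

void listener(int sock)
{
    for (;;)
    {
        Request r;
        r.data.resize(64 * 1024);
        r.from_len = sizeof(r.from);
        ssize_t n = recvfrom(sock, r.data.data(), r.data.size(), 0,
                             reinterpret_cast<sockaddr*>(&r.from), &r.from_len);
        if (n < 0)
            continue;
        r.data.resize(static_cast<std::size_t>(n));

        {
            std::lock_guard<std::mutex> lock(mtx);
            pending.push(std::move(r));
        }
        cv.notify_one();          // wake a worker, go straight back to recvfrom
    }
}

void worker(int sock)
{
    for (;;)
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !pending.empty(); });
        Request r = std::move(pending.front());
        pending.pop();
        lock.unlock();

        std::string reply = process(r.data);           // possibly slow
        sendto(sock, reply.data(), reply.size(), 0,    // same socket, saved addr
               reinterpret_cast<const sockaddr*>(&r.from), r.from_len);
    }
}
```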
do I want the overhead of having to create a new sock_fd
No.
or do I want my server to be able to handle requests without having to wait to send the previous request a response.
Nobody has to wait to send a message over a UDP socket. You can handle every incoming request on a separate thread if you like, and they can all call sendmsg(), simultaneously if necessary.
You definitely only want to use one socket. For one thing, it will mean that the reply will get back to the client with the same source-address information that they sent it to, which will be less confusing all round.
I am currently trying to implement my own webserver in C++ - not for productive use, but for learning.
I basically open a socket, listen, wait for a connection, and open a new socket from which I read the data sent by the client. So far, so good. But how do I know the client has finished sending data and has not simply temporarily stopped sending more for some other reason?
My current example: when the client sends a POST request, it first sends the headers, then "\r\n" two times in a row, and then the request body. Sometimes the body does not contain any data. So if the client is temporarily unable to send anything after it has sent the headers - how do I know it is not yet finished with its request?
Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received, or is there something like an EOF for sockets?
If I cannot get the necessary information from the socket, how do I protect my program from faulty clients? (Which I guess I must do regardless, since it might be an attacker rather than a faulty client sending wrong data.) Is my only option to keep reading until the request is complete by the definition of the protocol, or until a timeout (defined by me) is reached?
I hope this makes sense.
Btw: Please don't tell me to use some library - I want to learn the basics.
The protocol (HTTP) tells you when the client has stopped sending data. You can't get the info from the socket as the client will leave it open waiting for a response.
As you say, you must guard against errant clients not sending proper requests. Typically in the case of an incomplete request a timeout is applied to the read. If you haven't received anything in 30 seconds, say, then close the socket and ignore it.
For an HTTP POST, there should be a header (Content-Length) saying how many bytes to expect after the end of the headers. If it's a POST and there is no Content-Length, then reject it.
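A rough sketch of that logic (read_some() is a placeholder for whatever recv() wrapper you end up writing; the header parsing here is deliberately naive):

```cpp
// Rough sketch: read until the blank line ending the headers, then read
// exactly Content-Length more bytes. read_some() is a placeholder that
// appends recv()'d bytes and returns false on close/timeout.
#include <cstddef>
#include <string>

bool read_some(int sock, std::string& out);   // placeholder, not a real API

bool read_request(int sock, std::string& headers, std::string& body)
{
    std::string data;

    // 1. Accumulate bytes until "\r\n\r\n" marks the end of the headers.
    std::size_t header_end;
    while ((header_end = data.find("\r\n\r\n")) == std::string::npos)
        if (!read_some(sock, data))
            return false;                     // connection closed or timed out

    headers = data.substr(0, header_end + 4);
    body    = data.substr(header_end + 4);

    // 2. Look for Content-Length (case handling, header folding etc. omitted).
    std::size_t content_length = 0;
    std::size_t pos = headers.find("Content-Length:");
    if (pos != std::string::npos)
        content_length = std::stoul(headers.substr(pos + 15));
    else if (headers.rfind("POST", 0) == 0)
        return false;                         // POST without Content-Length: reject

    // 3. Keep reading until the whole body has arrived.
    while (body.size() < content_length)
        if (!read_some(sock, body))
            return false;

    return true;
}
```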
"Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received,"
Correct. You can find the HTTP spec via Google:
http://www.w3.org/Protocols/rfc2616/rfc2616.html
"or is there something like an EOF for sockets?"
There is, as it behaves just like a file ... but that's not applicable here, because the client isn't closing the connection; you're sending the reply ON that connection.
With text-based protocols like HTTP, you are at the mercy of the client. Most well-formed POSTs will have a Content-Length, so you know how much data is coming. However, the client can just delay sending the data, or it may have had its Ethernet cable removed, or it may simply hang, in which case that socket sits there indefinitely. If it disconnects cleanly, then you will get a socket-closed indication from recv().
Most well-designed servers will therefore have a receive timeout, and if the socket is idle for more than, say, 30 seconds, they will close it, so that resources are not leaked by misbehaving clients.
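As a rough sketch of such a timeout (assuming POSIX sockets; on Windows, SO_RCVTIMEO takes a DWORD of milliseconds instead of a timeval):

```cpp
// Rough sketch of a per-socket receive timeout (POSIX; on Windows the
// option value is a DWORD of milliseconds rather than a timeval).
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>

ssize_t recv_with_timeout(int sock, char* buf, size_t len, int seconds)
{
    timeval tv{};
    tv.tv_sec = seconds;               // e.g. 30 seconds of allowed silence
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    // With the timeout set, recv() returns -1 (errno EAGAIN/EWOULDBLOCK)
    // if nothing arrives in time, and 0 if the client closed the connection.
    // In either case the caller should close the socket and move on.
    return recv(sock, buf, len, 0);
}
```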