WinSock recv() end of message - c++

Consider the following scenario:
I have a server and a client, and they communicate in a custom defined application protocol.
A typical conversation is like this: (Assuming a connection between the two)
1. The client sends a message to the server and waits for an acknowledgment (not to be confused with a TCP ACK) that its message has been processed.
2. The server parses the message and, as soon as it reaches the end, sends an acknowledgment back to the client that it has processed its message.
3. The client receives the acknowledgment and can now freely send another message, wait again for an acknowledgment, and so on.
I have run into a problem implementing this. I loop on recv() while parsing the message, but as soon as recv() has nothing more to return it blocks, so my code has no way to conclude that it has received the whole message and should send the acknowledgment.
I've managed to work around this by sending a message size in the header of the message and counting whether I've read that many bytes each time, but this is not ideal; what if the client machine has a bug and sends an incorrect size?
Also, I've read in another question that this whole acknowledgment scheme is unnecessary when using TCP, and that TCP already does it for you; but isn't the handshake only done once per connection? What if I never close the connection between the two? Then only the one handshake at the creation of the connection would have happened.
Do I have to create a new connection every time I want to send something? And isn't the handshake done before any data is sent, only when making the connection? So this might not be what I'm looking for in my case.
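One common way to address both concerns (knowing where a message ends, and guarding against a bogus size from a buggy client) is to keep the length header but validate it against a sanity limit. A minimal sketch, assuming a 4-byte big-endian length header and an illustrative 1 MiB cap (both are assumptions, not part of the question's protocol):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <stdexcept>
#include <string>

// Illustrative sanity limit: reject any length header larger than this.
constexpr std::uint32_t kMaxMsg = 1 << 20;  // 1 MiB (assumption)

// try_extract() pulls one complete message out of `buf` (the bytes
// accumulated from recv() so far), or returns nullopt if more data is
// needed. The caller keeps appending recv() results to `buf` and calling
// this until it yields a message -- no blocking on "end of message".
std::optional<std::string> try_extract(std::string& buf) {
    if (buf.size() < 4) return std::nullopt;           // header incomplete
    std::uint32_t len = (std::uint32_t(std::uint8_t(buf[0])) << 24) |
                        (std::uint32_t(std::uint8_t(buf[1])) << 16) |
                        (std::uint32_t(std::uint8_t(buf[2])) << 8)  |
                         std::uint32_t(std::uint8_t(buf[3]));
    if (len > kMaxMsg)                                  // bogus size: drop client
        throw std::runtime_error("bad length header");
    if (buf.size() < 4 + len) return std::nullopt;      // body incomplete
    std::string msg = buf.substr(4, len);
    buf.erase(0, 4 + len);                              // keep trailing bytes
    return msg;
}
```

The sanity check doesn't make a buggy client impossible, but it bounds the damage: a nonsense length fails fast instead of making the server wait forever for bytes that never come.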

Related

c++ Winsock send,recv -how they work

I'm new to network programming and I'm trying to understand how functions like send and recv work under the hood in a TCP connection. I know that in a connection between a client and a server, for example, when the client sends a message to the server, the message is split into packets, and on arrival the server side checks whether the received data matches what was sent; if it is OK it sends an acknowledgment back to the client, and if a problem appears the client resends the data.
What I don't understand is this: if you send a message from the client and put the server to sleep for 10 seconds, you can still do whatever you want in the client, as if the send function were executing in another thread; and if you call send multiple times during those 10 seconds, the data arrives as a combination of the messages sent in that time.
If anyone can explain the situation, I'll be very grateful!
This is implemented by the TCP/IP networking stack of your operating system.
The TCP/IP stack ...
provides a send buffer. When your program sends, the OS first fills internal buffers. Your app can send immediately until the buffers are full; then your send will block.
takes data from the internal buffer and sends it out onto the network in single packets.
receives data over the network and fills internal receive buffers with that data.
gives your program the data from the internal buffers when you call receive.
takes care of the TCP/IP protocol stuff like establishing connections, acknowledging received data, resending data if no receive acknowledge was received.
In the scenario you describe, the client is filling the sending OS's send buffer and the receiving OS's receive buffer. Your client can keep sending without blocking until both buffers are full; then it will block until the server calls recv again.
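The "combination of messages" behaviour above falls out of this buffering: a stream socket preserves bytes, not message boundaries. A small POSIX sketch (assumption: Linux/Unix; on Windows a loopback TCP pair would play the same role) where two send() calls sit in the kernel buffer before the reader calls recv():

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <string>

// Two send() calls are made before the reader ever calls recv(), so both
// payloads are already queued in the kernel buffer. recv() then hands back
// the buffered bytes with no marker between them -- often a single recv()
// returns "msg1msg2".
std::string demo() {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return "";
    send(fds[0], "msg1", 4, 0);
    send(fds[0], "msg2", 4, 0);
    char buf[64];
    std::string got;
    while (got.size() < 8) {            // loop until all 8 bytes arrived
        ssize_t n = recv(fds[1], buf, sizeof buf, 0);
        if (n <= 0) break;
        got.append(buf, std::size_t(n));
    }
    close(fds[0]);
    close(fds[1]);
    return got;                         // bytes arrive in order, boundaries lost
}
```

This is exactly why application protocols need their own framing (length prefix, delimiter, etc.) on top of TCP.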

Websocket and reception of a message

I'm trying to build a message server over WebSocket using Boost.
Currently I often send large messages, or series of messages, from the server; when I hit "send", it sends tons of data.
The difficulty is that when the server receives a command such as "Stop" or "Pause" in a WebSocket message, the previous command keeps running until the end of the previous message. I'm trying to stop the execution of the previous command.
I tried reading the buffer between sends of data, but it does not work. I'm trying to check whether a command has been received using async_read_some.
I based my code on the example at
http://www.codeproject.com/Articles/443660/Building-a-basic-HTML5-client-server-application
and on the Boost HTTP server examples:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/examples.html
Do you have any idea? I've reworked my code several times, but I cannot get the new command to execute in real time; it only takes effect at the end.
Thank you.
If the data has already been handed to the network adapter, there is very little you can do to alter the order of packets; the network adapter will send them as and when it gets round to it, in the order you've queued them.
If you want to be able to send "higher priority" messages, then don't send off all the data in one go, but hold it in a queue waiting for the device to accept more data, and if a high priority message comes in, send that before you send any of the other packets off.
Don't make the packets TOO small, but I guess if you make packets that are a few kilobytes or so at a time, it will work quite swiftly and still allow good control over the flow.
Of course, this also requires that the receiver understands that there may be different "flows" or "streams" of information, and that a "pause" command means the previously sent stream will not receive anything until "resume" is sent. Adjust this as needed for the behaviour you need, but you do need some way to interpret "STOP" as a command rather than just more data in the flow.
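The queueing idea above can be sketched with a simple double-ended queue (names are illustrative, not from any WebSocket library): data chunks go to the back, control frames such as "STOP" jump to the front so they reach the wire before any remaining data.

```cpp
#include <cassert>
#include <deque>
#include <string>

// Minimal priority send queue: whatever pop() returns next is the next
// frame written to the socket. Control frames bypass queued data chunks.
struct SendQueue {
    std::deque<std::string> q;
    void push_data(std::string frame)    { q.push_back(std::move(frame)); }
    void push_control(std::string frame) { q.push_front(std::move(frame)); }
    std::string pop() {
        std::string f = q.front();
        q.pop_front();
        return f;
    }
    bool empty() const { return q.empty(); }
};
```

In an asio-based server, the write handler would pop() the next frame each time the previous async write completes, so a control frame pushed mid-transfer overtakes the unsent chunks.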
If you send a large message over the network as a single packet, the stop message only arrives after all of that data, so by the time the server receives the stop message you have no control; you must finish receiving the data first.
It's better to implement a priority message queue. Send the message as small chunks from the client and reassemble them at the server, instead of sending a single large packet. Give control packets like stop (cancel) high priority. While receiving messages at the server end, if a high-priority message such as stop (cancel) arrives, you don't need to accept the remaining messages; you can close the WebSocket connection at the server.
Read the thread Chunking WebSocket Transmission for more info.
As you are using Boost, have you looked at WebSocket++ (Boost/ASIO based)?

Reusing sock_fd For UDP Server Response vs New sock_fd

If I have a UDP server that handles incoming requests with recvfrom, processes the requests that come in (possibly time consuming), possibly sends back a response, and then calls recvfrom again, is it better to create a new sock_fd with the information in sockaddr* from to send the response back with or to use the server's sock_fd to send a response?
Basically, the question is: do I want the overhead of creating a new sock_fd, or do I want my server to be able to handle requests without having to wait until it has sent a response to the previous request?
I can't decide based on the application's needs, because this will be used in a library (hence I don't know whether there will need to be a response or not, and how long it will take to process the request).
There is no need to create a new sock_fd; the existing one has already had bind called on it, since it's a server.
Also, you have to ensure that the clients are not waiting for a response in a blocking recvfrom.
Most servers send out an error code if they cannot give a proper response, and the clients repeat the request or act on that error code; maybe you need to design the protocol in a request-response way.
If processing time is a problem, you can always put the data plus the client's struct sockaddr in a queue and defer processing by signalling a worker thread to wake up. That way your listening thread gets back to recvfrom quickly, and the processing thread can send the response to the saved struct sockaddr of the client when it is finished.
do I want the overhead of having to create a new sock_fd
No.
or do I want my server to be able to handle requests without having to wait to send the previous request a response.
Nobody has to wait to send a message over a UDP socket. You can handle every incoming request on a separate thread if you like, and they can all call sendmsg(), simultaneously if necessary.
You definitely only want to use one socket. For one thing, it will mean that the reply will get back to the client with the same source-address information that they sent it to, which will be less confusing all round.
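The one-socket pattern described above can be sketched as follows (POSIX; Winsock differs mainly in setup/teardown such as WSAStartup and closesocket, and error handling is omitted for brevity). The server replies with sendto() on the same fd, using the sockaddr that recvfrom() filled in; a loopback round trip keeps the sketch self-contained:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cassert>
#include <string>

std::string udp_roundtrip() {
    // Server socket, bound to an OS-chosen port on loopback.
    int srv = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                              // let the OS pick a port
    bind(srv, (sockaddr*)&addr, sizeof addr);
    socklen_t alen = sizeof addr;
    getsockname(srv, (sockaddr*)&addr, &alen);      // learn the chosen port

    // Client sends a request.
    int cli = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(cli, "ping", 4, 0, (sockaddr*)&addr, sizeof addr);

    // Server receives it, then replies ON THE SAME SOCKET using `from`.
    char buf[64];
    sockaddr_in from{};
    socklen_t flen = sizeof from;
    recvfrom(srv, buf, sizeof buf, 0, (sockaddr*)&from, &flen);
    sendto(srv, "pong", 4, 0, (sockaddr*)&from, flen);

    ssize_t n = recv(cli, buf, sizeof buf, 0);
    close(srv);
    close(cli);
    return std::string(buf, n > 0 ? std::size_t(n) : 0);
}
```

Because the reply goes out of the same bound socket, its source address is the one the client sent to, so the client's recv sees the answer it expects.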

Sockets in Linux - how do I know the client has finished?

I am currently trying to implement my own webserver in C++ - not for productive use, but for learning.
I basically open a socket, listen, wait for a connection and open a new socket from which I read the data sent by the client. So far so good. But how do I know the client has finished sending data and not simply temporarily stopped sending more because of some other reason?
My current example: When the client sends a POST-request, it first sends the headers, then two times "\r\n" in a row and then the request body. Sometimes the body does not contain any data. So if the client is temporarily unable to send anything after it sent the headers - how do I know it is not yet finished with its request?
Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received, or is there something like an EOF for sockets?
If I cannot get the necessary Information from the socket, how do I protect my program from faulty clients? (Which I guess I must do regardless of this, since it might be an attacker and not a faulty client sending wrong data.) Is my only option to keep reading until the request is complete by definition of the protocol or a timeout (defined by me) is reached?
I hope this makes sense.
Btw: Please don't tell me to use some library - I want to learn the basics.
The protocol (HTTP) tells you when the client has stopped sending data. You can't get the info from the socket as the client will leave it open waiting for a response.
As you say, you must guard against errant clients not sending proper requests. Typically in the case of an incomplete request a timeout is applied to the read. If you haven't received anything in 30 seconds, say, then close the socket and ignore it.
For an HTTP POST, there should be a header (Content-Length) saying how many bytes to expect after the end of the headers. If it's a POST and there is no Content-Length, then reject it.
"Does this solely depend on the used protocol (HTTP) and it is my task to find this out on the basis of the data I received,"
Correct. You can find the HTTP spec via Google:
http://www.w3.org/Protocols/rfc2616/rfc2616.html
"or is there something like an EOF for sockets?"
There is, since a socket behaves much like a file, but that's not applicable here because the client isn't closing the connection; you're sending the reply on that same connection.
With text-based protocols like HTTP you are at the mercy of the client. Most well-formed POSTs will have a Content-Length, so you know how much data is coming. However, the client can simply delay sending the data, or it may have had its Ethernet cable removed, or it may hang, in which case that socket sits there indefinitely. If it disconnects cleanly, recv() will report the socket was closed.
Most well-designed servers therefore have a receive timeout: if the socket is idle for more than, say, 30 seconds, they close it, so resources are not leaked by misbehaving clients.
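The Content-Length rule described above boils down to a completeness check over the bytes received so far. A deliberately simplified sketch (a real server must also handle chunked transfer encoding, case-insensitive header names, header size limits, and the read timeout mentioned above):

```cpp
#include <cassert>
#include <string>

// Returns true once `buf` (everything recv()'d so far) holds a complete
// request: the blank line ending the headers has been seen, plus
// Content-Length more bytes of body (0 if the header is absent).
bool request_complete(const std::string& buf) {
    std::size_t hdr_end = buf.find("\r\n\r\n");
    if (hdr_end == std::string::npos) return false;   // headers not done yet
    std::size_t body_start = hdr_end + 4;
    std::size_t cl = buf.find("Content-Length:");
    std::size_t body_len = 0;
    if (cl != std::string::npos && cl < hdr_end)
        body_len = std::stoul(buf.substr(cl + 15));    // skips leading spaces
    return buf.size() - body_start >= body_len;        // whole body arrived?
}
```

The server keeps appending recv() results to the buffer and calls this after each read; if it stays false past the timeout, it drops the connection.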

internal working of the recv socket api

I am working on a TCP client-server application using C++; third-party libs are not allowed in this project.
The exchange between client and server uses a well-defined protocol format. Once the client receives a packet it sends it off for parsing; I have a protocol manager which takes care of the parsing activity.
I have following doubt
When data arrives at the client from the network, the OS buffers it until the application calls the recv() function. So if two messages, msg1 and msg2, arrive in the buffer, a call to recv() may return msg1+msg2 combined, and this may cause the parsing to fail.
My queries
1. Is the above assumption correct?
2. If it is, how can I resolve this issue?
TCP emulates a stream, so in TCP there is no notion of messages. If you want messages, your application has to have some protocol to separate them.
UDP does have messages (datagrams), so there it is possible to retrieve separate messages.
You can use a LV protocol (Length-Value) for your message.
Encode the message length in the first 1-4 bytes, then append the message itself.
Something like this, for example: 14"Hello world\0"
In your server, when a client sends something, you first recv() the 1-4 length bytes, then recv() exactly that many more bytes of message.
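The length-value framing suggested above might be sketched like this, assuming a 4-byte length prefix in network (big-endian) byte order; encode_frame() is what the sender writes, and decode_len() is what the receiver applies to the first 4 bytes before recv()ing exactly that many more:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Prepend a 4-byte big-endian length to the payload.
std::string encode_frame(const std::string& msg) {
    std::uint32_t len = std::uint32_t(msg.size());
    std::string frame;
    frame.push_back(char((len >> 24) & 0xff));
    frame.push_back(char((len >> 16) & 0xff));
    frame.push_back(char((len >> 8) & 0xff));
    frame.push_back(char(len & 0xff));
    return frame + msg;
}

// Decode the length from the first 4 received bytes.
std::uint32_t decode_len(const std::string& hdr) {
    return (std::uint32_t(std::uint8_t(hdr[0])) << 24) |
           (std::uint32_t(std::uint8_t(hdr[1])) << 16) |
           (std::uint32_t(std::uint8_t(hdr[2])) << 8)  |
            std::uint32_t(std::uint8_t(hdr[3]));
}
```

A fixed binary prefix in network byte order avoids the ambiguity of an ASCII length (where you'd need a delimiter to know where the number ends) and works regardless of the endianness of either machine.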