WebSocket and reception of a message - C++

I am trying to build a message server over WebSocket using Boost.
Currently I can send large messages, or series of messages, from the server. When I hit "send", it sends a lot of data.
The difficulty is that when the server receives a command such as "Stop" or "Pause" in a WebSocket message, that command is not acted on until the previous message has finished sending. I want to stop the execution of the previous command.
I tried reading the buffer between sends, but it does not work. I also tried checking for incoming commands with async_read_some.
I based my code on this example:
http://www.codeproject.com/Articles/443660/Building-a-basic-HTML5-client-server-application
and on the Boost HTTP server examples:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/examples.html
Do you have any ideas? I have reworked my code several times, but I cannot get the new command to be handled in real time; it is only processed at the end.
Thank you.

If the data has already been sent to the network adapter, there is very little you can do to alter the order of packets. The network adapter will send the packets as and when it gets round to it, in the order you've queued them.
If you want to be able to send "higher priority" messages, then don't send off all the data in one go; hold it in a queue waiting for the device to accept more data, and if a high-priority message comes in, send that before you send any of the other packets.
Don't make the packets too small, but if you make them a few kilobytes or so at a time, it will work quite swiftly and still allow good control over the flow.
Of course, this also requires that the receiver understands that there may be different 'flows' or 'streams' of information, and that if you send a 'pause' command, the previously sent stream will not receive anything until 'resume' is sent. Adjust this as needed for the behaviour you want, but you do need some way to interpret 'STOP' as a command rather than just more data in the flow.
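A minimal sketch of such an outbound queue (plain C++, not tied to any particular WebSocket library; the write_to_socket callback is a stand-in for whatever send call you actually use):

```cpp
#include <deque>
#include <functional>
#include <string>

// Outbound queue that lets high-priority control messages (e.g. "Stop",
// "Pause") jump ahead of the remaining data chunks of a large transfer.
class SendQueue {
public:
    explicit SendQueue(std::function<void(const std::string&)> write_to_socket)
        : write_(std::move(write_to_socket)) {}

    // Queue an ordinary data chunk (a few kilobytes at a time).
    void push_chunk(std::string chunk) { queue_.push_back(std::move(chunk)); }

    // Queue a control message ahead of all pending data chunks.
    void push_priority(std::string msg) { queue_.push_front(std::move(msg)); }

    // Call this each time the previous write has completed and the socket
    // is ready to accept more data.
    void send_next() {
        if (queue_.empty()) return;
        std::string next = std::move(queue_.front());
        queue_.pop_front();
        write_(next);   // hand exactly one chunk to the socket layer
    }

    bool empty() const { return queue_.empty(); }

private:
    std::deque<std::string> queue_;
    std::function<void(const std::string&)> write_;
};
```

The point is that only one chunk is handed to the socket at a time, so a control message queued with push_priority goes out as soon as the current chunk finishes, rather than after the entire transfer.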

If you send a large message over the network as a single packet, then by the time the server receives the stop message it is still in the middle of receiving all the data, and you cannot act on the stop until the whole receive completes.
It's better to implement a priority message queue. Send the message as small chunks from the client and reassemble them on the server, instead of sending one large packet. Give message packets like stop (cancel) high priority. While receiving messages at the server end, if a high-priority message such as stop (cancel) arrives, you don't need to accept the remaining messages; you can close the WebSocket connection at the server.
Read the thread Chunking WebSocket Transmission for more info.

As you are using Boost, have you looked at WebSocket++ (Boost/ASIO based)?

Related

How to keep a HTTP long-polling connection open?

I want to implement long polling in a web service. I can set a sufficiently long time-out on the client. Can I give a hint to intermediate networking components to keep the response open? I mean NATs, virus scanners, reverse proxies or surrounding SSH tunnels that may be between the client and the server and that are not under my control.
A download may last for hours, but an idle connection may be terminated in less than a minute. This is what I want to prevent. Can I inform the intermediate network that the connection is intentionally idle, and not idle because the server has disconnected?
If so, how? I have been searching for around four hours now, but I can't find information on this.
Should I send 200 OK, maybe some headers, and then nothing?
Do I have to respond with 102 Processing instead of 200 OK, and is everything fine then?
Should I send 0x16 (synchronous idle) bytes every now and then? If so, before or after the initial HTTP status code, before or after the headers? Do they end up in the transferred file, and might they break it?
The web service / server is in C++ using Boost and the content file being returned is in Turtle syntax.
You can't force proxies to extend their idle timeouts, at least not without having administrative access to them.
The good news is that you can design your long polling solution in such a way that it can recover from a connection being suddenly closed.
One such design would be as follows:
Since long polling is normally used for event notifications (think the Observer pattern), you associate a serial number with each event.
The client makes a GET request carrying the serial number of the last event it has seen, either as part of the URL or in a cookie.
The server maintains a buffer of recent events. Upon receiving a GET request from the client, it checks if any of the buffered events need to be sent to the client, based on their serial numbers and the serial number provided by the client. If so, all such events are sent in one HTTP response. The response finishes at that point, in case there is a proxy that wants to buffer the whole response before relaying it further.
If the client is up to date, that is, it hasn't missed any of the buffered events, the server delays its response until another event is generated. When that happens, it is sent as one complete HTTP response.
When the client receives a response, it immediately sends a new request. When it detects that the connection was closed, it creates a new connection and makes a new request.
When using cookies to convey the serial number of the last event seen by the client, the client side implementation becomes really simple. Essentially you just enable cookies on the client side and that's it.
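A minimal sketch of the server-side bookkeeping behind this design (plain C++, assuming one blocking handler thread per request; the HTTP plumbing and cookie parsing are left out):

```cpp
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Buffers recent events and lets a long-polling request wait until there is
// something newer than the serial number the client last saw.
class EventBuffer {
public:
    void publish(std::string payload) {
        std::lock_guard<std::mutex> lock(mutex_);
        events_.push_back({++last_serial_, std::move(payload)});
        if (events_.size() > kMaxBuffered) events_.pop_front();  // forget oldest
        new_event_.notify_all();
    }

    // Returns all buffered events with serial > last_seen, waiting up to
    // 'timeout' if the client is already up to date. An empty result means
    // "no new events"; the client just polls again.
    std::vector<std::pair<std::uint64_t, std::string>>
    wait_for_events(std::uint64_t last_seen, std::chrono::seconds timeout) {
        std::unique_lock<std::mutex> lock(mutex_);
        new_event_.wait_for(lock, timeout,
                            [&] { return last_serial_ > last_seen; });
        std::vector<std::pair<std::uint64_t, std::string>> out;
        for (const auto& e : events_)
            if (e.first > last_seen) out.push_back(e);
        return out;
    }

private:
    static constexpr std::size_t kMaxBuffered = 256;
    std::mutex mutex_;
    std::condition_variable new_event_;
    std::deque<std::pair<std::uint64_t, std::string>> events_;
    std::uint64_t last_serial_ = 0;
};
```

The request handler would read the last-seen serial from the cookie or URL, call wait_for_events, and write whatever comes back as one complete HTTP response; if an intermediary drops the idle connection in the meantime, the client simply reconnects and asks again with the same serial number, so nothing is lost.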

WinSock recv() end of message

Consider the following scenario:
I have a server and a client, and they communicate using a custom application protocol.
A typical conversation goes like this (assuming a connection already exists between the two):
The client sends a message to the server and waits for an acknowledgment (not to be confused with TCP's acknowledgments) that its message has been processed.
The server parses the message and, as soon as it reaches the end, sends an acknowledgment back to the client that it has processed its message.
The client gets the acknowledgment and can now freely send another message, wait for an acknowledgment again, and so on.
I have run into a problem implementing this. I am calling recv() in a loop and parsing the message, but as soon as recv() has nothing more to return it blocks, and my code cannot conclude that it has received the whole message and send the acknowledgment.
I've managed to work around this by sending a message size in the header of the message and counting whether I've read as many bytes as the message size says, but this is not ideal: what if the client machine has a bug and sends an incorrect size?
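For reference, a minimal sketch of that length-prefix approach (Winsock, blocking sockets; MAX_MESSAGE_SIZE is an arbitrary sanity cap, which is also the usual defence against a peer declaring a bogus size):

```cpp
#include <winsock2.h>
#include <cstdint>
#include <stdexcept>
#include <string>

static const std::uint32_t MAX_MESSAGE_SIZE = 1024 * 1024;  // 1 MiB sanity cap

// Reads exactly 'len' bytes, looping because recv() may return fewer.
static void recv_exact(SOCKET s, char* buf, int len) {
    int total = 0;
    while (total < len) {
        int n = recv(s, buf + total, len - total, 0);
        if (n == 0) throw std::runtime_error("peer closed the connection");
        if (n < 0)  throw std::runtime_error("recv failed");
        total += n;
    }
}

// Reads one length-prefixed message: 4-byte network-order length, then payload.
std::string recv_message(SOCKET s) {
    std::uint32_t netlen = 0;
    recv_exact(s, reinterpret_cast<char*>(&netlen), sizeof(netlen));
    std::uint32_t len = ntohl(netlen);
    if (len > MAX_MESSAGE_SIZE)
        throw std::runtime_error("declared message size is implausible");
    std::string msg(len, '\0');
    if (len > 0) recv_exact(s, &msg[0], static_cast<int>(len));
    return msg;
}
```

The length prefix is the standard way to delimit messages on a TCP stream; the cap does not make the protocol trust the client, it just bounds how much damage a buggy or malicious length value can do.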
Also, I've read in another question that this whole acknowledgment process I'm doing is not necessary with TCP, and that TCP already does that for you; but isn't the handshake done only once per connection? What if I don't close the connection between the two? Then only one handshake, at the creation of the connection, would have happened.
Do I have to create a new connection every time I want to send something? And isn't the handshake done before any data is sent, only when making the connection? So this might not be what I'm looking for in my case.

C++ Winsock send, recv - how they work

I'm new to network programming and I'm trying to understand how functions like send and recv work under the hood in a TCP connection. I know that in a connection between a client and a server, for example, when the client sends a message to the server, the message is split into different packets, and on arrival the server side checks whether the sum of the packets matches what was sent; if everything is OK it sends a message back to the client as an acknowledgment. If a problem appears, the client resends the message.
What I don't understand is this: if you send a message from the client and you sleep the server for 10 seconds, you can still do whatever you want in the client, as if the send function were executing in another thread; and if you call send multiple times during those 10 seconds, the message arrives as a combination of the messages sent in that time.
If anyone can explain the situation, I'll be very grateful!
This is implemented by the TCP/IP networking stack of your operating system.
The TCP/IP stack ...
provides a send buffer. When your program sends, the OS first fills internal buffers. Your app can send immediately until the buffers are full. Then your send will block.
takes data from the internal buffer and sends it out onto the network in single packets.
receives data over the network and fills internal receive buffers with that data.
gives your program the data from the internal buffers when you call receive.
takes care of the TCP/IP protocol stuff like establishing connections, acknowledging received data, resending data if no receive acknowledge was received.
In the case you describe, the client is filling the sending OS's send buffer and the receiving OS's receive buffer. Your client can keep sending without blocking until both buffers are full; then send will block until the server calls recv again.
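A small sketch illustrating that buffering behaviour (Winsock, blocking socket; it assumes the server side is asleep and not calling recv):

```cpp
#include <winsock2.h>
#include <cstdio>
#include <cstring>

// Keeps calling send() with 4 KB blocks. While the local send buffer and the
// peer's receive buffer still have room, each call returns almost instantly,
// even though the server thread is asleep and not reading anything.
// Once both buffers are full, send() blocks until the server drains some data.
void flood(SOCKET s) {
    char block[4096];
    std::memset(block, 'x', sizeof(block));
    long long total = 0;
    for (;;) {
        int n = send(s, block, sizeof(block), 0);   // may block here
        if (n == SOCKET_ERROR) {
            std::printf("send failed after %lld bytes, error %d\n",
                        total, WSAGetLastError());
            return;
        }
        total += n;
        std::printf("queued %lld bytes so far\n", total);
    }
}
```

This also explains the "combined messages" you observed: TCP is a byte stream, so several send() calls on the client can arrive as a single recv() on the server, and your application protocol has to define the message boundaries itself.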

Best way to send data from client to server

What is the best way to handle data that needs to be sent to the server? I have a multi-threaded client, and every thread has data that needs to be sent to the server. But when I run the server, there are sometimes packets that are sent at the same time, so the data is not correct at that point.
I thought: let's make a stack that gets sent to the server every x ms. Is this a good way to do it?
You can use a message queue structure. There will be only one queue in the server, and every time a message arrives it is added to the end of the queue; therefore, even if messages are sent at the same time, they will be ordered. Then process the messages by dequeuing them. There are many open-source message queue implementations you can use, so you do not have to build one from scratch.
In this structure you do not have to wait x ms before sending the data to the server, which will make your system faster.
Hope it helps.
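A minimal sketch of such a queue (standard C++, with the networking code left out): the receiving threads push, and a single worker thread pops.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Thread-safe FIFO: receive threads push incoming messages, a single worker
// thread pops them, so messages are processed one at a time and in order.
class MessageQueue {
public:
    void push(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        not_empty_.notify_one();
    }

    std::string pop() {                       // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [&] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_empty_;
    std::queue<std::string> queue_;
};
```

Because a single thread drains the queue, the messages are processed in arrival order no matter how many client threads sent them simultaneously.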
Open one socket per client thread. That way the server can tell which thread the data comes from, and everything is kept in order.

Is it normal for WSASend to fail during big file transfers?

I need a little help if someone's got a minute.
I've written a web server using IO completion ports, but I am having some trouble sending out large files. Web pages seem to load fine, but during large file transfers, WSASend() fails after a few minutes with error "The specified network name is no longer available."
Right now, my server just closes the associated connection when any overlapped operation fails. Is this the right thing to do? or should I retry failed overlapped operations a few times before I close the socket? I am using tcp/stream sockets.
(fixed) I am also receiving what seems like random 0 byte packets from WSARecv. I am not sure what to make of this, or if the problem is related.(/fixed)
Thanks for any help
edit: now that the server properly handles connections, and has a much more comprehensive log, it seems like Len is right. The client is closing the connection for some reason.
The log:
Initializing Windows Sockets...
Forwarding port 80...
Starting server...
Waiting for incoming connections...
Socket 1128: Client connected.
Socket 1128: Request received
Socket 1128: Sent response
Socket 1128: Error 64: SendChunk() failed. //WSASend()
Socket 1128: Closing connection - GetQueueCompletionStatus == FALSE
So the question now is: why would the client close the connection? It takes anywhere from 2 to 5 minutes to happen. I have decreased the buffer size to 4098 bytes per send, and I only send the next chunk when the previous one has completed.
Thanks again for any ideas on this.
p.s. I even just implemented a retry function so that it will retry a failed overlapped IO operation five times before giving up....still no luck =(
A zero-length read returned from recv indicates that the client on the other end has closed the connection, which explains why your subsequent send to the client failed.
http://www.opengroup.org/onlinepubs/009695399/functions/recv.html
"If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0."
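In an IOCP server the check usually looks something like the sketch below (hypothetical function and variable names):

```cpp
#include <winsock2.h>

// Called from the completion loop when a WSARecv completion is dequeued.
// A successful completion that transferred zero bytes means the peer has
// performed an orderly shutdown, so any further send to this socket will fail.
void on_recv_complete(SOCKET s, DWORD bytesTransferred) {
    if (bytesTransferred == 0) {
        shutdown(s, SD_SEND);   // we will not send anything more
        closesocket(s);         // release the connection
        return;
    }
    // Otherwise: hand bytesTransferred bytes to the request parser
    // and post the next WSARecv.
}
```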
Are you doing anything to impose some form of flow control on your data transmission?
If not then you are probably using up resources which is causing the send to fail.
For example, if you are simply issuing LOTS of WSASend() calls one after the other, rather than pacing them based on when they complete, then each one will use system resources (non-paged pool and/or locked pages, which count towards the 'locked pages limit'). You'll then likely eventually fail with ENOBUFS or similar errors.
What you need to do is build a flow control system that works off of the send completions so that you only ever have a known number of sends outstanding at a time.
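A minimal sketch of that idea (plain C++, not tied to a particular IOCP framework; post_wsasend stands in for whatever function actually issues the overlapped WSASend, and access would need to be serialised per connection if completions arrive on multiple threads):

```cpp
#include <deque>
#include <functional>
#include <string>

// Only ever keeps a fixed number of overlapped sends outstanding; further
// chunks wait in a queue until a completion frees a slot, so non-paged pool
// and locked pages stay bounded no matter how large the file is.
class SendThrottle {
public:
    SendThrottle(std::function<void(const std::string&)> post_wsasend,
                 size_t max_outstanding)
        : post_(std::move(post_wsasend)), max_outstanding_(max_outstanding) {}

    // Queue another chunk of the file for sending.
    void queue(std::string chunk) {
        pending_.push_back(std::move(chunk));
        pump();
    }

    // Call this from the IOCP completion handler for each finished WSASend.
    void on_send_complete() {
        --outstanding_;
        pump();
    }

private:
    void pump() {
        while (outstanding_ < max_outstanding_ && !pending_.empty()) {
            ++outstanding_;
            std::string chunk = std::move(pending_.front());
            pending_.pop_front();
            post_(chunk);   // issue the actual overlapped send
        }
    }

    std::function<void(const std::string&)> post_;
    size_t max_outstanding_;
    size_t outstanding_ = 0;
    std::deque<std::string> pending_;
};
```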
See these questions for more detail:
Implement a good performing "to-send" queue with TCP
Limiting TCP sends with a "to-be-sent" queue and other design issues
Finally figured it out.
from Rogers Internet Terms of Service:
Without limitation, you may not use (or allow anyone else to use) our Services to:
(xvi) operate a server in connection with the Services, including, without limitation, mail, news, file, gopher, telnet, chat, Web, or host configuration servers, multimedia streamers or multi-user interactive forums;
how lame is that? O_o
good news: server works fine =)
edit- called Rogers. They verified that they are cutting me off, and told me that I need a business account to run a web server.