(websocket) Golang synchronized data locking failure - Broken Pipe - concurrency

I'm currently trying to write a simple websocket server that broadcasts the full client list to every connected client whenever somebody joins or leaves the connection pool.
Currently I'm using (R)Lock and Unlock to make sure concurrent access is serialized and to avoid any interference between the connections.
Whenever I access the connection pool I lock it, for both reading and writing, but for some strange reason I get a Broken Pipe error ONLY when I brute-force the websocket by opening 100 concurrent connections at once and then ending them all.
From the looks of it, the error occurs right after removing a client and broadcasting the new client list.
Could you figure out why it fails to send the connection pool to each client when somebody loses a connection? Keep in mind that this only happens when I brute-force the websocket by creating 100 connections in a for loop and then ending them all.
I added a note where it fails.
As a side note, if you have a better way to send the connection pool other than looping through it and storing the UUIDs in a string array, feel free to tell me about that as well, but right now I'm mainly focused on debugging this problem, as I'd like to figure out where I'm going wrong.
Edit: Forgot to add source:
Websocket source: https://gist.github.com/anonymous/eaaf2e5430ed694bc424
Stress Test source: https://gist.github.com/anonymous/92ad79ffee1afdfd3382

So it turns out that the error only seems to be a problem when catching it in broadcastMessage (refer to the websocket source).
As you can see, I caught the error returned by WriteMessage in the broadcastMessage function.
I'm not entirely sure why it isn't a problem when the error isn't caught, but I'll create another question about that.
Thanks anyway to those who chose to read about the problem I was having!
For those who are interested, here is the post: https://stackoverflow.com/questions/26235760/golang-websocket-broken-pipe-error-only-when-catching-sending-message-to-clien

Related

Client doesn't detect Server disconnection

In my application (C++) I have a service exposed as:
rpc foo(stream Request) returns (Reply) { }
The issue is that when the server goes down (Ctrl-C) the stream on the client side keeps going; indeed,
grpc::ClientWriter::Write
doesn't return false. I can confirm with netstat that there is no connection between the client and the server (apart from a TIME_WAIT one that goes away after a while), yet the client keeps calling Write without errors.
Is there a way to check whether the underlying connection is still up instead of relying on the Write return value? I use grpc version 1.12.
Update:
I discovered that the underlying channel goes into the IDLE state, but ClientWriter::Write still doesn't report an error; I don't know if this is intended. During streaming I now try to re-establish a connection to the server whenever the channel state is not GRPC_CHANNEL_READY.
This could happen in a few scenarios but the most common element is a connection issue. We have KEEPALIVE support in gRPC to tackle exactly this issue. For C++, please refer to https://github.com/grpc/grpc/blob/master/doc/keepalive.md on how to set this up. Essentially, endpoints would send pings at certain intervals and expect a reply within a certain timeframe.
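As a minimal illustration only (the channel argument names come from the keepalive documentation linked above, while the target address and the interval values here are made-up placeholders), client-side keepalive in C++ might be configured roughly like this:

    #include <grpcpp/grpcpp.h>

    int main() {
        grpc::ChannelArguments args;
        // Send a keepalive ping every 10 seconds (example value only).
        args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
        // Treat the connection as dead if a ping is not acknowledged within 5 seconds.
        args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 5000);
        // Allow keepalive pings even when there are no active RPCs on the channel.
        args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);

        auto channel = grpc::CreateCustomChannel(
            "localhost:50051",                        // placeholder server address
            grpc::InsecureChannelCredentials(), args);

        // With keepalive enabled, a vanished server should eventually surface as a
        // failed Write()/Finish() on the streaming call instead of silently hanging.
        (void)channel;
    }

With settings like these, missed ping acknowledgements are what eventually break the call, so the streaming client no longer depends solely on the Write return value to notice the outage.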

Design a multi-client/server application where clients send messages infrequently

I have to design a server which is able to send the same objects to many clients. Clients may send requests to the server if they want to update something in the database.
Things which are confusing me:
1. My server should start the program (where I perform some operations and produce the 'results' that will be sent to the clients).
2. My server should listen for incoming connections from clients; if there are any, it should accept them and start sending the 'results'.
3. The server should accept as many clients as possible (not more than 100).
4. My 'results' should be secured. I don't want someone to take my 'results' and see what my program logic looks like.
I thought point 1 would be one thread, and point 2 another thread which creates multiple threads within its scope to serve point 3. Point 4 should be handled by my application logic while serializing the 'results', rather than by the server.
Is this a bad idea? If so, where can I improve?
Thanks
Putting every connection on its own thread is very bad, and is apparently a common mistake beginners make. Every thread costs about 1 MB of memory, which is overkill for your program for no good reason. I asked the very same question before and got a very good answer. I used Boost.Asio (a minimal sketch of the pattern is shown below); the server/client project has been finished for months and is now running beautifully.
If you use C++ and SSL (to secure your connection), no one will see your logic, since your programs are compiled. But you have to write your own communication protocol/serialization in that case.
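To make the suggestion concrete, here is a minimal Boost.Asio sketch (assuming a reasonably recent Boost with io_context, roughly 1.66 or later) of an asynchronous accept loop where one thread services every client through per-connection Session objects instead of per-connection threads; the class names and port number are invented for the example:

    #include <boost/asio.hpp>
    #include <array>
    #include <memory>

    using boost::asio::ip::tcp;

    // Each client is represented by a Session object instead of a dedicated thread.
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}

        void start() { do_read(); }

    private:
        void do_read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buffer_),
                [this, self](boost::system::error_code ec, std::size_t n) {
                    if (!ec) do_write(n);   // hand the bytes to application logic, then reply
                });
        }

        void do_write(std::size_t n) {
            auto self = shared_from_this();
            boost::asio::async_write(socket_, boost::asio::buffer(buffer_, n),
                [this, self](boost::system::error_code ec, std::size_t) {
                    if (!ec) do_read();
                });
        }

        tcp::socket socket_;
        std::array<char, 4096> buffer_{};
    };

    class Server {
    public:
        Server(boost::asio::io_context& io, unsigned short port)
            : acceptor_(io, tcp::endpoint(tcp::v4(), port)) {
            do_accept();
        }

    private:
        void do_accept() {
            acceptor_.async_accept(
                [this](boost::system::error_code ec, tcp::socket socket) {
                    if (!ec)
                        std::make_shared<Session>(std::move(socket))->start();
                    do_accept();            // keep accepting further clients
                });
        }

        tcp::acceptor acceptor_;
    };

    int main() {
        boost::asio::io_context io;
        Server server(io, 9000);            // example port
        io.run();                           // a single thread services all connections
    }

This sketch merely echoes each client's bytes back as a placeholder; the point is that the 101st client costs one small Session object rather than another 1 MB thread stack.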

Multithreaded UDP server vs. non-blocking calls

First question here. I have searched for this but haven't found any solution which fully answers my problem. I'm using C++ and need to write a kind of UDP chat (server and client) for programs to interact with one another. At the moment it works quite well.
I'm using Google's protobuf for the messages.
I've written it like this:
The server has a list of users currently logged in, as well as a list of messages to process and distribute.
One thread handles all receiving on its socket (I'm using one socket).
If the command field in the message is login,
it looks through the list and checks for this combination of port and IP. If it isn't in there, the server creates a new user entry.
If the command is logout, the server looks for the user in the list and deletes it.
If the command is message, the server checks whether the user is logged in and puts the message on the message list.
The second thread is for sending.
It waits until there is a message in the list and then cycles through all users to send the message to their sockets, except the sending one (a sketch of this hand-off is shown below).
The server has its socket options set to receive from any IP.
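As an illustration only (not the original code), the hand-off between the two threads described above could be a condition_variable-protected queue, so the send thread sleeps until the receive thread enqueues something; the class and its usage here are hypothetical:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>

    class MessageQueue {
    public:
        void push(std::string msg) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(msg));
            }
            cv_.notify_one();               // wake the send thread
        }

        // Blocks until a message is available, then hands it to the caller.
        std::string pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            std::string msg = std::move(queue_.front());
            queue_.pop();
            return msg;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::string> queue_;
    };

The receive thread would call push() after parsing a protobuf chat message, and the send thread would loop on pop() and forward each message to every logged-in user except the sender.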
My question now is: is this the most performant solution?
I've read about select and poll, but it's always about multiple receiving sockets, while I have only one.
I know the receiving thread may be idle much of the time, but in my environment the message traffic would be quite frequent.
I'm fairly new to socket programming, but I think this was the most elegant solution. I was wondering if I could even create another thread which gets a list from the receiving thread to process the messages.
Edit: how could I detect timeouts?
I mean, I could have a variable in the user list which increases or gets set to 0, but what if messages don't come frequently? Maybe a server-based ping message? Or maybe a flag on the message which gets set to acknowledged, with the message resent otherwise.
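One way to combine the single-socket select question with timeout detection, sketched here only as an illustration (the port number and the one-second interval are arbitrary): call select() on the single UDP socket with a timeout, so the receive loop wakes up periodically to expire idle users even when no datagrams arrive.

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;       // receive from any IP
        addr.sin_port = htons(5555);             // example port
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        char buf[2048];
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);
            timeval tv{1, 0};                    // wake up at least once per second

            int ready = select(sock + 1, &readfds, nullptr, nullptr, &tv);
            if (ready > 0 && FD_ISSET(sock, &readfds)) {
                sockaddr_in peer{};
                socklen_t len = sizeof(peer);
                ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                     reinterpret_cast<sockaddr*>(&peer), &len);
                if (n > 0) {
                    // parse the protobuf message and update the user list here
                }
            } else if (ready == 0) {
                // timeout: walk the user list and drop entries that have been
                // silent for longer than the chosen idle limit
            }
        }
        close(sock);                             // never reached in this sketch
    }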
On the user side I need to first broadcast to find the server and then set the port and IP according to the answer. How could I do that? (A rough sketch follows below.)
This should run autonomously, meaning a client should detect the server, log in, send its commands and detect whether it is still online.
On the server side I don't know if this is so important. Possibly there might be a memory issue if too many clients connect and none ever log off. Maybe I'll set a one-hour timeout to detect idle clients.
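For the discovery step, here is a rough client-side sketch only (the discovery port 5556 and the probe text are invented, and the server would need a matching responder that replies from its own address): send a small datagram to the LAN broadcast address and take the sender of the first reply as the server endpoint.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        int yes = 1;
        setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes)); // allow broadcast

        sockaddr_in bcast{};
        bcast.sin_family = AF_INET;
        bcast.sin_port = htons(5556);                     // example discovery port
        bcast.sin_addr.s_addr = htonl(INADDR_BROADCAST);  // 255.255.255.255

        const char probe[] = "DISCOVER_CHAT_SERVER";      // invented probe message
        sendto(sock, probe, sizeof(probe), 0,
               reinterpret_cast<sockaddr*>(&bcast), sizeof(bcast));

        // The first reply tells us the server's IP and port.
        char buf[256];
        sockaddr_in server{};
        socklen_t len = sizeof(server);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             reinterpret_cast<sockaddr*>(&server), &len);
        if (n > 0) {
            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &server.sin_addr, ip, sizeof(ip));
            std::printf("server found at %s:%d\n", ip, ntohs(server.sin_port));
            // subsequent login/message traffic would be sent to this endpoint
        }
        close(sock);
    }

A timeout on the recvfrom (for example via select, as above, or SO_RCVTIMEO) would let the client retry the broadcast if no server answers.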

Boost Asio how to send multiple requests

I'm having trouble sending more than one request to my server.
I'm using the Boost Asio async_client example.
The problem is that I always get: error asio.misc 2 (EOF reached, I think).
I don't know whether the right way to do this is to have a pool of threads or whether I can reuse the same io_service, ...
I can't find good answers on how to do that on the web.
I only try to send another request after I have reached the EOF from the first one.
The client class in the example wraps the whole process:
The name resolution process
The connection establishment
The sending of the request
The handling of the response
Once you reach EOF when reading the response, your connection has been closed by the server (because of the HTTP headers). Therefore, you have to restart part of the process: re-establish a connection to the remote server, send your request, and read the response. It's probably not useful to redo the name resolution.
If you really want to go the simple way, then creating a new client would probably work.
You don't need a pool of threads, and you can certainly reuse your io_service object.
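As a rough illustration only, using synchronous calls for brevity rather than the async_client example itself (io_context is the newer name for io_service; the host and paths are placeholders): the io_context and the resolved endpoints are reused across requests, while each request opens a fresh socket because the server closes the connection after the response.

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    using boost::asio::ip::tcp;

    std::string fetch(boost::asio::io_context& io,
                      const tcp::resolver::results_type& endpoints,
                      const std::string& host, const std::string& path) {
        tcp::socket socket(io);                        // fresh connection per request
        boost::asio::connect(socket, endpoints);

        std::string request = "GET " + path + " HTTP/1.0\r\nHost: " + host +
                              "\r\nConnection: close\r\n\r\n";
        boost::asio::write(socket, boost::asio::buffer(request));

        std::string response;
        boost::system::error_code ec;
        char buf[4096];
        for (;;) {
            std::size_t n = socket.read_some(boost::asio::buffer(buf), ec);
            if (n > 0) response.append(buf, n);
            if (ec) break;                             // EOF here just means "response complete"
        }
        return response;
    }

    int main() {
        boost::asio::io_context io;
        tcp::resolver resolver(io);
        auto endpoints = resolver.resolve("www.example.com", "http");   // resolve once

        // Two requests back to back: same io_context, new socket each time.
        std::cout << fetch(io, endpoints, "www.example.com", "/") << "\n";
        std::cout << fetch(io, endpoints, "www.example.com", "/") << "\n";
    }

The same idea carries over to the asynchronous client: after EOF completes the first response, construct a new connection (or a new client object), and call io_context::restart() before running it again if the previous run() has already returned.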

Is it normal for WSASend to fail during big file transfers?

I need a little help if someone's got a minute.
I've written a web server using I/O completion ports, but I'm having some trouble sending out large files. Web pages seem to load fine, but during large file transfers, WSASend() fails after a few minutes with the error "The specified network name is no longer available."
Right now, my server just closes the associated connection when any overlapped operation fails. Is this the right thing to do, or should I retry failed overlapped operations a few times before I close the socket? I am using TCP stream sockets.
(fixed) I am also receiving what seem like random 0-byte packets from WSARecv. I'm not sure what to make of this, or whether the problem is related. (/fixed)
Thanks for any help.
Edit: now that the server properly handles connections and has a much more comprehensive log, it seems like Len is right. The client is closing the connection for some reason.
The log:
Initializing Windows Sockets...
Forwarding port 80...
Starting server...
Waiting for incoming connections...
Socket 1128: Client connected.
Socket 1128: Request received
Socket 1128: Sent response
Socket 1128: Error 64: SendChunk() failed. //WSASend()
Socket 1128: Closing connection - GetQueueCompletionStatus == FALSE
So the question now is: why would the client close the connection? It takes anywhere from 2-5 minutes to happen. I have decreased the buffer size to 4098 bytes per send, and I only send the next chunk once the previous one has completed.
Thanks again for any ideas on this.
P.S. I even just implemented a retry function so that it will retry a failed overlapped I/O operation five times before giving up... still no luck =(
A zero-length packet returned from recv indicates that the client on the other end has closed the connection.
That answers why your subsequent send to the client failed.
http://www.opengroup.org/onlinepubs/009695399/functions/recv.html
If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0.
Are you doing anything to impose some form of flow control on your data transmission?
If not, then you are probably using up resources, which is causing the send to fail.
For example, if you are simply issuing LOTS of WSASend() calls one after the other, rather than pacing them based on when they complete, then each one will use system resources (non-paged pool and/or locked pages, which count towards the 'locked pages limit'). You'll then likely eventually fail with ENOBUFS or similar errors.
What you need to do is build a flow control system that works off of the send completions, so that you only ever have a known number of sends outstanding at a time (a sketch of this idea follows the links below).
See these questions for more detail:
Implement a good performing "to-send" queue with TCP
Limiting TCP sends with a "to-be-sent" queue and other design issues
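As a rough, platform-neutral sketch of that idea only (the real WSASend call is hidden behind a start_send callback, the class is not thread-safe as written, and it assumes completions arrive in submission order, which holds for overlapped sends on a single TCP socket): keep at most a fixed number of buffers in flight and feed the next chunk from a queue whenever a completion arrives.

    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <vector>

    class SendQueue {
    public:
        using Buffer = std::vector<char>;
        using StartSend = std::function<void(const Buffer&)>;  // e.g. wraps WSASend

        SendQueue(StartSend start_send, std::size_t max_outstanding)
            : start_send_(std::move(start_send)), max_outstanding_(max_outstanding) {}

        // Called by the producer (e.g. the file reader): park the chunk, then
        // push out as many queued chunks as the outstanding limit allows.
        void queue(Buffer chunk) {
            pending_.push_back(std::move(chunk));
            pump();
        }

        // Called from the completion handler when one send has finished.
        void on_send_complete() {
            in_flight_.pop_front();     // the completed buffer can now be released
            pump();
        }

        bool idle() const { return pending_.empty() && in_flight_.empty(); }

    private:
        void pump() {
            while (in_flight_.size() < max_outstanding_ && !pending_.empty()) {
                in_flight_.push_back(std::move(pending_.front()));
                pending_.pop_front();
                start_send_(in_flight_.back());   // buffer stays alive until completion
            }
        }

        StartSend start_send_;
        std::size_t max_outstanding_;
        std::deque<Buffer> pending_;    // not yet handed to the socket
        std::deque<Buffer> in_flight_;  // handed to the socket, awaiting completion
    };

With something like this, reading the next chunk of the file only when the queue depth allows keeps the amount of non-paged pool and locked pages bounded, no matter how large the file is.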
Finally figured it out.
From Rogers' Internet Terms of Service:
Without limitation, you may not use (or allow anyone else to use) our Services to:
(xvi) operate a server in connection with the Services, including, without limitation, mail, news, file, gopher, telnet, chat, Web, or host configuration servers, multimedia streamers or multi-user interactive forums;
how lame is that? O_o
Good news: the server works fine =)
Edit: I called Rogers. They verified that they are cutting me off and told me that I need a business account to run a web server.