Notify all goroutines - concurrency

I am working on a TCP server in Go. Now I want to notify all goroutines that are talking to clients to drop their connections, dump what they've got, and stop.
Closing a channel is a way to notify all of them.
The question is: is that idiomatic Go? If I am wrong, then what should I do to notify all of the goroutines (something like ManualResetEvent in .NET)?
Note: I am a Go newbie, just learning, and I started with a TCP server because I have written one before in C#.

Yes, closing a channel is an idiomatic Go way of communicating between Goroutines.
You'll need to pass a channel into each goroutine as it launches and check the channel with a select statement after each network event.
You'll also want to set timeouts on network events so that you don't have connections hanging around forever.
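A minimal sketch of that pattern; the handler name, port, and timeout are illustrative, not from the question:

```go
package main

import (
	"log"
	"net"
	"time"
)

// handleConn is illustrative: each per-client goroutine watches the
// shared done channel and checks it after every network event.
func handleConn(conn net.Conn, done <-chan struct{}) {
	defer conn.Close()
	buf := make([]byte, 1024)
	for {
		select {
		case <-done: // a closed channel is readable by all goroutines at once
			return // drop the connection, dump state, stop
		default:
		}
		// A deadline so Read can't hang forever and we re-check done.
		conn.SetReadDeadline(time.Now().Add(time.Second))
		_, err := conn.Read(buf)
		if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
			continue // only the deadline fired; loop and re-check done
		}
		if err != nil {
			return
		}
		// ... handle the data in buf ...
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000") // port is illustrative
	if err != nil {
		log.Fatal(err)
	}
	done := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Second) // stand-in for a real shutdown trigger
		close(done)                  // one close() notifies every handler
		ln.Close()                   // unblock Accept as well
	}()
	for {
		conn, err := ln.Accept()
		if err != nil {
			return // listener closed: we're shutting down
		}
		go handleConn(conn, done)
	}
}
```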

As of Go 1.7, you can use the context package.
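A minimal sketch with context.WithCancel; the worker names and timings are made up:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func worker(ctx context.Context, id int, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case <-ctx.Done(): // fires in every worker once cancel() is called
			fmt.Printf("worker %d stopping: %v\n", id, ctx.Err())
			return
		case <-time.After(200 * time.Millisecond):
			// ... talk to the client ...
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go worker(ctx, i, &wg)
	}
	time.Sleep(time.Second)
	cancel() // one call notifies all workers
	wg.Wait()
}
```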

Related

How can I run a background task in Django and push information to the front end when it's done?

One of my tasks in views.py is time-consuming, so I think I'd better put it in the background. But I also want to make sure that when this task finishes, I receive something on the front end. How can I achieve this? I've searched and found django-channels, but I still can't combine the two goals. I hope someone can help me.
You basically have 2 options:
Either you have your client request the status of your long-running task regularly and respond accordingly when it is done,
or you use sockets between your client and server and inform your client via the socket when the task is finished. One of the recommended options for sockets is django-channels. Is there anything wrong with it?
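A rough sketch of the first option (polling). The pattern is language-agnostic; it is shown in Go for brevity, and the /start and /status endpoints and the in-memory task map are invented stand-ins for a Celery task and its result backend:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/rand"
	"net/http"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	tasks = map[string]string{} // task id -> "pending" or "done"
)

// startTask kicks off the slow work in the background and immediately
// returns a task id the front end can poll with.
func startTask(w http.ResponseWriter, r *http.Request) {
	id := fmt.Sprintf("%d", rand.Int63())
	mu.Lock()
	tasks[id] = "pending"
	mu.Unlock()
	go func() {
		time.Sleep(5 * time.Second) // stand-in for the time-consuming task
		mu.Lock()
		tasks[id] = "done"
		mu.Unlock()
	}()
	json.NewEncoder(w).Encode(map[string]string{"id": id})
}

// status is the endpoint the front end polls every few seconds.
func status(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	state := tasks[r.URL.Query().Get("id")]
	mu.Unlock()
	json.NewEncoder(w).Encode(map[string]string{"state": state})
}

func main() {
	http.HandleFunc("/start", startTask)
	http.HandleFunc("/status", status)
	http.ListenAndServe(":8080", nil)
}
```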
Always run background tasks using asynchronous task processing, such as:
Celery
Django Background Tasks (https://github.com/arteria/django-background-tasks)
For pushing notifications, use:
Django Channels (websockets)
Django-webpush (https://github.com/safwanrahman/django-webpush)
Polling
Tornado (long-lived connections) or Django's StreamingHttpResponse can also work.
If you think websockets will be difficult for you to maintain, go for polling.

Design a multi-client server application where clients send messages infrequently

I have to design a server which is able to send the same objects to many clients. Clients may send requests to the server if they want to update something in the database.
The things confusing me:
1. My server should start the program (where I perform some operations and produce 'results', which will be sent to the clients).
2. My server should listen for incoming connections from clients; if there are any, it should accept them and start sending the 'results'.
3. The server should accept as many clients as possible (not more than 100).
4. My 'results' should be secured. I don't want someone to take my 'results' and see what my program logic looks like.
I thought point 1 is one thread, and point 2 is another thread which creates multiple threads within its scope to serve point 3. Point 4 should be handled by my application logic while serialising the 'results', rather than by the server.
Is it a bad idea? If so, where can I improve?
Thanks
Putting every connection on its own thread is very bad, and it is a common beginner mistake. Every thread costs about 1 MB of memory, which will cripple your program for no good reason. I asked the very same question before and got a very good answer. I used Boost.Asio; that server/client project has been finished for months and now runs beautifully.
If you use C++ and SSL (to secure your connection), no one will see your logic, since your program is compiled. But you have to write your own communication protocol/serialization in that case.
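A rough sketch of that shape, in Go rather than the asker's C++ (goroutines stand in for async handlers); the port and the counting-semaphore trick are illustrative, with the 100-client cap taken from point 3 of the question:

```go
package main

import (
	"log"
	"net"
)

const maxClients = 100 // the limit from point 3 of the question

func serve(c net.Conn) {
	c.Write([]byte("results\n")) // placeholder for the real payload
}

func main() {
	ln, err := net.Listen("tcp", ":9000") // port is illustrative
	if err != nil {
		log.Fatal(err)
	}
	// A buffered channel used as a counting semaphore caps concurrent
	// clients without paying for one OS thread per connection.
	sem := make(chan struct{}, maxClients)
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		select {
		case sem <- struct{}{}: // a slot is free
			go func(c net.Conn) {
				defer func() { <-sem }() // release the slot
				defer c.Close()
				serve(c) // in real use, wrap c in TLS to secure the 'results'
			}(conn)
		default:
			conn.Close() // already at the limit: reject this client
		}
	}
}
```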

ZeroMQ sending many to one

I have implemented a zmq-based library using PUSH/PULL sockets on Windows. There is a server and up to 64 clients running over loopback. Each client can send to and receive from the server. There is a thread that waits for each client to connect on a PULL zmq socket. Clients can go away at any time.
The server is expected to go down at times and when it comes back up the clients need to reconnect to it.
The problem is that when nothing is connected, I have 64 receive threads waiting for a connection. This shows up as a lot of connections in TCPView, and my colleagues inform me that it looks like a performance problem or a DDoS-style attack.
So, to get around that issue, I'd like the clients to send some sort of heartbeat to the server ("hey, I'm here") on one socket. However, I can't see how to do that with zmq.
Any help would be appreciated.
I think the basic design of having 64 threads on the server waiting for external connections is flawed. Why not have a single 'master' thread binding the socket, which the external clients would connect to?
Internal to the server, you could still have 64 worker threads. Work would be distributed to the worker threads by the master thread. The communication between the master and the worker threads would be using zmq messages over the inproc transport.
What I have described are simple fan-in and fan-out patterns which are covered in the zmq guide. If you adopt this, most of the zmq code in the clients and workers would remain unchanged. You would have to write code for the master thread, but the zproxy class of CZMQ may work well for you (if you're using CZMQ).
So my advice is to get the basic design right before trying to add heartbeats. [Actually, I'm not sure how heartbeats would help your current problem.]
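A sketch of that fan-in/fan-out shape, in Go with channels standing in for the zmq inproc transport; the names and counts are illustrative, not CZMQ API:

```go
package main

import (
	"fmt"
	"sync"
)

const numWorkers = 64

func main() {
	jobs := make(chan string)    // master -> workers (fan-out)
	replies := make(chan string) // workers -> master (fan-in)
	var wg sync.WaitGroup

	// 64 internal workers; only the master owns the public endpoint.
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for msg := range jobs {
				replies <- fmt.Sprintf("worker %d handled %q", id, msg)
			}
		}(i)
	}

	go func() {
		// Master loop: in the real design this is the single bound
		// socket that every external client connects to.
		for i := 0; i < 10; i++ {
			jobs <- fmt.Sprintf("client message %d", i)
		}
		close(jobs)
		wg.Wait()
		close(replies)
	}()

	for r := range replies {
		fmt.Println(r)
	}
}
```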

How to synchronize recv() when multithreading in C++

I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem I'm getting is that I'm using WaitForSingleObject(handle, 10000) to make the server wait a few seconds to interact with one client and then let the others access it, but then the server starts answering clients with the wrong messages and getting blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and replied to the right client, while allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You should use WSARecv() on an overlapped socket instead of recv(). This way, you can have multiple receive operations open simultaneously, one per client, and wait on them all with WaitForMultipleObjects(). To be specific, you will wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects; when it is exceeded, you will need to run another thread. The return code from WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
Or, as suggested above, you could probably use select() to figure out which socket has data.
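For illustration, here is the first alternative (one handler per client, so a reply can only ever go back on the connection it came from), sketched in Go rather than Winsock C++; everything in it is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

// Each client gets its own handler, and a reply is written back on the
// same conn it arrived on, so an answer can never reach the wrong client.
func handle(conn net.Conn) {
	defer conn.Close()
	sc := bufio.NewScanner(conn)
	for sc.Scan() {
		fmt.Fprintln(conn, sc.Text()) // echo back to this client only
	}
}

func main() {
	ln, err := net.Listen("tcp", ":9000") // port is illustrative
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(conn)
	}
}
```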

HTTP stream server: threads?

I already wrote here about the http chat server I want to create: Alternative http port?
This HTTP server should stream text to every user in the same chat room on the website. The browser will stay connected and wait for further HTML. (Yes, that works; the browser won't drop the connection.)
I have a new question: because this chat server doesn't need to receive information from the client, it's not necessary to listen to the client after the server has sent its first response. New chat messages will be sent to the server on a new connection.
So I can open two threads: one waiting for new clients (or new messages) and one for the HTML streaming.
Is this a good idea, or should I use one thread per client? I don't think it's good to have one thread per client when there are many chat users online, since the server should handle multiple different chats with their own rooms.
Three possibilities:
1. One thread for all clients, sending text to each client in succession - there shouldn't be much lag since it's only text.
This would look like: user1.send("text"); user2.send("text"); ...
2. One thread per chat or chat room.
3. One thread per chat user - ... many ...
Thank you, I haven't done much with sockets yet ;).
Right now, you seem to be thinking in terms of a given thread always carrying out a given (type of) task. While that basic design can make sense, to produce a scalable server like this, it generally doesn't work very well.
Often a slightly more abstract viewpoint works out better: you have tasks that need to get done, and threads that do those tasks -- but a thread doesn't really "care" about what task it executes.
With this viewpoint, you simply need to create some sort of data structure that describes each task that needs to be done. When you have a task you want done, you fill in a data structure to describe the task, and hand it off to get done. Somewhere, there are some threads that do the tasks.
In this case, the exact number of threads becomes mostly irrelevant -- it's something you can (and do) adjust to fit the number of CPU cores available, the type of tasks, and so on, not something that affects the basic design of the program.
I think the easiest pattern for this simple app is to have a pool of threads and then, for each client, pick an available thread or make the client wait until one becomes available.
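A rough sketch of that pool-of-workers, queue-of-tasks idea, in Go; the task fields and pool size are made up:

```go
package main

import (
	"fmt"
	"sync"
)

// task describes one unit of work, per the "data structure that
// describes each task" idea above.
type task struct {
	client string // hypothetical: which chat client to stream to
	text   string
}

func main() {
	tasks := make(chan task, 128)
	var wg sync.WaitGroup

	// A fixed pool sized to the machine, not to the number of clients.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for t := range tasks {
				// Stand-in for writing to the client's open HTTP stream.
				fmt.Printf("worker %d -> %s: %s\n", id, t.client, t.text)
			}
		}(i)
	}

	for _, c := range []string{"user1", "user2", "user3"} {
		tasks <- task{client: c, text: "hello room"}
	}
	close(tasks)
	wg.Wait()
}
```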
If you want a serious understanding of HTTP server architecture concepts, google the following:
apache architecture
nginx architecture