Does the gRPC async server have better performance compared to the sync server with a restricted number of threads? - c++

I need to implement a server to which multiple clients can send requests simultaneously. The code processing an individual request might block (with the thread going to sleep) in the middle.
At the moment I am using the C++ gRPC synchronous server. Each time a client sends a request, a new thread is spawned on the server's side. This is a problem, since the server can create too many threads simultaneously.
I am considering two solutions to avoid the problem:
1) Use the sync server with a ResourceQuota (e.g. restrict the max number of threads to 10; see the sketch below).
2) Use the async server.
Implementing the second solution is considerably more difficult than implementing the first. What advantage (if any) would the second solution give compared to the first one? Which solution would give better results in terms of:
The amount of time an individual client needs to wait to get a response to an RPC
The resources (memory, threads) used on the server.
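For reference, option 1 is a small change to the server setup. Here is a minimal sketch; MyServiceImpl and the listening address are placeholders for your own generated service implementation and endpoint:

#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>
#include <memory>

int main() {
  MyServiceImpl service;  // hypothetical: your generated service implementation
  grpc::ResourceQuota quota("limited");
  quota.SetMaxThreads(10);  // cap the sync server's thread pool at 10
  grpc::ServerBuilder builder;
  builder.SetResourceQuota(quota);
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}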


I don't understand how async operations can make an HTTP server concurrent

I am developing an HTTP server using boost asio. So far, I have been using async operations (async_read, async_write, etc.), but I want to make my server concurrent, that is, the same as a server that creates a new thread for each newly connected client.
I have read some forums and, apparently, a concurrent server can be built using only the mentioned async operations. I do not understand how this is possible.
I mean, taking into account that the async operations' handlers are executed in the thread that called io_service.run(), suppose a client is being responded to at this moment. How can another client make a request and be answered while the main thread is busy with the first client?
The meaning of the word "concurrent" is ambiguous.
You are right, an asynchronous server is not concurrent at all. It can process only one request at a time. But the key insight is that most servers actually take a request, do some light processing (parsing, serialization, validation, some light business logic, etc.), and then call external resources (e.g. some database). The server can then process other requests while waiting for the external resource. So it's only an illusion of being concurrent (processing happens one after another, but really fast). And it works as long as the processing is relatively fast compared to the IO.
If your server is supposed to do some heavy CPU computations, then obviously there will be no concurrency at all. In that case the only way to make it concurrent is to add threads or processes (possibly on multiple machines).
Asynchronous IO does not make the server concurrent.
In fact, asynchronous IO does not mean "multi-threaded" or "multi-processed" at all. Node.js servers are single-threaded and use asynchronous IO.
Asynchronous IO just means your thread does not wait for the IO to finish, but does other stuff in the meantime (like accepting and processing new incoming requests).
So no, the premise that asynchronous IO makes the server concurrent is wrong. It does not make it concurrent; it makes it scalable: thread-per-request is not very scalable, but a proper thread pool plus an event queue/coroutines is. The threads deal only with CPU-bound tasks, and the event queue/coroutines manage enqueuing and dequeuing started/finished IO operations.
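To make the "one thread, many in-flight operations" point concrete, here is a minimal sketch using boost asio timers as stand-ins for slow IO. Both waits are outstanding at the same time, and a single thread runs both completion handlers:

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
  boost::asio::io_service io;
  // Two "requests" in flight simultaneously.
  boost::asio::steady_timer t1(io, std::chrono::seconds(1));
  boost::asio::steady_timer t2(io, std::chrono::seconds(1));
  t1.async_wait([](const boost::system::error_code&) { std::cout << "request 1 done\n"; });
  t2.async_wait([](const boost::system::error_code&) { std::cout << "request 2 done\n"; });
  io.run();  // one thread services both handlers; total time is ~1s, not ~2s
}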
Not sure if you're only looking for a theoretical answer or a design example, but have you seen the HTTP Server 3 example for boost.asio?
Concurrency is achieved by having a small thread pool execute the work. When callbacks need to be handled, any of the threads calling io_service.run() can be chosen to execute the task.
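The skeleton of that pattern, stripped of the HTTP handling, looks roughly like this. The pool size of 4 and the work guard are illustrative; in the real example the pending async_accept is what keeps run() from returning:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main() {
  boost::asio::io_service io;
  boost::asio::io_service::work work(io);  // keeps run() alive in this skeleton
  // ... set up async_accept / async_read chains here, as in HTTP Server 3 ...
  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i)
    pool.emplace_back([&io] { io.run(); });  // each thread may execute any handler
  for (auto& t : pool) t.join();
}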

threading=True with flask-socketio

I have been using Flask, and some of my route handlers start computations that can take several minutes to complete. Using Flask's development server, I can use app.run(threaded=True) and my server will continue to respond to other requests while it's off performing these multi-minute computations.
Now I've started using Flask-SocketIO and I'm not sure how to do the equivalent thing. I understand that I can explicitly spawn a separate thread in Python any time it starts one of these computations. Is that the only way to do it? Or is there something equivalent to threaded=True for Flask-SocketIO? (Or, more likely, am I just utterly confused?)
Thanks for any help.
The idea of the threaded mode in Flask/Werkzeug is to enable the development server to handle multiple requests concurrently. In the default mode, the server can handle one request at a time; if a client sends a request while the server is already processing a previous one, then the second request has to wait until the first is complete. In threaded mode, Werkzeug spawns a thread for each incoming request, so multiple requests are handled concurrently. You obviously are taking advantage of the threaded mode to have requests that take very long to return, while keeping the server responsive to other requests.
Note that this approach is hard to scale properly when you move out of the development web server and into a production web server. For a worker based server you have to pick a fixed number of workers, and that gives you the maximum number of concurrent requests you can have.
The alternative approach is to use a coroutine based server, such as gevent, which is fully supported by Flask. For gevent there is a single worker process, but in it there are multiple lightweight (or "green") threads that cooperatively allow each other to run. The key to making things work under this model is to ensure that these green threads do not abuse the CPU time they get, because only one can run at a time. When this is done right, the server can scale much better than with the multiple worker approach I described above, and you can easily have hundreds or thousands of clients handled in this fashion.
So now you want to use Flask-SocketIO, and this extension requires the use of gevent. In case the reason for this requirement isn't clear, unlike HTTP requests, SocketIO uses the WebSocket protocol, which requires long-lived connections. Using gevent and green threads makes it possible to have a potentially large number of constantly connected clients, something that would be impossible to do with multiple workers.
The problem is your long calculation, which is not friendly to the gevent type of server. To make it work, you need to ensure your calculation function yields often, so that other threads get a chance to run and don't starve. For example, if your calculation function has a loop in it, you can do something like this:
import gevent

def my_long_calculation():
    while some_condition:  # some_condition is whatever drives your loop
        # do some work here
        # let other threads run
        gevent.sleep()
The sleep() function will basically halt your thread and switch to any other threads that need CPU. Eventually control will be given back to your function, and at that point it'll move on to the next iteration. You need to make sure the sleep calls are not too spaced out (as that will make the rest of the application unresponsive) nor too close together (as that may slow down your calculation).
So to answer your question, as long as you yield properly in your long calculation, you do not need to do anything special to handle concurrent requests, as this is the normal operating mode of gevent.
If for any reason the yield approach is not possible, then you may need to think about offloading the CPU-intensive tasks to another process. Maybe use Celery to run them through a job queue.
Sorry for the long-winded answer. Hope this helps!

Connecting to remote services from multiple threaded requests

I have a boost asio application with many threads, similar to a web server, handling hundreds of concurrent requests. Every request will need to make calls to both memcached and redis (via libmemcached and redispp respectively). Is the best practice in this situation to make a separate connection to both redis and memcached from each thread (effectively tripling the open sockets on the server, three per request)? Or is there a way for me to build a static object, with a single memcached/redis connection, and allow all threads to share that single connection? I'm a bit confused when it comes to the thread safety of something like this. Everything needs to be asynchronous between the threads, but blocking within each thread's individual request (so each thread has a linear progression, but many threads can be at different places in their own progression at any given time). Does that make sense?
Thanks so much!
Since memcached has a synchronous protocol, you should not write the next request before you have received the answer to the previous one. So no other thread can chat on the same memcached connection. I'd prefer to make a thread-local connection if you work with it in "blocking" mode.
Or you can make it work in an "async" manner: make a pool of connections, pick a connection from it (and lock it), and once the request is done, return it to the pool (a sketch of such a pool is below).
Also, you can make a request queue and process it in dedicated thread(s) (using multigets and callbacks).
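A minimal sketch of the pool idea, assuming Connection stands in for whatever handle libmemcached/redispp gives you (the class name and interface here are illustrative, not from either library):

#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

template <typename Connection>
class ConnectionPool {
public:
  // Blocks until a connection is free, then hands it out exclusively.
  std::unique_ptr<Connection> acquire() {
    std::unique_lock<std::mutex> lock(mu_);
    cv_.wait(lock, [this] { return !conns_.empty(); });
    std::unique_ptr<Connection> conn = std::move(conns_.front());
    conns_.pop();
    return conn;
  }

  // Returns a connection to the pool and wakes one waiting thread.
  void release(std::unique_ptr<Connection> conn) {
    {
      std::lock_guard<std::mutex> lock(mu_);
      conns_.push(std::move(conn));
    }
    cv_.notify_one();
  }

private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::unique_ptr<Connection>> conns_;
};

Seed the pool at startup by release()ing N freshly opened connections. Each request thread then does acquire(), talks to memcached/redis over the connection it got, and release()s it when done, so a fixed number of sockets is shared among all threads without two threads ever interleaving writes on one connection.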

How to design a client-server architecture

I would like to know the server (TCP-based) architecture to support a large number of clients (at least 10K) to implement a FIX server. My points are:
How do we design it?
How to listen on the open port? Using select, poll, or some other function?
How to process the clients' requests? At this scale we cannot create one thread for each client.
Should the processing happen in a different executable, sharing the requests and responses with the server executable through IPC?
There is much more to it. I would appreciate it if anyone could explain this or provide a link.
Thanks
An excellent resource for information on this topic is The C10K problem. Although the numbers there seem a little dated, the techniques are still applicable today.
The architecture depends on what you want to do with the clients' incoming data. My guess is that for every incoming message you would perform some computations and probably also return a response.
In that case I would create one main listener thread that receives all the incoming messages. (Actually, if your hardware has more than one physical network device, I would use a listener thread per device and make sure each one is listening to a specific device.)
Get the number of CPUs on your machine, create a worker thread for each CPU, and bind each thread to one CPU (maybe the number of worker threads should be num_of_cpu - 1, to leave an available CPU for the listener and dispatcher).
Each worker thread has a queue and a semaphore; the main listener thread just pushes the incoming data into those queues. There are many ways to perform load balancing (more on that later).
Each worker thread just works on the requests given to it, and puts the response on another queue that is read by the dispatcher.
The dispatcher: there are two options here, either use a dedicated thread for the dispatcher (or a thread per network device, as for the listeners), or have the dispatcher actually be the same thread as the listener.
There is some advantage to putting them both on the same thread, since it makes it easier to detect lost socket connections and to use the same fds for both reading and writing without thread synchronization. However, it could be that using two different threads gives better performance; it needs to be tested.
Note about load balancing:
This is a topic of its own.
The simplest thing is to use one queue for all worker threads, but the problem is that they have to lock in order to pop items, and the locking can damage performance (though you get the most balanced load).
Another quite simple approach would be to have a private queue for every worker and perform round-robin when inserting. After every X cycles, check the size of all the queues; if some queues are much larger than others, leave them out for the next X cycles and then recheck them. This is not the best approach, but it is simple to implement and gives some load balancing while no locking is needed. A sketch of the private-queue scheme follows.
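Here is a rough sketch of the per-worker-queue arrangement described above. Request is a placeholder for whatever the listener parses off the socket, a condition variable plays the role of the semaphore, CPU binding is platform-specific and omitted, and the single-producer queues use a mutex for brevity (they could be made lock-free, as hinted below):

#include <algorithm>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Request { std::string payload; };  // placeholder for the parsed message

struct WorkerQueue {
  std::mutex mu;
  std::condition_variable cv;  // stands in for the semaphore
  std::queue<Request> items;

  void push(Request r) {
    { std::lock_guard<std::mutex> lock(mu); items.push(std::move(r)); }
    cv.notify_one();
  }
  Request pop() {
    std::unique_lock<std::mutex> lock(mu);
    cv.wait(lock, [this] { return !items.empty(); });
    Request r = std::move(items.front());
    items.pop();
    return r;
  }
};

int main() {
  // Leave one CPU free for the listener/dispatcher, as suggested above.
  const unsigned n = std::max(2u, std::thread::hardware_concurrency()) - 1;
  std::vector<WorkerQueue> queues(n);
  std::vector<std::thread> workers;
  for (unsigned i = 0; i < n; ++i)
    workers.emplace_back([&q = queues[i]] {
      for (;;) {
        Request r = q.pop();
        // ... process r, put the response on the dispatcher's queue ...
      }
    });
  // Listener loop: round-robin incoming requests across the private queues.
  for (unsigned next = 0;; next = (next + 1) % n)
    queues[next].push(Request{/* data read from the socket */});
}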
By the way, there is a way to implement a queue between two threads without blocking, but that is also another topic.
I hope it helps,
Guy
If the client and server are on a secure network, then the security aspect can be minimal, to the extent that the transfers are simply encrypted. If the clients and the server are not on a secure network, you first want the server and client to authenticate each other and then initiate encrypted data transfer. For data transfer, server-side authentication should suffice. At the end of this authentication, use the session key to generate an encrypted (symmetric) data stream. Consider using TFTP; it is simple to implement and scales reasonably well.

HTTP stream server: threads?

I already wrote here about the HTTP chat server I want to create: Alternative http port?
This HTTP server should stream text to every user in the same chat room on the website. The browser will stay connected and wait for further HTML code. (Yes, that works; the browser won't reject the connection.)
I have a new question: because this chat server doesn't need to receive information from the client, it's not necessary to listen to the client after the server has sent its first response. New chat messages will be sent to the server on a new connection.
So I can open two threads, one waiting for new clients (or new messages) and one for the HTML streaming.
Is this a good idea, or should I use one thread per client? I don't think it's good to have one thread per client when there are many chat users online, since the server should handle multiple different chats with their own rooms.
Three possibilities:
1. One thread for all clients, sending text to each client in succession; there shouldn't be much lag since it's only text.
This would look like: user1.send("text"); user2.send("text"); ...
2. One thread per chat or chatroom.
3. One thread per chat user: ... many...
Thank you, I haven't done much with sockets yet ;).
Right now, you seem to be thinking in terms of a given thread always carrying out a given (type of) task. While that basic design can make sense, it generally doesn't work very well for producing a scalable server like this.
Often a slightly more abstract viewpoint works out better: you have tasks that need to get done, and threads that do those tasks -- but a thread doesn't really "care" about what task it executes.
With this viewpoint, you simply need to create some sort of data structure that describes each task that needs to be done. When you have a task you want done, you fill in a data structure to describe the task, and hand it off to get done. Somewhere, there are some threads that do the tasks.
In this case, the exact number of threads becomes mostly irrelevant -- it's something you can (and do) adjust to fit the number of CPU cores available, the type of tasks, and so on, not something that affects the basic design of the program.
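As a concrete illustration of "describe the task, hand it off", here is a minimal thread-pool sketch where a task is just a callable and the threads don't care what they execute (the class and its interface are illustrative):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class TaskPool {
public:
  explicit TaskPool(unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      threads_.emplace_back([this] {
        for (;;) {
          std::function<void()> task;
          {
            std::unique_lock<std::mutex> lock(mu_);
            cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
            if (stop_ && tasks_.empty()) return;
            task = std::move(tasks_.front());
            tasks_.pop();
          }
          task();  // the thread doesn't care what kind of task this is
        }
      });
  }

  // Describe the work as a callable and hand it off to the pool.
  void submit(std::function<void()> task) {
    { std::lock_guard<std::mutex> lock(mu_); tasks_.push(std::move(task)); }
    cv_.notify_one();
  }

  ~TaskPool() {
    { std::lock_guard<std::mutex> lock(mu_); stop_ = true; }
    cv_.notify_all();
    for (auto& t : threads_) t.join();
  }

private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> tasks_;
  bool stop_ = false;
  std::vector<std::thread> threads_;
};

Here the pool size is just a tuning knob (the number of CPU cores, say), exactly as described above: changing it doesn't change the design of the program, only its throughput.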
I think the easiest pattern for this simple app is to have a pool of threads, and then for each client pick an available thread or make the client wait until one becomes available.
If you want a serious understanding of HTTP server architecture concepts, google the following:
apache architecture
nginx architecture