Gunicorn workers and threads for GPU tasks to increase concurrency/parallelism

I'm using Flask with Gunicorn to implement an AI server. The server takes in HTTP requests and calls the algorithm (built with PyTorch). The computation runs on an NVIDIA GPU.
I need some input on how I can achieve concurrency/parallelism in this case. The machine has 8 vCPUs, 20 GB of memory, and 1 GPU with 12 GB of memory.
1 worker occupies 4 GB of system memory and 2.2 GB of GPU memory.
The maximum number of workers I can run is 5 (because of GPU memory: 2.2 GB * 5 workers = 11 GB).
1 worker = 1 HTTP request (max simultaneous requests = 5).
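As a quick check of that arithmetic (a small sketch; the numbers are the ones quoted above):

# Worker count bounded by GPU memory (values from the question)
gpu_memory_gb = 12.0
gpu_memory_per_worker_gb = 2.2
max_workers = int(gpu_memory_gb // gpu_memory_per_worker_gb)
print(max_workers)  # -> 5, since 2.2 GB * 5 = 11 GB <= 12 GB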
The specific questions are:
How can I increase the concurrency/parallelism?
Do I have to specify the number of threads for computation on the GPU?
My current Gunicorn command is:
gunicorn --bind 0.0.0.0:8002 main:app --timeout 360 --workers=5 --worker-class=gevent --worker-connections=1000
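One way to get more in-flight requests without multiplying the GPU memory footprint is to add threads inside each worker, since threads in the same process share the loaded model while each extra worker process needs its own ~2.2 GB copy. A minimal sketch of a threaded configuration (a gunicorn.conf.py file; the worker and thread counts are illustrative, not taken from the question):

# gunicorn.conf.py -- illustrative threaded setup; tune the numbers for your model
bind = "0.0.0.0:8002"
timeout = 360
workers = 4               # each worker process loads its own copy of the model
threads = 4               # threads within a worker share that copy
worker_class = "gthread"  # gunicorn's threaded worker

You would run it with gunicorn -c gunicorn.conf.py main:app. The trade-off is that the model, the tokenizer, and any other shared state must then be thread-safe, which is exactly the issue the answer below runs into.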

Fast tokenizers are apparently not thread-safe.
AutoTokenizer appears to be a wrapper that uses either the fast or the slow tokenizer internally. The default is fast (not thread-safe), so you have to switch to the slow (thread-safe) one; that is why you add the use_fast=False flag.
I was able to solve this with:
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
Best,
Chirag Sanghvi
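As a side note (a sketch that is not part of the answer above): if you want to keep the speed of the fast tokenizer, you can avoid sharing a single instance across threads instead of switching to the slow one, either by guarding it with a lock or by giving each thread its own copy. The model name below is just a placeholder.

import threading
from transformers import AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder model name

# Option A: one shared fast tokenizer, with access serialized by a lock.
shared_tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)  # fast by default
tokenizer_lock = threading.Lock()

def encode_locked(text):
    with tokenizer_lock:
        return shared_tokenizer(text, return_tensors="pt")

# Option B: one fast tokenizer per thread, via thread-local storage.
_thread_local = threading.local()

def encode_per_thread(text):
    if not hasattr(_thread_local, "tokenizer"):
        _thread_local.tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    return _thread_local.tokenizer(text, return_tensors="pt")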

How do acceptThreads and acceptQueueSize impact concurrent requests?

I am new to Jetty.
We are running into an issue with an application using Jetty 7.4 under a load test. Any request to the Jetty server gets stuck and no response is returned, even after several hours. New requests seem to behave the same way, just waiting for a response forever. The thread dump does not show any deadlocks, but it does show a good number of threads in the TIMED_WAITING state. There are no Jetty exceptions in the logs.
Details:
Jetty version is 7.4
JDK: AdoptOpenJDK 8
acceptQueueSize: 100
acceptTimeout: 1800000 (30 mins)
acceptors: 1
lingerTime: 10000 (milliseconds)
min number of threads: 5
max number of threads: 500
Concurrent requests: 700 (70 APIs, 10 requests per API)
SelectChannelConnector result = new MySelectChannelConnectorImpl();
result.setThreadPool(createThreadPool(url, config/* min and max threads*/));
result.setPort((url.getPort() > 0) ? url.getPort() : port);
result.setMaxIdleTime((int)acceptTimeout);
result.setAcceptors(acceptThreads);
result.setAcceptQueueSize(acceptQueueSize);
result.setSoLingerTime(linger);
I know Jetty 7.4 is end of life, but the customer is not in a position to upgrade.
We need to prove to the customer that the issue is with the Jetty version and will be fixed by a Jetty upgrade.
Can I tweak acceptThreads and acceptQueueSize for better results?
I want to understand acceptor threads, selectors, Jetty worker threads, acceptQueueSize, and acceptTimeout.
Any pointers for learning these concepts would be appreciated.
@Joakim Erdfelt

Aerospike error: All batch queues are full

I am running an Aerospike cluster in Google Cloud. Following the recommendation in this post, I updated to the latest version (3.11.1.1) and re-created all servers. In fact, this change caused my 5 servers to operate at a much lower CPU load (it was around 75% before; now it is around 20%).
Because of this low load, I decided to reduce the cluster size to 4 servers. When I did this, my application started to receive the following error:
All batch queues are full
I found this discussion about the topic, which recommends changing the parameters batch-index-threads and batch-max-unused-buffers with the command:
asadm -e "asinfo -v 'set-config:context=service;batch-index-threads=NEW_VALUE'"
I tried many combinations of values for these parameters (batch-index-threads with 2, 4, 8, 16) and none of them solved the problem. I keep receiving the "All batch queues are full" error.
Here is the relevant part of my aerospike.conf:
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
paxos-recovery-policy auto-reset-master
pidfile /var/run/aerospike/asd.pid
service-threads 32
transaction-queues 32
transaction-threads-per-queue 4
batch-index-threads 40
proto-fd-max 15000
batch-max-requests 30000
replication-fire-and-forget true
}
I use 300GB SSD disks on these servers.
A quick note which may or may not pertain to you:
A common mistake we have seen in the past is that developers decide to use 'batch get' as a general purpose 'get' for single and multiple record requests. The single record get will perform better for single record requests.
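For illustration only (a sketch assuming the Aerospike Python client; the namespace, set, and key names are made up):

import aerospike

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

# Single-record read: prefer this when you only need one key.
key = ("test", "demo", "user1")
_, meta, bins = client.get(key)

# Batch read: reserve this for genuinely multi-key requests.
keys = [("test", "demo", "user%d" % i) for i in range(10)]
records = client.get_many(keys)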
It's possible that you are being constrained by the network between the clients and the servers. Reducing from 5 to 4 nodes reduced the aggregate pipe. In addition, removing a node will start cluster migrations, which add additional network load.
I would look at the batch-max-buffer-per-queue config parameter.
Maximum number of 128KB response buffers allowed in each batch index
queue. If all batch index queues are full, new batch requests are
rejected.
In conjunction with raising this value from the default of 255, you will also want to raise batch-max-unused-buffers to at least batch-index-threads x batch-max-buffer-per-queue + 1. If you do not, new buffers will be created and destroyed constantly, because the number of free (unused) buffers will be smaller than the number in use. The moment a batch response is served, the system will strive to trim the buffers down to the max unused number. You will see this reflected in the batch_index_created_buffers metric constantly rising.
Be aware that you need to have enough DRAM for this. For example, if you raise batch-max-buffer-per-queue to 320 you will consume:
40 (`batch-index-threads`) x 320 (`batch-max-buffer-per-queue`) x 128K = 1600MB
For the sake of performance, batch-max-unused-buffers should be set to 13000, which gives a maximum memory consumption of 1625 MB (1.59 GB) per node.
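A small worked version of that sizing rule (plain arithmetic, using the values from this answer):

# Batch buffer sizing (values from the answer above)
batch_index_threads = 40
batch_max_buffer_per_queue = 320
buffer_size_kb = 128

# Minimum recommended batch-max-unused-buffers
min_unused = batch_index_threads * batch_max_buffer_per_queue + 1  # 12801
chosen_unused = 13000                                              # rounded up for headroom

in_use_mb = batch_index_threads * batch_max_buffer_per_queue * buffer_size_kb / 1024.0  # 1600.0 MB
unused_mb = chosen_unused * buffer_size_kb / 1024.0                                     # 1625.0 MB (~1.59 GB)
print(min_unused, in_use_mb, unused_mb)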

Performance Issue in Executing Shell Commands

In my application I need to execute a large number of shell commands from C++ code. I found that the program takes more than 30 seconds to execute 6000 commands, which is unacceptable. Is there a better way to execute shell commands (using C/C++ code)?
//The function below is used to set rules for the
//Linux tool tc; at runtime there will
//be more than 6000 rules to be set from the shell.
//The tc commands look like the examples below:
//tc qdisc del dev eth0 root
//tc qdisc add dev eth0 root handle 1:0 cbq bandwidth
// 10Mbit avpkt 1000 cell 8
//tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 5 allot 1514
// cell 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:3 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:1 classid 1:1001 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 8 allot 1514 cell
// 8 maxburst 20 avpkt 1000
//......
void CTCProxy::ApplyTCCommands() {
    FILE* OutputStream = NULL;
    // mTCCommands is a vector<string>;
    // every string in it is a TC rule.
    int CmdCount = mTCCommands.size();
    for (int i = 0; i < CmdCount; i++) {
        OutputStream = popen(mTCCommands[i].c_str(), "r");
        if (OutputStream) {
            pclose(OutputStream);
        } else {
            printf("popen error!\n");
        }
    }
}
UPDATE
I tried putting all the shell commands into a shell script and letting the test app call this script file using system("xxx.sh"). This time it takes 24 seconds to execute all 6000 shell commands, less than it took before. But this is still much longer than we expected. Is there any other way to decrease the execution time to less than 10 seconds?
So, most likely (based on my experience with this kind of thing), the majority of the time is spent starting a new process running a shell; the execution of the actual command in the shell is very short. (And 6000 in 30 seconds doesn't sound too terrible, actually.)
There are a variety of ways you could do this. I'd be tempted to try to combine it all into one shell script, rather than running individual lines. This would involve writing all the 'tc' strings to a file, and then passing that to popen().
Another thought: can you combine several commands together into one execution, perhaps?
If the commands are complete and directly executable (that is, no shell is needed to execute the program), you could also do your own fork and exec. This would save creating a shell process, which then creates the actual process.
Also, you may consider running a small number of processes in parallel, which on any modern machine will likely speed things up by the number of processor cores you have.
You can start a shell (/bin/sh) and pipe all the commands into it, parsing the output. Or you can create a Makefile, as this would give you more control over how the commands are executed, parallel execution, and error handling.

Django, low requests per second with gunicorn 4 workers

I'm trying to see why my Django website (gunicorn, 4 workers) is slow under heavy load. I did some profiling (http://djangosnippets.org/snippets/186/) without any clear answer, so I started some load tests from scratch using ab -n 1000 -c 100 http://localhost:8888/
A simple HttpResponse("hello world") with no middleware ==> 3600 req/s
A simple HttpResponse("hello world") with middleware (cached sessions, cached authentication) ==> 2300 req/s
A simple render_to_response that only prints a form (cached template) ==> 1200 req/s (response time was divided by 2)
A simple render_to_response with 50 memcached queries ==> 157 req/s
Shouldn't memcached queries be much faster than that (I'm using PyLibMCCache)?
Is template rendering really as slow as this result suggests?
I tried different profiling techniques without any success.
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 46936
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 400000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 46936
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
$ sysctl -p
fs.file-max = 700000
net.core.somaxconn = 5000
net.ipv4.tcp_keepalive_intvl = 30
I'm using Ubuntu 12.04 (6 GB of RAM, Core i5).
Any help please?
It really depends on how long it takes to do a memcached request and to open a new connection (Django closes the connection when the request finishes). Both your worker and memcached are able to handle much more stress, but of course if it takes 5-10 ms to do a memcached call, then 50 of them are going to be the bottleneck, as you have the network latency multiplied by the call count.
Right now you are just benchmarking Django, gunicorn, your machine and your network.
Unless you have something extremely wrong at this level, these tests are not going to point you to very interesting discoveries.
What is slowing down your app is very likely related to the way you use your DB and memcached (and maybe template rendering).
For this reason I really suggest you get django-debug-toolbar and see what's happening on your real pages.
If it turns out that opening a connection to memcached is the bottleneck you can try to use a connection pool and keep the connection open.
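One way people keep the connection open (a sketch, assuming Django's bundled PyLibMCCache backend; the module path myproject.cache_backend is hypothetical) is to subclass the backend and turn close() into a no-op, since Django otherwise drops the memcached connection when the request finishes:

# myproject/cache_backend.py  (hypothetical module)
from django.core.cache.backends.memcached import PyLibMCCache

class PersistentPyLibMCCache(PyLibMCCache):
    """PyLibMCCache variant that keeps its connections open between requests."""

    def close(self, **kwargs):
        # Django calls close() on cache backends at the end of each request,
        # which normally disconnects; a no-op keeps the pylibmc connection alive.
        pass

# settings.py
# CACHES = {
#     "default": {
#         "BACKEND": "myproject.cache_backend.PersistentPyLibMCCache",
#         "LOCATION": "127.0.0.1:11211",
#     }
# }

Measure before and after; the round-trip test below is a good way to confirm whether connection setup was actually the cost.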
You could investigate memcached performance.
$ python manage.py shell
>>> from django.core.cache import cache
>>> cache.set("unique_key_name_12345", "some value with a size representative of the real world memcached usage", timeout=3600)
>>> from datetime import datetime
>>> def how_long(n):
...     start = datetime.utcnow()
...     for _ in xrange(n):
...         cache.get("unique_key_name_12345")
...     return (datetime.utcnow() - start).total_seconds()
With this kind of round-trip test I am seeing that 1 memcached lookup will take about 0.2 ms on my server.
The problem with django.core.cache and pylibmc is that the calls are blocking. Potentially you pay 50 times that number within the round trip of a single HTTP request; 50 times 0.2 ms is already 10 ms.
If you were achieving 1200 req/s on 4 workers without memcached, the average HTTP round-trip time was 1/(1200/4) = 3.33 ms. Add 10 ms to that and it becomes 13.33 ms. The throughput with 4 workers would then drop to about 300 req/s (which happens to be in the ballpark of your 157 number).
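A quick back-of-the-envelope version of that estimate (pure arithmetic, using the numbers above):

# Throughput estimate with blocking cache calls
workers = 4
baseline_req_per_s = 1200.0   # measured without the 50 memcached calls
per_lookup_ms = 0.2           # measured single memcached round trip
lookups_per_request = 50

baseline_ms = 1000.0 / (baseline_req_per_s / workers)    # ~3.33 ms per request per worker
cache_overhead_ms = per_lookup_ms * lookups_per_request  # 10 ms of blocking cache calls
estimated_ms = baseline_ms + cache_overhead_ms           # ~13.33 ms

estimated_req_per_s = workers * 1000.0 / estimated_ms    # ~300 req/s
print(round(estimated_ms, 2), round(estimated_req_per_s))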

Django in WebFaction & Apache memory load

I am developing a new application and am still in the development stage. However, whenever I restart Apache, my application takes up about 140 MB of memory, whereas my other (older and more complex) application takes about 40 MB. This results in WebFaction sending me messages about memory usage. Apache by default starts with 2 processes, resulting in more than 300 MB of memory usage. I changed this to 1 process like this:
MaxSpareThreads 3
MinSpareThreads 1
ServerLimit 1
SetEnvIf X-Forwarded-SSL on HTTPS=1
ThreadsPerChild 5
WSGIDaemonProcess tipleaders processes=1 threads=6 python-path=/home/<<USERNAME>>/webapps/<<WEBSITE>>:/home/<<USERNAME>>/webapps/<<WEBSITE>>/lib/python2.7:/home/<<USERNAME>>/webapps/<<WEBSITE>>/<<WEBSITE>> maximum-requests=10
Memory usage does not increase with every request (so I guess it is not a memory leak problem).
It just starts with a very high memory footprint (~150 MB).
Any ideas what I should do?
PS: these are my main imports in views.py: http://dpaste.com/744785/
some other imports are here: http://dpaste.com/744786/
models.py, urls.py and settings.py are here: http://dpaste.com/744787/
EDIT
PS2: My whole site is using SSL
EDIT
As per request: the project does not deal with media. No images, no videos. It is a simple website which parses 2 XML files with matches (events and results) and displays them to the user. No ads on the site (just one from the affiliate). No big images whatsoever. At the dev stage the site has only 10 users, and no more than 5000 sporting events have been inserted into the database.
EDIT:
I installed django-devserver (https://github.com/dcramer/django-devserver)
and this is what I get:
>python manage.py runserver
[profile] heap size is 7.9 MB
[profile] heap size is 7.9 MB
[sql] (219ms) 2 queries with 0 duplicates
[profile] Total time to render was 0.14s
[profile] 5.3 MB allocated, 13.0 KB deallocated, heap size is 13.3 MB
[sql] (219ms) 2 queries with 0 duplicates
[profile] Total time to render was 1.08s
[profile] 404.8 KB allocated, 18.6 KB deallocated, heap size is 13.7 MB
[12/May/2012 12:42:38] "GET / HTTP/1.1" 200 146587 (time: 6.93s; sql: 219ms (2q)
I am still puzzled as to why Apache starts with 140 MB allocated for my application.