Recently I was playing around with Django on the Jython platform and wanted to see how it performs in "production". The site I tested with was just a single view that does return HttpResponse("Time %.2f" % time.time()), so no database is involved.
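(For reference, the entire test app is nothing more than the sketch below; the module and function names are my own placeholders.)

# views.py -- the complete benchmark view; no database access at all
import time

from django.http import HttpResponse

def current_time(request):
    # return the current Unix timestamp, formatted to two decimals
    return HttpResponse("Time %.2f" % time.time())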
I tried the following two setups (measurements taken with ab -c15 -n500 -k <url>, everything running on Ubuntu Server 10.10 inside VirtualBox):
J2EE application server (Tomcat/Glassfish), deployed WAR file
I get results like
Requests per second: 143.50 [#/sec] (mean)
[...]
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 16
80% 16
90% 31
95% 31
98% 641
99% 3219
100% 3219 (longest request)
Obviously, the server hangs for a few seconds once in a while, which is not acceptable. I assume it has something to do with reloading Jython, since starting the Jython shell also takes about 3 seconds.
AJP serving using patched flup package (+ Apache as frontend)
Note: flup is the package used by manage.py runfcgi. I had to patch it because flup's threading/forking support doesn't seem to work on Jython, so AJP was the only method that worked.
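For the record, the backend process was started with something along these lines (the options and port shown here are only an illustration, not the literal command I used):

jython manage.py runfcgi protocol=ajp host=127.0.0.1 port=8009 daemonize=false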
Almost the same results here, but sometimes the last 100 requests were not answered at all (even though the server process was still alive).
I'm asking this on SO (instead of serverfault) because it's very Django/Jython-specific. Does anyone have experience with deploying Django sites on Jython? Is there maybe another (faster) way to serve the site? Or is it just too early to use Django on the Java platform?
Since nobody replied, I investigated a bit more, and it seems my problem might have to do with VirtualBox. I saw similar problems with different server OSes (Debian Squeeze, Ubuntu Server). For example, with plain static file serving I got this result from the Apache web server (on Debian):
> ab -c50 -n1000 http://ip.of.my.vm/some/static/file.css
Requests per second: 91.95 [#/sec] (mean) <--- quite impossible for static serving
[...]
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 22.1 0 688
Processing: 0 206 991.4 31 9188
Waiting: 0 96 401.2 16 3031
Total: 0 208 991.7 31 9203
Percentage of the requests served within a certain time (ms)
50% 31
66% 47
75% 63
80% 78
90% 156
95% 781
98% 844
99% 9141 <--- !!!!!!
100% 9203 (longest request)
So, while I don't have a firm conclusion yet, I now think that the Java reloading might not be the problem here but rather the virtualization. I will try it on a real host and leave this question open until then.
FOLLOWUP
Now I have successfully tested a bare-bones Django site (really just the welcome page) using Jython + AJP over TCP/mod_proxy_ajp on Apache (again with the patched flup package; a sketch of the Apache side follows the results below), this time on a real host (i7 920, 6 GB RAM). The result proves that my assumption above was correct and that I really should never benchmark on a virtual host again. Here's the result for the welcome page:
Document Path: /jython-test/
Document Length: 2059 bytes
Concurrency Level: 40
Time taken for tests: 24.688 seconds
Complete requests: 20000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 43640000 bytes
HTML transferred: 41180000 bytes
Requests per second: 810.11 [#/sec] (mean)
Time per request: 49.376 [ms] (mean)
Time per request: 1.234 [ms] (mean, across all concurrent requests)
Transfer rate: 1726.23 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.5 0 20
Processing: 2 49 16.5 44 255
Waiting: 0 48 16.5 44 255
Total: 2 49 16.5 45 256
Percentage of the requests served within a certain time (ms)
50% 45
66% 48
75% 51
80% 53
90% 69
95% 80
98% 90
99% 97
100% 256 (longest request) # <-- no multiple seconds of waiting anymore
Very promising, I would say. The only downside is that the average request time is > 40 ms whereas the development server has a mean of < 3 ms.
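For completeness, the Apache side of the AJP setup is nothing special. A minimal sketch (mod_proxy and mod_proxy_ajp enabled; the path and port are just the values from this test setup):

# forward the test path to the AJP backend started via flup
ProxyPass /jython-test/ ajp://127.0.0.1:8009/jython-test/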
Related
We have built Ejabberd on AWS EC2 instances and have enabled clustering across the 6 Ejabberd servers in the Tokyo, Frankfurt, and Singapore regions.
The OS, middleware, applications and settings for each EC2 instance are exactly the same.
But currently, the CPUs of the Ejabberd nodes in the Frankfurt and Singapore regions are overloaded, while the CPU of the Ejabberd node in the Tokyo (Japan) region is normal.
Could you please tell me which parts look suspicious?
You can take a look at the ejabberd log files of the problematic (and the good) nodes; maybe you will find some clue there.
You can also use the undocumented "ejabberdctl etop" shell command on the problematic nodes. It's similar to "top", but it runs inside the Erlang virtual machine that runs ejabberd:
ejabberdctl etop
========================================================================================
ejabberd#localhost 16:00:12
Load: cpu 0 Memory: total 44174 binary 1320
procs 277 processes 5667 code 20489
runq 1 atom 984 ets 5467
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<9135.1252.0> caps_requests_cache 2393 1 2816 0 gen_server:loop/7
<9135.932.0> mnesia_recover 480 39 2816 0 gen_server:loop/7
<9135.1118.0> dets:init/2 71 2 5944 0 dets:open_file_loop2
<9135.6.0> prim_file:start/0 63 1 2608 0 prim_file:helper_loo
<9135.1164.0> dets:init/2 56 2 4072 0 dets:open_file_loop2
<9135.818.0> disk_log:init/2 49 2 5984 0 disk_log:loop/1
<9135.1038.0> ejabberd_listener:in 31 2 2840 0 prim_inet:accept0/3
<9135.1213.0> dets:init/2 31 2 5944 0 dets:open_file_loop2
<9135.1255.0> dets:init/2 30 2 5944 0 dets:open_file_loop2
<9135.0.0> init 28 1 3912 0 init:loop/1
========================================================================================
Xoogler in the cloud here. I have a very low-qps service that serves HTML plus the follow-up resources. It typically sits idle and then receives something on the order of 20 requests over 5 s with concurrency well below 10, while the concurrency limit is 80. I observe that clients regularly receive 429s from Cloud Run, typically after periods of service inactivity, even though an instance is still up (so it's not a cold-start problem). This can happen on the first request, but it is often somewhere in the middle of the sequence (i.e. icons and CSS don't load).
The instance is concurrent, responsive and could easily handle the load, but Cloud Run doesn't let it. No other instances are spun up either, although we're not even at the max of 2. Does this mean Cloud Run for some reason estimates that more than 2 instances are needed?
Here's a typical request sequence, redacted from the logs:
... 20 min idle ...
I 2020-03-27T18:21:27.619317Z GET 307 288 B 5 ms
I 2020-03-27T18:21:27.706580Z GET 302 0 B 0 ms
I 2020-03-27T18:21:27.760271Z GET 200 5.83 KiB 5 ms
I 2020-03-27T18:21:27.838066Z GET 200 1.89 KiB 4 ms
I 2020-03-27T18:21:27.882751Z GET 200 1.05 KiB 4 ms
I 2020-03-27T18:21:27.886743Z GET 200 582 B 3 ms
I 2020-03-27T18:21:27.893060Z GET 200 533 B 4 ms
I 2020-03-27T18:21:27.897352Z GET 200 5.35 KiB 4 ms
I 2020-03-27T18:21:27.899086Z GET 200 11.38 KiB 6 ms
I 2020-03-27T18:21:27.905967Z GET 200 22.48 KiB 13 ms
I 2020-03-27T18:21:27.906113Z GET 200 592 B 13 ms
I 2020-03-27T18:21:27.907967Z GET 200 35.08 KiB 14 ms
...500ms...
I 2020-03-27T18:21:28.434846Z GET 200 2.76 MiB 50 ms
I 2020-03-27T18:21:28.465552Z GET 200 2.29 MiB 67 ms <= up to here all resources served from image
...2500ms...
I 2020-03-27T18:21:31.086943Z GET 200 2.95 KiB 706 ms <= IO-bound, talking to backend api
...1600ms...
W 2020-03-27T18:21:32.674973Z GET 429 14 B 0 ms <= !!!
W 2020-03-27T18:21:32.675864Z GET 429 14 B 0 ms <= !!!
W 2020-03-27T18:21:32.676292Z GET 429 14 B 0 ms <= !!!
I 2020-03-27T18:21:32.684265Z GET 200 547 B 6 ms
I 2020-03-27T18:21:32.686695Z GET 200 504 B 9 ms
I 2020-03-27T18:21:32.690580Z GET 200 486 B 12 ms
Conceivably, that last group is 6 parallel requests. Why would three be denied and three served? The service is way under capacity. A couple of reloads typically solve the issue.
It really appears to me as if the algorithm vastly overestimates the required resources after a period of inactivity. I'm happy to try a larger max-instances (redeployed to 10 now) but something really seems off with the estimates on the low end of the spectrum. If "2" as a concurrency setting is below what the platform supports, gcloud probably should probably enforce a higher minimum in the first place.
This is somewhat sad, as it affects people who are just "trying out" Cloud Run: they observe intermittent errors (partially rendered pages, ...), which are even pinned on the client (4xx), who is certainly not at fault.
Happy to provide more data.
Configuration:
template:
  metadata:
    ...
    annotations:
      ...
      autoscaling.knative.dev/maxScale: '2'
  spec:
    timeoutSeconds: 900
    ...
    containerConcurrency: 80
    containers:
      - ...
        resources:
          limits:
            cpu: 1000m
            memory: 244Mi
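For reference, bumping the instance cap mentioned above was done with something like the following (service name and region are placeholders):

gcloud run services update my-service --platform=managed --region=us-central1 --max-instances=10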
This looks like a known issue with Cloud Run; I would recommend starring it to receive notifications and to help expedite a resolution.
Parity doesn't seem to have any documentation on what its console output means. At least none that I've found, which admittedly doesn't mean a whole lot. Can anyone give me a breakdown of the meaning of the following line?
2018-03-09 00:05:12 UTC Syncing #4896969 61ee…bdad 2 blk/s 508 tx/s 16 Mgas/s 645+ 1 Qed #4897616 17/25 peers 4 MiB chain 135 MiB db 42 MiB queue 5 MiB sync RPC: 0 conn, 0 req/s, 182 µs
Thanks.
Why document when you can just read code? (bleh)
2018-03-09 00:05:12 UTC(1) Syncing #4896969(2) 61ee…bdad(3) 2 blk/s(4) 508 tx/s(5) 16 Mgas/s(6) 645+(7) 1(8) Qed #4897616(9) 17/25 peers(10) 4 MiB chain(11) 135 MiB db(12) 42 MiB queue(13) 5 MiB sync(14) RPC: 0 conn(15), 0 req/s(16), 182 µs(17)
1. Timestamp
2. Best block number (latest verified block number)
3. Best block hash
4. Blocks downloaded per second
5. Transactions downloaded per second
6. Millions of gas processed per second
7. Unverified queue size
8. Verified queue size
9. Latest block number
10. Number of active peer nodes/number of total peer nodes
11. Blockchain header cache size
12. Blockchain state cache size
13. Queue cache size
14. Node sync metadata cache size
15. Number of open RPC sessions to your node
16. RPC requests per second
17. Approximate roundtrip ping
The answer to this question is now also included in Parity's FAQ, which provides a comprehensive explanation of the command line output:
What does Parity's command line output mean?
I'm working on an HTTP service that serves big files. I noticed that parallel downloads are not possible. The process serves only one file at a time and all other downloads are waiting until the previous downloads finish. How can I stream multiple files at the same time?
require "http/server"
server = HTTP::Server.new(3000) do |context|
context.response.content_type = "application/data"
f = File.open "bigfile.bin", "r"
IO.copy f, context.response.output
end
puts "Listening on http://127.0.0.1:3000"
server.listen
Request one file at a time:
$ ab -n 10 -c 1 127.0.0.1:3000/
[...]
Percentage of the requests served within a certain time (ms)
50% 9
66% 9
75% 9
80% 9
90% 9
95% 9
98% 9
99% 9
100% 9 (longest request)
Request 10 files at once:
$ ab -n 10 -c 10 127.0.0.1:3000/
[...]
Percentage of the requests served within a certain time (ms)
50% 52
66% 57
75% 64
80% 69
90% 73
95% 73
98% 73
99% 73
100% 73 (longest request)
The problem here is that neither reading the file with File#read nor writing to context.response.output will ever block. Crystal's concurrency model is based on cooperatively scheduled fibers, where switching fibers only happens when IO blocks. Reading from the disk using nonblocking IO is impossible, which means the only part that could possibly block is writing to context.response.output. However, disk IO is a lot slower than network IO on the same machine, meaning that the write will never block, because ab is reading at a rate much faster than the disk can provide data, even from the disk cache. This example is practically the perfect storm to break Crystal's concurrency.
In the real world, it's much more likely that clients of the service will be on the other side of a network from the machine, making the response write block occasionally. Furthermore, if you were reading from another network service or a pipe/socket, the read would block too. Another solution would be to use a thread pool to implement nonblocking file IO, which is what libuv does. As a side note, Crystal moved to libevent because libuv doesn't allow a multithreaded event loop (i.e. having any thread resume any fiber).
Calling Fiber.yield to pass execution to any pending fiber is the correct solution. Here's an example of how to yield to other fibers while copying a file in chunks:
def copy_in_chunks(input, output, chunk_size = 4096)
  size = 1
  while size > 0
    size = IO.copy(input, output, chunk_size)
    Fiber.yield
  end
end

File.open("bigfile.bin", "r") do |file|
  copy_in_chunks(file, context.response)
end
This is a transcription of the discussion here: https://github.com/crystal-lang/crystal/issues/4628
Props to GitHub users #cschlack, #RX14 and #ysbaddaden
I'm trying to get started with Google Perf Tools to profile some CPU-intensive applications. It's a statistical calculation that dumps each step to a file using ofstream. I'm not a C++ expert, so I'm having trouble finding the bottleneck. My first pass gives these results:
Total: 857 samples
357 41.7% 41.7% 357 41.7% _write$UNIX2003
134 15.6% 57.3% 134 15.6% _exp$fenv_access_off
109 12.7% 70.0% 276 32.2% scythe::dnorm
103 12.0% 82.0% 103 12.0% _log$fenv_access_off
58 6.8% 88.8% 58 6.8% scythe::const_matrix_forward_iterator::operator*
37 4.3% 93.1% 37 4.3% scythe::matrix_forward_iterator::operator*
15 1.8% 94.9% 47 5.5% std::transform
13 1.5% 96.4% 486 56.7% SliceStep::DoStep
10 1.2% 97.5% 10 1.2% 0x0002726c
5 0.6% 98.1% 5 0.6% 0x000271c7
5 0.6% 98.7% 5 0.6% _write$NOCANCEL$UNIX2003
This is surprising, since all the real calculation occurs in SliceStep::DoStep. The "_write$UNIX2003" (where can I find out what this is?) appears to come from writing the output file. Now, what confuses me is that if I comment out all the outfile << "text" statements and run pprof, 95% is in SliceStep::DoStep and _write$UNIX2003 goes away. However, my application does not speed up, as measured by total time; the whole thing speeds up by less than 1 percent.
What am I missing?
Added:
The pprof output without the outfile << statements is:
Total: 790 samples
205 25.9% 25.9% 205 25.9% _exp$fenv_access_off
170 21.5% 47.5% 170 21.5% _log$fenv_access_off
162 20.5% 68.0% 437 55.3% scythe::dnorm
83 10.5% 78.5% 83 10.5% scythe::const_matrix_forward_iterator::operator*
70 8.9% 87.3% 70 8.9% scythe::matrix_forward_iterator::operator*
28 3.5% 90.9% 78 9.9% std::transform
26 3.3% 94.2% 26 3.3% 0x00027262
12 1.5% 95.7% 12 1.5% _write$NOCANCEL$UNIX2003
11 1.4% 97.1% 764 96.7% SliceStep::DoStep
9 1.1% 98.2% 9 1.1% 0x00027253
6 0.8% 99.0% 6 0.8% 0x000274a6
This looks like what I'd expect, except that I see no visible increase in performance (0.1 second on a 10-second calculation). The code is essentially:
ofstream outfile("out.txt");
for loop:
    SliceStep::DoStep()
    outfile << 'result'
outfile.close()
Update: I'm timing using boost::timer, starting where the profiler starts and ending where it ends. I do not use threads or anything fancy.
From my comments:
The numbers you get from your profiler say that the program should be around 40% faster without the print statements.
The runtime, however, stays nearly the same.
Obviously one of the measurements must be wrong. That means you have to do more and better measurements.
First, I suggest starting with another easy tool: the time command. This should give you a rough idea of where your time is spent.
If the results are still not conclusive, you need a better test case:
Use a larger problem
Do a warm-up before measuring: run some loops first and start any measurement afterwards (in the same process).
Tiristan: It's all in user. What I'm doing is pretty simple, I think... Does the fact that the file is open the whole time mean anything?
That means the profiler is wrong.
Printing 100000 lines to the console using python results in something like:
for i in xrange(100000):
    print i
To console:
time python print.py
[...]
real 0m2.370s
user 0m0.156s
sys 0m0.232s
Versus:
time python test.py > /dev/null
real 0m0.133s
user 0m0.116s
sys 0m0.008s
My point is:
Your internal measurements and time show that you do not gain anything from disabling output. Google Perf Tools says you should. Who's wrong?
_write$UNIX2003 probably refers to the POSIX write system call, i.e. writing to a file descriptor (your output file here, though it could just as well be the terminal). I/O is very slow compared to almost anything else, so it makes sense that your program is spending a lot of time there if you are writing a fair bit of output.
I'm not sure why your program wouldn't speed up when you remove the output, but I can't really make a guess based only on the information you've given. It would be nice to see some of the code, or even the perftools output when the output statements are removed.
Google perftools collects samples of the call stack, so what you need is to get some visibility into those.
According to the doc, you can display the call graph at statement or address granularity. That should tell you what you need to know.
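A rough sketch of that workflow (the binary and profile file names below are placeholders): link the program against the profiler library, collect a profile, then ask pprof for finer-grained output:

# collect a profile; the binary must be linked with -lprofiler
CPUPROFILE=slicestep.prof ./your_program

# default, function-level text report
pprof --text ./your_program slicestep.prof

# per-source-line and per-address breakdowns, to see where inside
# SliceStep::DoStep the samples actually land
pprof --lines ./your_program slicestep.prof
pprof --addresses ./your_program slicestep.prof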