Nginx + Gunicorn + Django high latency

I'm trying to tune my web service to support ~20k concurrent users.
No matter what configuration I change, I still get the same ~6 second average response time per endpoint once my load tests hit 2k users, plus various 502 / 504 errors.
Web service stack:
CloudFlare <--> Nginx <--> Gunicorn <--> Django/DRF <--> Memcache <---> Postgres
Here's what I tried:
Increased Gunicorn workers from 4 to 10
Increased service (pod) instances from 3 to 10
Increased the Gunicorn worker timeout to 120 s
Increased the Nginx proxy_pass timeout to 120 s
Most endpoints only hit the database once every 100 seconds; the rest of the requests get their data from Memcached.
Could anyone help by pointing out what kind of configuration I should be changing?
Where should I be looking for delays/bottlenecks?
The Gunicorn workers are clearly timing out, which I don't understand since there's no logic in the views: they should just be fetching a value from Memcached and returning it.
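For reference, a rough sketch of the Gunicorn settings after the changes above (the file name and bind interface are illustrative; the port is the :9090 upstream seen in the Nginx logs below):
# gunicorn.conf.py
workers = 10            # raised from 4
timeout = 120           # worker timeout in seconds
bind = "0.0.0.0:9090"   # the port Nginx proxies to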
Nginx logs:
latforms/android HTTP/1.1", upstream: "http://10.0.1.17:9090/endpoints/platforms/android", host: "myhost.co"
2018/08/13 23:43:25 [error] 8893#8893: *2809163 upstream timed out (110: Connection timed out) while connecting to upstream, client: 200.211.198.133, server: myhost.co, request: "GET /endpoints/store/products/729 HTTP/1.1", upstream: "http://10.0.1.18:9090/endpoints/store/products/729", host: "myhost.co"
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/categories/?cat_pk=13081 HTTP/1.1" 200 1718 "-" "python-requests/2.18.4" 627 80.840 [production-service-api-80] 10.0.0.112:9090, 10.0.1.13:9090, 10.0.0.113:9090 0, 0, 11150 40.000, 40.000, 0.840 504, 504, 200
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/categories/?cat_pk=13081 HTTP/1.1" 200 1718 "-" "python-requests/2.18.4" 689 80.857 [production-service-api-80] 10.0.0.112:9090, 10.0.1.12:9090, 10.0.0.113:9090 0, 0, 11150 40.000, 40.000, 0.857 504, 504, 200
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/home/ HTTP/1.1" 200 10072 "-" "python-requests/2.18.4" 670 80.580 [production-service-api-80] 10.0.1.13:9090, 10.0.1.11:9090, 10.0.0.112:9090 0, 0, 66511 40.001, 40.002, 0.577 504, 504, 200
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/products/691/ HTTP/1.1" 200 703 "-" "python-requests/2.18.4" 646 80.486 [production-service-api-80] 10.0.1.8:9090, 10.0.1.13:9090, 10.0.1.12:9090 0, 0, 1968 40.000, 40.000, 0.486 504, 504, 200
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/products/5458 HTTP/1.1" 301 0 "-" "python-requests/2.18.4" 678 80.444 [production-service-api-80] 10.0.1.13:9090, 10.0.1.12:9090, 10.0.1.17:9090 0, 0, 0 40.000, 40.002, 0.442 504, 504, 301
....
90, 10.0.1.11:9090, 10.0.1.8:9090 0, 0, 1968 40.000, 40.000, 0.584 504, 504, 200
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/products/5458/ HTTP/1.1" 200 241 "-" "python-requests/2.18.4" 647 80.709 [production-service-api-80] 10.0.0.113:9090, 10.0.1.8:9090, 10.0.0.112:9090 0, 0, 327 40.001, 40.000, 0.708 504, 504, 200
--
2018/08/13 23:43:25 [error] 8766#8766: *2809243 upstream timed out (110: Connection timed out) while connecting to upstream, client: 200.211.198.133, server: myhost.co, request: "GET /endpoints/store/categories/?cat_pk=13081 HTTP/1.1", upstream: "http://10.0.1.13:9090/endpoints/store/categories/?cat_pk=13081", host: "myhost.co"
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/products/692 HTTP/1.1" 301 0 "-" "python-requests/2.18.4" 677 80.672 [production-service-api-80] 10.0.1.17:9090, 10.0.1.10:9090, 10.0.0.113:9090 0, 0, 0 40.001, 40.001, 0.670 504, 504, 301
200.211.198.133 - [200.211.198.133] - - [13/Aug/2018:23:43:25 +0000] "GET /endpoints/store/products/4608/ HTTP/1.1" 200 553 "-" "python-requests/2.18.4" 647 80.591 [production-service-api-80] 10.0.1.11:9090, 10.0.1.17:9090, 10.0.1.8:9090 0, 0, 1090 40.000, 40.003, 0.588 504, 504, 200
Gunicorn logs:
{"asctime": "2018-08-13 23:42:55,145", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/store/products/691/ HTTP/1.1\" 200 1968 \"-\" \"python-requests/2.18.4\""}
{"asctime": "2018-08-13 23:42:55,167", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/store/products/729 HTTP/1.1\" 301 - \"-\" \"python-requests/2.18.4\""}
[2018-08-13 23:42:55 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:36)
[2018-08-13 23:42:55 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:37)
[2018-08-13 23:42:55 +0000] [382] [INFO] Booting worker with pid: 382
[2018-08-13 23:42:55 +0000] [383] [INFO] Booting worker with pid: 383
{"asctime": "2018-08-13 23:42:55,403", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/store/products/691/ HTTP/1.1\" 200 1968 \"-\" \"python-requests/2.18.4\""}
....
{"asctime": "2018-08-13 23:42:55,184", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/store/categories/?cat_pk=13081 HTTP/1.1\" 200 11150 \"-\" \"python-requests/2.18.4\""}
{"asctime": "2018-08-13 23:42:55,262", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/platforms/android HTTP/1.1\" 200 48 \"-\" \"python-requests/2.18.4\""}
{"asctime": "2018-08-13 23:42:55,439", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:55 +0000] \"GET /endpoints/platforms/android HTTP/1.1\" 200 48 \"-\" \"python-requests/2.18.4\""}
--
[2018-08-13 23:42:56 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:31)
{"asctime": "2018-08-13 23:42:56,689", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:56 +0000] \"GET /endpoints/store/products/729/ HTTP/1.1\" 200 2163 \"-\" \"python-requests/2.18.4\""}
{"asctime": "2018-08-13 23:42:56,799", "name": "gunicorn.access", "levelname": "INFO", "message": "10.0.0.13 - - [13/Aug/2018:23:42:56 +0000] \"GET /endpoints/store/products/5458/ HTTP/1.1\" 200 327 \"-\" \"python-requests/2.18.4\""}

Why didn't you use uWSGI?
For better results, try the following:
decrease the number of database hits in your code
increase the worker count for Gunicorn
disable INFO-level logging for Gunicorn and Nginx
If these configuration changes don't work for you, you will have to change your setup or add more resources to your server.
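As a rough illustration of the last two points (the Gunicorn module path and log paths here are examples, not your actual files):
# Gunicorn: more workers, quieter logging
gunicorn myproject.wsgi:application --workers 10 --timeout 120 --log-level warning
# Nginx: log warnings and above only, and drop per-request access logging
error_log /var/log/nginx/error.log warn;
access_log off;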

Related

Regex in fail2ban not matching

Should be a simple thing, but with regex nothing is simple.
My fail2ban filter for wordpress sites:
[Definition]
#failregex = <HOST>.*POST.*(wp-login\.php|xmlrpc\.php).* 200
#failregex = <HOST>.*POST.*(wp-login\.php|xmlrpc\.php).* 200[ 0-9]*
failregex = ^"<HOST> .* "POST .*wp-login.php
#failregex = <HOST>.*POST.*wp-login.php .*
#failregex = ^"<HOST> .* "POST .*(wp-login.php|xmlrpc.php) HTTP/.*" (200|401)
ignoreregex =
As you can see I have tested multiple things, but I just don't get a match. Oddly, I do get a match on regex101.
And this is my logfile (those entries should be found):
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:21 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
The logfile could also contain entries like this:
"hostname 172.69.63.84 - - [19/May/2021:09:23:01 +0000] "GET /feed/ HTTP/1.1" 200 14872"
"hostname 172.69.63.84 - - [19/May/2021:09:23:00 +0000] "GET /feed HTTP/1.1" 301 0"
"hostname 162.158.91.10 - - [19/May/2021:09:23:01 +0000] "POST /wp-cron.php?doing_wp_cron=1621416181.1017169952392578125000 HTTP/1.1" 200 0"
"hostname 172.68.57.138 - - [19/May/2021:09:22:34 +0000] "GET /versand/ HTTP/1.1" 200 27456"
"hostname 172.68.110.69 - - [19/May/2021:09:22:34 +0000] "POST /wp-cron.php?doing_wp_cron=1621416154.5001699924468994140625 HTTP/1.1" 200 0"
"hostname 172.69.34.217 - - [19/May/2021:09:19:48 +0000] "GET / HTTP/1.1" 200 32986"
And I have tested with fail2ban-regex, but with no success. I have also tried replacing <HOST> with the actual hostname, but then fail2ban will not accept the regex.
Running tests
=============
Use failregex filter file : wordpress, basedir: /etc/fail2ban
Use log file : /home/runcloud/logs/tmp.log
Use encoding : UTF-8
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [3] Day(?P<_sep>[-/])MON(?P=_sep)ExYear[ :]?24hour:Minute:Second(?:\.Microseconds)?(?: Zone offset)?
`-
Lines: 3 lines, 0 ignored, 0 matched, 3 missed
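For context, that run was produced by an invocation along these lines (paths taken from the output above; the exact flags may have differed):
fail2ban-regex /home/runcloud/logs/tmp.log wordpress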
This regex matches (in this example, the first 3 lines) a "POST request on either wp-login.php or xmlrpc.php", as rapsli wanted:
"POST\b.+\b(wp-login|xmlrpc)\.php
in
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:21 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.69.63.84 - - [19/May/2021:09:23:01 +0000] "GET /feed/ HTTP/1.1" 200 14872"
"hostname 172.69.63.84 - - [19/May/2021:09:23:00 +0000] "GET /feed HTTP/1.1" 301 0"
"hostname 162.158.91.10 - - [19/May/2021:09:23:01 +0000] "POST /wp-cron.php?doing_wp_cron=1621416181.1017169952392578125000 HTTP/1.1" 200 0"
"hostname 172.68.57.138 - - [19/May/2021:09:22:34 +0000] "GET /versand/ HTTP/1.1" 200 27456"
"hostname 172.68.110.69 - - [19/May/2021:09:22:34 +0000] "POST /wp-cron.php?doing_wp_cron=1621416154.5001699924468994140625 HTTP/1.1" 200 0"
"hostname 172.69.34.217 - - [19/May/2021:09:19:48 +0000] "GET / HTTP/1.1" 200 32986"
https://regexr.com/5t8e3
In fail2ban, <HOST> needs to stand for the place with the IP. So this regex should work with fail2ban:
failregex = "[a-z]* <HOST>.*(wp-login\.php|xmlrpc.php).*

How do I filter Fluentd logs on Kubernetes?

My Kubernetes deployment has liveness probes enabled, and they show up in the application logs, like this:
kubectl logs -n example-namespace example-app node-app
::ffff:127.0.0.1 - - [17/Sep/2020:14:12:19 +0000] "GET /docs HTTP/1.1" 301 175
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs/ HTTP/1.1" 200 3104
::192.168.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /home-page HTTP/1.1" 200 3104
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs HTTP/1.1" 301 175
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:22 +0000] "GET /docs/ HTTP/1.1" 200 3104
I use Fluentd to send logs to CloudWatch.
My Fluentd configuration:
https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml
How can I filter so that Fluentd only matches
::192.168.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /home-page HTTP/1.1" 200 3104
And ignores
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs HTTP/1.1" 301 175
Thanks!
After some research, I found this solution:
<match kubernetes.var.log.containers.**_kube-system_**>
  @type null
</match>
and this
<filter **>
  @type grep
  exclude1 log docs
</filter>
The reference:
https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/issues/91
https://docs.fluentd.org/filter/grep
EDIT
or add:
exclude_path ["/var/log/containers/cloudwatch-agent*", "/var/log/containers/fluentd*", "/var/log/containers/*istio*"]
This configuration (an in_tail source parameter) makes the source ignore log files matching those patterns, for example anything with istio in the name.
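On newer Fluentd versions the grep filter uses <exclude> sections instead of the excludeN shorthand; a minimal sketch, assuming the record field holding the log line is named log as above:
<filter **>
  @type grep
  <exclude>
    key log
    pattern /GET \/docs/
  </exclude>
</filter>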

What is causing this IcecastV2 "Bad or missing password on admin command request" warning?

I'm running Icecast V2, and although everything appears to be working, the log file shows this message:
INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
I cannot find what could be causing this.
Edit: Icecast V2.4.4 compiled on Mac.
EDIT:
This is from the error.log file, followed by the matching time period from access.log.
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:44] INFO format-vorbis/initial_vorbis_page seen initial vorbis header
[2019-01-01 13:08:44] INFO admin/admin_handle_request Received admin command metadata on mount "/live.aac"
[2019-01-01 13:08:44] INFO admin/command_metadata Metadata on mountpoint /live.aac changed to "Kostas Pavlidis - Fake Life"
[2019-01-01 13:08:44] INFO admin/admin_handle_request Received admin command metadata on mount "/live.mp3"
[2019-01-01 13:08:44] INFO admin/command_metadata Metadata on mountpoint /live.mp3 changed to "Kostas Pavlidis - Fake Life"
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
access.log
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:44 +0000] "GET /admin/metadata HTTP/1.0" 200 396 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:44 +0000] "GET /admin/metadata HTTP/1.0" 200 396 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:11:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:11:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
From the combined logs it's pretty obvious what's happening here:
You are running at least 3 source clients streaming into mountpoints.
Each time the source client appears to be sambc/2018.10 (possibly SAM Broadcaster?).
These source clients are making stream metadata update requests via /admin/metadata.
For Ogg that's actually a bug, and the metadata is likely broken for listeners;
the client should embed the metadata inside the stream it sends to the server instead!
These source clients are each polling statistics via /admin/stats.xml
For some reason the source client doesn't cache the fact that authentication is necessary for this URL and follows the 'from zero' HTTP request procedure every time:
first it sends the request without credentials,
it gets refused with an HTTP 401 status,
then it sends the same request again, this time including credentials.
Summarizing: The behaviour you are concerned about is perfectly within what's defined by the HTTP standards. Icecast is just a bit wordy on that particular event.

ELK stack AWS S3 log grok pattern

Can someone help me create a grok pattern for this kind of log:
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 3E57427F3EXAMPLE REST.GET.VERSIONING - "GET /mybucket?versioning HTTP/1.1" 200 - 113 - 7 - "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 891CE47D2EXAMPLE REST.GET.LOGGING_STATUS - "GET /mybucket?logging HTTP/1.1" 200 - 242 - 11 - "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:00:38 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be A1206F460EXAMPLE REST.GET.BUCKETPOLICY - "GET /mybucket?policy HTTP/1.1" 404 NoSuchBucketPolicy 297 - 38 - "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:01:00 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be 7B4A0FABBEXAMPLE REST.GET.VERSIONING - "GET /mybucket?versioning HTTP/1.1" 200 - 113 - 33 - "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:01:57 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be DD6CC733AEXAMPLE REST.PUT.OBJECT s3-dg.pdf "PUT /mybucket/s3-dg.pdf HTTP/1.1" 200 - - 4406583 41754 28 "-" "S3Console/0.4" -
79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be mybucket [06/Feb/2014:00:03:21 +0000] 192.0.2.3 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be BC3C074D0EXAMPLE REST.GET.VERSIONING - "GET /mybucket?versioning HTTP/1.1" 200 - 113 - 28 - "-" "S3Console/0.4" -
I have to analyze them, but I don't really know how to create a grok filter for these logs and also get @timestamp from them. Thanks a lot!
This grok debugger tool is also useful: http://grokdebug.herokuapp.com/
Use the grok pattern shown below.
%{WORD:Bucket_Owner} %{WORD:bucket_name} %{DATA:timestamp} %{IP:Remote_IP} %{WORD:Requester} %{WORD:Request} %{DATA:Rest} %{WORD:HTTP_Status} \- %{NUMBER:Bytes_Sent} \- %{NUMBER:Object_Size} \- \"-" \"%{DATA:S3_console}" \-
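To also fill @timestamp, one option is to capture the bracketed request time with HTTPDATE and hand it to a date filter. This is just a sketch: it only pulls the leading fields plus the timestamp and leaves the rest of the line unparsed, while the full field breakdown comes from the pattern above.
filter {
  grok {
    match => { "message" => '%{WORD:Bucket_Owner} %{NOTSPACE:bucket_name} \[%{HTTPDATE:timestamp}\] %{IP:Remote_IP} %{GREEDYDATA:rest_of_line}' }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }
}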

AWS Elastic Beanstalk cannot return custom response code using RESTEasy?

I'm working on a web service using RESTEasy and want to set the response status code when certain exceptions occur.
First I tried a RESTEasy exception mapper, which works fine locally; the mapper code is attached below. However, when I deploy that web service to Elastic Beanstalk, it always returns 500 (Internal Server Error).
@Provider
public class LoadGridTileFailedExceptionMapper extends BaseExceptionMapper implements ExceptionMapper<LoadGridTileFailedException>
{
    @Override
    public Response toResponse(LoadGridTileFailedException e)
    {
        log(e.getMessage(), e);
        return printMsg(e.getMessage(), DtmWebServiceReturnStatus.LOAD_GRID_TILE_FAILED_EXCEPTION_CODE);
    }
}
Then I tried just throwing WebApplicationException(ex, DtmWebServiceReturnStatus.LOAD_GRID_TILE_FAILED_EXCEPTION_CODE) to bypass the exception mapping. The result is that I got a response with status 498 (LOAD_GRID_TILE_FAILED_EXCEPTION_CODE) wrapped in a 500:
Apache Tomcat/7.0.27 - Error report
HTTP Status 498 -
type: Status report
message:
description: http.498
Apache Tomcat/7.0.27
It seems that Elastic Beanstalk wraps all exceptions thrown on the server side with status code 500? The question is: how can I get around that behaviour and return the status code I set in the response? Thank you.
UPDATE
I tried more requests this morning and found something interesting:
I get the right return status in the Elastic Beanstalk log snapshot.
/var/log/tomcat7/localhost_access_log.txt
127.0.0.1 - - [09/Jan/2013:15:06:28 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:31 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:34 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:37 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:39 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:41 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:44 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:48 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:51 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:54 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
127.0.0.1 - - [09/Jan/2013:15:06:57 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22
/var/log/httpd/elasticbeanstalk-access_log
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:28 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:31 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:34 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:37 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:39 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:41 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:44 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:48 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:51 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:54 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
10.28.215.233 (65.167.11.254, 10.28.215.233) - - [09/Jan/2013:15:06:57 +0000] "GET /published/tile/003331330031 HTTP/1.1" 498 22 "-" "-"
However, on the client side I still got 500 :-(
printMsg method:
protected Response printMsg(String msg, int intStatus)
{
    // Need this due to the Resteasy bug
    ServiceDataCollector.processRequest(true);
    ResponseBuilder builder = Response.status(intStatus);
    builder.type("text/plain");
    builder.entity("ERROR: " + msg);
    Response rep = builder.build();
    LOG.error(rep.getStatus() + ":" + rep.toString());
    return rep;
}
Someone helped me work the problem out. I had httpd deployed in my AMI in front of the Tomcat server on port 80, so the load balancer talks to the httpd server, which changes the status code from Tomcat to 500. Disabling that httpd server solves the problem. Thanks for everyone's help.