What is causing this IcecastV2 "Bad or missing password on admin command request" warning? - icecast

I'm running Icecast V2, and although everything appears to be working, the log file shows this message:
INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
I cannot find what could be causing this.
Edit: Icecast V2.4.4 compiled on Mac.
EDIT:
This is from the error.log file, followed by the matching time period from access.log:
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:08:44] INFO format-vorbis/initial_vorbis_page seen initial vorbis header
[2019-01-01 13:08:44] INFO admin/admin_handle_request Received admin command metadata on mount "/live.aac"
[2019-01-01 13:08:44] INFO admin/command_metadata Metadata on mountpoint /live.aac changed to "Kostas Pavlidis - Fake Life"
[2019-01-01 13:08:44] INFO admin/admin_handle_request Received admin command metadata on mount "/live.mp3"
[2019-01-01 13:08:44] INFO admin/command_metadata Metadata on mountpoint /live.mp3 changed to "Kostas Pavlidis - Fake Life"
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:09:31] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
[2019-01-01 13:10:32] INFO admin/admin_handle_request Bad or missing password on admin command request (command: stats.xml)
access.log
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:31 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4439 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:44 +0000] "GET /admin/metadata HTTP/1.0" 200 396 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:08:44 +0000] "GET /admin/metadata HTTP/1.0" 200 396 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:31 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:09:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 1
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:10:32 +0000] "GET /admin/stats.xml HTTP/1.0" 200 4415 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:11:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0
192.168.0.7 - - [01/Jan/2019:13:11:32 +0000] "GET /admin/stats.xml HTTP/1.1" 401 360 "-" "sambc/2018.10" 0

From the combined logs it's pretty obvious what's happening here:
You are running at least three source clients into mountpoints.
Each time the source client appears to be sambc/2018.10 (possibly SAM Broadcaster?).
These source clients are making stream metadata update requests via /admin/metadata.
For Ogg that's actually a bug, and the metadata is likely broken for listeners.
The client should instead embed the metadata inside the stream it sends to the server!
These source clients are each polling statistics via /admin/stats.xml.
For some reason the source client doesn't cache the fact that authentication is necessary for this URL and goes through the full HTTP challenge-response procedure every time:
first it sends the request without credentials,
it gets refused with an HTTP 401 status,
then it sends the same request again, this time including credentials.
Summarizing: the behaviour you are concerned about is perfectly within what the HTTP standards define. Icecast is just a bit wordy about that particular event.
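If the repeated 401/200 pairs bother you, the usual workaround is for the polling client to send its credentials pre-emptively instead of waiting for the challenge. Below is a minimal sketch of that idea using Node's built-in fetch (TypeScript); the host, port, and password are placeholders, not values taken from the question:

// Hypothetical poller for /admin/stats.xml that sends HTTP Basic credentials
// up front, so Icecast never needs to answer with a 401 challenge first.
const user = "admin";
const password = "your-admin-password"; // placeholder for the real <admin-password>
const auth = Buffer.from(`${user}:${password}`).toString("base64");

async function pollStats(): Promise<string> {
  const res = await fetch("http://192.168.0.5:8000/admin/stats.xml", {
    headers: { Authorization: `Basic ${auth}` }, // pre-emptive credentials
  });
  if (!res.ok) {
    throw new Error(`stats.xml request failed with status ${res.status}`);
  }
  return res.text(); // the raw XML statistics document
}

pollStats()
  .then((xml) => console.log(xml.slice(0, 300)))
  .catch(console.error);

Whether SAM Broadcaster can be configured to authenticate pre-emptively is a separate question; the point is only that the warning itself is harmless.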

Related

Error 4xx AWS Elastic Beanstalk - Severe health status

Good afternoon people,
I created an environment in Elastic Beanstalk and uploaded a Node.js application, an API built with Express.
It's working fine, all right.
But the environment's health is reported as Severe, and these monitoring attempts appear in the logs:
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
172.31.46.198 - - [03/Nov/2021:19:14:13 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.1.181 - - [03/Nov/2021:19:14:13 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.30.127 - - [03/Nov/2021:19:14:13 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.46.198 - - [03/Nov/2021:19:14:28 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.1.181 - - [03/Nov/2021:19:14:28 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.30.127 - - [03/Nov/2021:19:14:28 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.46.198 - - [03/Nov/2021:19:14:43 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.30.127 - - [03/Nov/2021:19:14:43 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.1.181 - - [03/Nov/2021:19:14:43 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.30.127 - - [03/Nov/2021:19:14:58 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.1.181 - - [03/Nov/2021:19:14:58 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.46.198 - - [03/Nov/2021:19:14:58 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
172.31.30.127 - - [03/Nov/2021:19:15:13 +0000] "GET / HTTP/1.1" 404 139 "-" "ELB-HealthChecker/2.0" "-"
Does anyone know how I can fix this without turning off the monitoring?
Good evening people,
I found the problem: I didn't have anything defined on my API's root route "/", so when EB tried to check the application's health it got a 404.
I set up a health check on the root "/", which cleared the 404 errors and the environment's health issue.
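For reference, the fix boils down to giving the root path a handler that returns 200 to the ELB health checker. A minimal Express sketch in TypeScript (the port and response body are assumptions; the original application code isn't shown in the question):

import express from "express";

const app = express();

// Root route so ELB-HealthChecker gets a 200 instead of a 404.
app.get("/", (_req, res) => {
  res.status(200).send("OK");
});

// ...the rest of the API's routes go here...

// Elastic Beanstalk's Node.js platform proxies to port 8080 by default.
const port = Number(process.env.PORT) || 8080;
app.listen(port, () => {
  console.log(`API listening on port ${port}`);
});

Alternatively, the load balancer's health check path in the environment configuration can be pointed at an existing route instead of "/".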

Regex in fail2ban not matching

Should be a simple thing, but with regex nothing is simple.
My fail2ban filter for WordPress sites:
[Definition]
#failregex = <HOST>.*POST.*(wp-login\.php|xmlrpc\.php).* 200
#failregex = <HOST>.*POST.*(wp-login\.php|xmlrpc\.php).* 200[ 0-9]*
failregex = ^"<HOST> .* "POST .*wp-login.php
#failregex = <HOST>.*POST.*wp-login.php .*
#failregex = ^"<HOST> .* "POST .*(wp-login.php|xmlrpc.php) HTTP/.*" (200|401)
ignoreregex =
As you can see I have tested multiple things, but I just don't get a match. Oddly, I do get a match on regex101.
And this is my logfile (these entries should be matched):
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:21 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
The logfile could also contain entries like this:
"hostname 172.69.63.84 - - [19/May/2021:09:23:01 +0000] "GET /feed/ HTTP/1.1" 200 14872"
"hostname 172.69.63.84 - - [19/May/2021:09:23:00 +0000] "GET /feed HTTP/1.1" 301 0"
"hostname 162.158.91.10 - - [19/May/2021:09:23:01 +0000] "POST /wp-cron.php?doing_wp_cron=1621416181.1017169952392578125000 HTTP/1.1" 200 0"
"hostname 172.68.57.138 - - [19/May/2021:09:22:34 +0000] "GET /versand/ HTTP/1.1" 200 27456"
"hostname 172.68.110.69 - - [19/May/2021:09:22:34 +0000] "POST /wp-cron.php?doing_wp_cron=1621416154.5001699924468994140625 HTTP/1.1" 200 0"
"hostname 172.69.34.217 - - [19/May/2021:09:19:48 +0000] "GET / HTTP/1.1" 200 32986"
And I have tested with fail2ban-regex, but with no success. I have also tried replacing <HOST> with the actual hostname, but in that case fail2ban will not accept the regex.
Running tests
=============
Use failregex filter file : wordpress, basedir: /etc/fail2ban
Use log file : /home/runcloud/logs/tmp.log
Use encoding : UTF-8
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [3] Day(?P<_sep>[-/])MON(?P=_sep)ExYear[ :]?24hour:Minute:Second(?:\.Microseconds)?(?: Zone offset)?
`-
Lines: 3 lines, 0 ignored, 0 matched, 3 missed
This regex matches (in this example, the first three lines) a "POST request on either wp-login.php or xmlrpc.php", as rapsli wanted:
"POST\b.+\b(wp-login|xmlrpc)\.php
in
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.70.34.43 - - [18/May/2021:05:58:21 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"
"hostname 172.69.63.84 - - [19/May/2021:09:23:01 +0000] "GET /feed/ HTTP/1.1" 200 14872"
"hostname 172.69.63.84 - - [19/May/2021:09:23:00 +0000] "GET /feed HTTP/1.1" 301 0"
"hostname 162.158.91.10 - - [19/May/2021:09:23:01 +0000] "POST /wp-cron.php?doing_wp_cron=1621416181.1017169952392578125000 HTTP/1.1" 200 0"
"hostname 172.68.57.138 - - [19/May/2021:09:22:34 +0000] "GET /versand/ HTTP/1.1" 200 27456"
"hostname 172.68.110.69 - - [19/May/2021:09:22:34 +0000] "POST /wp-cron.php?doing_wp_cron=1621416154.5001699924468994140625 HTTP/1.1" 200 0"
"hostname 172.69.34.217 - - [19/May/2021:09:19:48 +0000] "GET / HTTP/1.1" 200 32986"
https://regexr.com/5t8e3
<HOST> needs to stand in the place where the IP appears, so this regex should work with fail2ban:
failregex = "[a-z]* <HOST>.*(wp-login\.php|xmlrpc\.php).*
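To sanity-check the pattern outside of fail2ban, you can substitute a simple IPv4 capture group for <HOST> and run it against the sample log lines. A quick TypeScript sketch (the IPv4 pattern here is a simplification of fail2ban's real <HOST> regex, used only for illustration):

// Stand-in for fail2ban's <HOST> tag: a plain IPv4 capture group (simplified).
const hostRe = String.raw`(\d{1,3}(?:\.\d{1,3}){3})`;

// The suggested failregex with <HOST> replaced by the stand-in above.
const failregex = new RegExp(String.raw`"[a-z]* ${hostRe}.*(wp-login\.php|xmlrpc\.php).*`);

const lines = [
  String.raw`"hostname 172.70.34.43 - - [18/May/2021:05:58:22 +0000] "POST //wp-login.php HTTP/1.1" 200 3069"`,
  String.raw`"hostname 172.69.63.84 - - [19/May/2021:09:23:01 +0000] "GET /feed/ HTTP/1.1" 200 14872"`,
];

for (const line of lines) {
  const m = line.match(failregex);
  console.log(m ? `match, host = ${m[1]}` : "no match"); // expected: match, then no match
}

Once this matches as expected, re-running fail2ban-regex against the real filter file should show non-zero Failregex hits.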

How do I filter Fluentd logs on Kubernetes?

My Kubernetes cluster has liveness probes enabled, and they show up in the application logs, like this:
kubectl logs -n example-namespace example-app node-app
::ffff:127.0.0.1 - - [17/Sep/2020:14:12:19 +0000] "GET /docs HTTP/1.1" 301 175
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs/ HTTP/1.1" 200 3104
::192.168.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /home-page HTTP/1.1" 200 3104
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs HTTP/1.1" 301 175
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:22 +0000] "GET /docs/ HTTP/1.1" 200 3104
I use Fluentd to send logs to CloudWatch.
My Fluentd configuration:
https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml
How can I filter so that Fluentd only matches
::192.168.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /home-page HTTP/1.1" 200 3104
and ignores
::ffff:127.0.0.1 - - [17/Sep/2020:14:13:19 +0000] "GET /docs HTTP/1.1" 301 175
Thanks!
After some research, I found this solution:
<match kubernetes.var.log.containers.**_kube-system_**>
  @type null
</match>
and this
<filter **>
  @type grep
  exclude1 log docs
</filter>
References:
https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/issues/91
https://docs.fluentd.org/filter/grep
EDIT
or add:
exclude_path ["/var/log/containers/cloudwatch-agent*", "/var/log/containers/fluentd*", "/var/log/containers/*istio*"]
This configuration makes the source ignore log files whose paths match these patterns (the cloudwatch-agent, fluentd, and istio containers).

Suddenly I am getting 502 on one endpoint; the rest are working fine

I am serving a Django project with Gunicorn. It runs fine, but after some time one specific endpoint starts returning 502. The other API endpoints are still okay and return proper responses.
I have already tried adjusting the Gunicorn service settings.
Current setting:
ExecStart=/var/virtualenv/d/bin/gunicorn --workers 5 proj.wsgi:application -b :9008 --threads 8 -k gthread --timeout 120
47.247.243.245 - - [14/Jun/2019:15:02:11 +0000] "GET /api/user/1263/league/81/ HTTP/1.1" 200 291 "-" "okhttp/3.12.1"
157.32.16.35 - - [14/Jun/2019:15:02:11 +0000] "GET /api/v1/xyc-leaderboard/?contest=4973 HTTP/1.1" 200 43714 "-" "okhttp/3.12.1""
157.32.16.35 - - [14/Jun/2019:15:02:16 +0000] "GET /api/v1/xy/xy-team/15543 HTTP/1.1" 301 5 "-" "okhttp/3.12.1"
157.32.16.35 - - [14/Jun/2019:15:02:16 +0000] "GET /api/v1/xy/xy-team/15543/ HTTP/1.1" 502 5517 "-" "okhttp/3.12.1"
171.76.167.124 - - [14/Jun/2019:14:39:56 +0000] "GET /api/v1/xy/xy-team/15343/ HTTP/1.1" 502 182 "-" "okhttp/3.12.1"

AWS Elastic Beanstalk worker receives disallowed requests and stops working; it needs to be restarted manually

I use an AWS Elastic Beanstalk worker environment with SQS and cron jobs to do what I want.
But sometimes my environment breaks and stops working (it needs to be restarted manually) because it receives some unknown requests (not sent by me, of course):
196.52.43.55 (-) - - [09/Jun/2017:00:33:11 +0000] "GET / HTTP/1.1" 400 226 "-" "-"
81.196.3.208 (-) - - [09/Jun/2017:01:45:30 +0000] "GET / HTTP/1.0" 200 4576 "-" "-"
195.154.214.162 (-) - - [09/Jun/2017:03:43:21 +0000] "GET //recordings/modules/phonefeatures.module HTTP/1.1" 404 471 "-" "python-requests/2.6.0 CPython/2.6.6 Linux/2.6.32-696.1.1.el6.x86_64"
195.154.214.162 (-) - - [09/Jun/2017:04:54:27 +0000] "GET //recordings/modules/phonefeatures.module HTTP/1.1" 404 471 "-" "python-requests/2.6.0 CPython/2.6.6 Linux/2.6.32-696.1.1.el6.x86_64"
Example of a cron job executed every minute:
127.0.0.1 (-) - - [09/Jun/2017:00:14:59 +0000] "POST /workers/cron/search/details HTTP/1.1" 200 - "-" "aws-sqsd/2.3"
127.0.0.1 (-) - - [09/Jun/2017:00:14:59 +0000] "POST /workers/cron/positions HTTP/1.1" 200 60 "-" "aws-sqsd/2.3"
127.0.0.1 (-) - - [09/Jun/2017:00:15:01 +0000] "POST /queue/received HTTP/1.1" 200 10 "-" "aws-sqsd/2.3"
Do you have a solution for me? Do I need to change my VPC and/or EC2 security group?
My architecture is one Elastic Beanstalk application and one Elastic Beanstalk worker.
Thank you very much