I am working on AWS WAF to block bad bots. I am also using the Loggly service to check the server's access logs and bad requests.
But I don't understand the following log entry. What is this log for, and is it bad?
Please check this image.
The log shows an internal IP as the host, accessing an unknown URL. This is strange to me.
This is a malware user agent; you can consider it a bad bot. You cannot avoid such things: there are bots that keep scanning ports and probing for vulnerabilities. By one estimate, 25-50% of overall internet traffic (not including video and images) comes from bots.
Just ignore these log messages.
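That said, since the question is about AWS WAF: if you do decide to block a specific bad user agent rather than ignore it, a WAFv2 rule can do that. Below is a minimal sketch of such a rule, written as the Python dict you would include in the Rules list of a boto3 wafv2 create_web_acl or update_web_acl call; the "badbot" substring and the rule name are placeholders, not values from your logs.

    # Hypothetical WAFv2 rule: block requests whose User-Agent header
    # contains a given substring (placeholder "badbot").
    rule = {
        "Name": "block-bad-bot-user-agent",  # placeholder name
        "Priority": 1,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"badbot",  # placeholder: use the UA from your logs
                "FieldToMatch": {"SingleHeader": {"Name": "user-agent"}},
                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                "PositionalConstraint": "CONTAINS",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "BlockBadBotUA",
        },
    }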
I regularly receive these kinds of errors from my server:
Invalid HTTP_HOST header: '139.162.113.11'. You may need to add '139.162.113.11' to ALLOWED_HOSTS.
My server otherwise works fine, and I don't know where these IP addresses are coming from.
If I try to geolocate the one in the example, it appears to be in Tokyo, which seems odd to me, since my server is based in France and serves mainly European customers.
Could this be a suspicious attempt against the server's security? I'm not keen to allow this IP. What is the correct response to this kind of error?
You can "trust" that Django is helping prevent your app from running on disallowed hosts!
However, you can't blindly trust that these IPs should be allowed to host your application. They're typically bot scanning services poking around for vulnerabilities in servers to do nasty things.
Heck, I have a few of these DisallowedHost warnings in my inbox this morning as I wake up!
There is a logger, django.security.DisallowedHost, that you can silence to quiet this issue; however, I keep it on as a barometer for bot activity.
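If you do decide to quiet it instead, here is a minimal sketch for settings.py, assuming only that you want to drop these records entirely (django.security.DisallowedHost is the logger name Django uses for these warnings):

    # settings.py -- route DisallowedHost records to a null handler
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "null": {"class": "logging.NullHandler"},
        },
        "loggers": {
            "django.security.DisallowedHost": {
                "handlers": ["null"],
                "propagate": False,
            },
        },
    }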
I have a bunch of strange log entries in CloudWatch coming from an ALB; they look like this:
2020-11-03T14:52:57.289+09:00 Not Found: /owa/auth/logon.aspx
2020-11-03T15:23:20.120+09:00 Not Found: /.env
2020-11-03T15:35:39.482+09:00 Not Found: /index.php
I use CloudWatch to log server data, so this really bothers me. I would like to know how to block these requests.
Welcome to the Internet! There are many strange bots running on the Internet that are trying to access systems using known vulnerabilities. Any device connected to the Internet will regularly receive such requests. Take a look at the logs in your home router to see an example of what takes place.
You could add a Web Application Firewall (AWS WAF) to the load balancer, which can block defined patterns of requests. However, it might not be worth the effort and expense if your goal is merely to clean up the log file.
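If you do go the WAF route, attaching an existing WAFv2 web ACL to the ALB is a single boto3 call. A minimal sketch, with placeholder ARNs and an assumed region:

    import boto3

    # Region is an assumption (the log timestamps suggest ap-northeast-1).
    waf = boto3.client("wafv2", region_name="ap-northeast-1")

    # Placeholder ARNs: substitute your own web ACL and load balancer.
    waf.associate_web_acl(
        WebACLArn="arn:aws:wafv2:...:regional/webacl/my-acl/...",
        ResourceArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
    )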
We have had issues reported under heavy load that appear to indicate that some requests waiting in CF's queue are being timed out, and I am trying to get more information about this. The IIS log is not showing anything useful as far as I can tell. Is there a standard log that would have these listed? If not, is there a place in the CF or Tomcat config where such logging can be enabled?
Here is one of many references to log files in ColdFusion. Of course you have to know where they are and have permission to see them.
You can find out where the log files are by logging in to your server's ColdFusion Administrator page and checking the configured log file path.
Then you can either map a drive to the server, or remote in, navigate to the location, and look at the appropriate file. The errors might be in exception.log, application.log, or, looking at the screenshot in the link, coldfusion-error.log.
We have a streaming endpoint where data streams through our api.domain.com service to our backend.domain.com service, and as chunks are received in backend.domain.com, we write those chunks to the database. This way, we can send ndjson requests into our servers and it is fast, very fast.
We were very disappointed to find out that the Cloud Run firewalls, at least for HTTP/1.1 (via curl), do NOT support streaming. curl speaks HTTP/2 to the Google Cloud Run firewall, and Google by default hits our servers with HTTP/1.1 (though I saw an option to start in HTTP/2 mode that we have not tried).
What I mean by 'they don't support streaming' is that Google does not send our servers the request UNTIL the whole request has been received by them (i.e. not just the headers; it needs the entire body). This makes things very slow compared to streaming straight through firewall 1, cloud run service 1, firewall 2, cloud run service 2, database.
I am wondering whether Google's Cloud Run firewall supports HTTP/2 streaming and actually forwards the request headers instead of waiting for the entire body.
I realize Google has body size limits, AND I realize we respond to clients with 200 OK before the entire body is received (i.e. we stream the response back while the request is being streamed in), so I am totally OK with Google killing the connection if size limits are exceeded.
So my second question in this post is: if they do support streaming, what will they do when the size limit is exceeded, given that I will have already responded with 200 OK at that point?
In this post, my definition of streaming is 'true streaming': you can stream a request into a system, and that system can forward it to the next system, reading and forwarding as it goes rather than waiting for the whole request. The Google Cloud Run firewall does NOT meet my definition of streaming, since it does not pass through the chunks it receives. Our servers send data as they receive it, so if there are many hops there is no impact, thanks to the webpieces webserver.
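To make that definition concrete, here is a minimal sketch of a chunked streaming upload using the Python requests library; the /stream path is hypothetical. Against a proxy that truly streams, the backend sees chunks roughly one second apart; against one that buffers, the backend sees nothing until the generator is exhausted.

    import time
    import requests  # third-party: pip install requests

    def ndjson_chunks():
        """Yield ndjson lines slowly; each yield becomes an HTTP chunk."""
        for i in range(5):
            yield ('{"record": %d}\n' % i).encode()
            time.sleep(1)

    # Passing a generator makes requests use Transfer-Encoding: chunked,
    # so chunks are sent as they are produced.
    resp = requests.post("https://api.domain.com/stream", data=ndjson_chunks())
    print(resp.status_code)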
Unfortunately, Cloud Run doesn't support HTTP/2 end-to-end to the serving instance.
Server-side streaming is in ALPHA. Not sure if it helps solve your problem. If it does, please fill out the following form to opt in. Thanks!
https://docs.google.com/forms/d/e/1FAIpQLSfjwvwFYFFd2yqnV3m0zCe7ua_d6eWiB3WSvIVk50W0O9_mvQ/viewform
A resource on my web app takes nearly a minute to load after a long stall. This happens consistently. As shown below, only 3 requests on this page actually hit the server itself; the rest hit the memory or disk cache. The problem only seems to occur in Chrome; Safari and Firefox do not exhibit this behavior.
I have implemented the Cache-Control: no-store suggestion from this SO question, but the problem persists: "request stalled for a long time occasionally in chrome"
Also included below is an example of what the response looks like once it finally does come in.
My app is hosted in AWS behind a Network Load Balancer which proxies to an EC2 instance running nginx and the app itself.
Any ideas what is causing this?
I encountered the exact same problem. We are using Elastic Beanstalk with a Network Load Balancer (NLB), with TLS termination at the NLB.
The feedback I got from AWS support is:
This problem can occur when a client connects to a TLS listener on a Network Load Balancer and does not send data immediately after completing the TLS handshake. The root cause is an edge case in the handling of new connections. Note that this only occurs if the Target Group for the TLS listener is configured to use the TCP protocol without Proxy Protocol v2 enabled.
They are working on a fix for this issue now.
Somehow, this problem can only be noticed when you are using the Chrome browser.
In the meantime, you have these two options as a workaround (see the sketch after the list for the first one):
enable Proxy Protocol v2 on the Target Group OR
configure the Target Group to use TLS protocol for routing traffic to the targets
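For the first option, here is a minimal sketch using boto3, with a placeholder ARN; proxy_protocol_v2.enabled is the target group attribute that controls this:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARN: substitute your own target group.
    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/my-tg/...",
        Attributes=[{"Key": "proxy_protocol_v2.enabled", "Value": "true"}],
    )

Note that once Proxy Protocol v2 is enabled, the targets behind the NLB must also be configured to parse the proxy protocol header, or their connections will break.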
I know it's a late answer, but I'm writing it for anyone seeking a solution.
TL;DR: In my case, enabling the cross-zone load balancing attribute of the NLB solved the problem.
Investigating with Wireshark, I figured out there were two different IPv4 addresses Chrome communicated with.
Sending packets to one of them always succeeded, and sending to the other always failed.
It turned out the two addresses corresponded to two Availability Zones.
By default, cross-zone load balancing is disabled if you choose an NLB (whereas the same attribute of an ALB is enabled by default).
Let's say there are two AZs: AZ-1 and AZ-2.
When you attach both AZs to an NLB, it has a node for each AZ.
The node that belongs to AZ-1 only routes traffic to instances that also belong to AZ-1; AZ-2 instances are ignored.
My modest app (hosted on Fargate) has just one app server (ECS task), in AZ-2, so the NLB node in AZ-1 cannot route traffic anywhere.
I'm not familiar with TCP/IP or browser implementations, but to my understanding, your browser somehow selects the actual IP address after the DNS lookup.
If the AZ-2 node is selected in the above case, everything goes fine, but if AZ-1 is selected, your browser starts stalling.
Maybe Chrome has a random strategy for selecting the IP while Safari and Firefox have a sticky one, so the problem only appears in Chrome.
After enabling cross-zone load balancing, the ECS task in AZ-2 is visible from the AZ-1 NLB node, and everything works fine in Chrome too.
(Please feel free to correct my poor English. Thank you!)
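For reference, a minimal sketch of enabling that attribute with boto3, again with a placeholder ARN; load_balancing.cross_zone.enabled is the load balancer attribute in question:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARN: substitute your own NLB.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/...",
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )

Be aware that enabling cross-zone load balancing on an NLB may incur inter-AZ data transfer charges.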
I see two things that could be responsible for the delays:
1) Usage of CDNs
If the slow-loading resources come from CDNs (Content Delivery Networks), you should try downloading them to your server and serving them directly.
Especially if you use HTTP/2 this can be a remarkable gain in speed, but it helps with HTTP/1 as well. I have no experience with AWS, so I don't know how things are served there by default.
Your screenshot doesn't clearly show whether the resources are loaded from a CDN, but since they are scripts, I think that's a reasonable assumption.
2) Chrome’s resource scheduler
General description: https://blog.chromium.org/2013/04/chrome-27-beta-speedier-web-and-new.html
It's possible, or even probable, that this scheduler has changed since the article was published, but its effect is at least visible in your screenshot.
I think if you optimize the page with the help of https://www.webpagetest.org and the Chrome web tools, you can solve any problems with the scheduler, as well as other speed problems and perhaps other issues too. Here is the link: https://developers.google.com/web/tools/
EDIT
3) Proxy issue
In general, it's possible that Chrome either has problems with, or reasons to delay because of, the proxy server. Details can't be known without looking at the log files; you may have to configure things so that log files are even produced and the log level is high enough to report any problems (level Warning or even Info).
After monitoring the Chrome net-export logs, it seems as though I was running into this issue: https://bugs.chromium.org/p/chromium/issues/detail?id=447463.
I still don't have a solution for how to fix the problem though.