FastAPI is running on EC2.
The service is exposed to 0.0.0.0/0 on a single port.
The access logs show many requests for paths that have nothing to do with the service itself.
What should I do in such a case?
Is this a common occurrence, and is it something I should be concerned about?
This type of traffic is perfectly normal on the Internet.
In fact, if you looked at the logs on your home router (which connects you to the Internet), you would see hundreds of such attempts every day.
These requests are coming from automated scripts ('bots') running on the Internet. They attempt to take advantage of known security vulnerabilities to gain access to your systems. This is why it is generally a good idea to keep software up-to-date and to limit the number of ports that are opened to the Internet.
WordPress sites are often targets of bots since people do not keep them updated. You will often see requests in your logs that are trying WordPress vulnerabilities, even though you are not running WordPress. The bots just try everything, everywhere!
For a web server, you need to open ports 80 (HTTP) and 443 (HTTPS), but any other ports should be kept closed, or perhaps opened only to a specific range of IP addresses (e.g. your home/office).
What should you do?
Only open ports that are strictly necessary, and limit the IP address range if possible
Keep software updated
Live with it -- it's a fact of life on the Internet
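If "live with it" still leaves you curious about what the bots are actually probing for, you can at least measure it. The sketch below tallies request paths from an access log; the log format in the sample lines is an assumption (a common nginx/uvicorn-style format), so adjust the regex to match your own logs.

```python
# Sketch: tally request paths from an access log so you can see which
# paths the bots are probing most. The log format below is an assumption;
# adapt the regex to whatever your server actually writes.
import re
from collections import Counter

LINE_RE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+"')

def top_probed_paths(lines, n=10):
    """Count request paths across log lines and return the n most common."""
    paths = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            paths[m.group("path")] += 1
    return paths.most_common(n)

sample = [
    '1.2.3.4 - - [01/Jan/2024] "GET /wp-login.php HTTP/1.1" 404 0',
    '5.6.7.8 - - [01/Jan/2024] "GET /wp-login.php HTTP/1.1" 404 0',
    '9.9.9.9 - - [01/Jan/2024] "GET /api/items HTTP/1.1" 200 512',
]
print(top_probed_paths(sample))
```

Running this over a real log usually shows exactly the pattern described above: a long tail of WordPress and admin-panel paths that have nothing to do with your service.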
I am trying to complete AWS' Build A Modern Web Application at https://aws.amazon.com/getting-started/hands-on/build-modern-app-fargate-lambda-dynamodb-python/ and am having trouble completing part C (test the service) of Step 4 in Module 2B of the tutorial. Specifically, I cannot open the link http://mysfits-nlb-123456789-abc123456.elb.us-east-1.amazonaws.com/mysfits (when I replace the NLB link with mine, that is -- I know the exact link there is just a placeholder) either in the Cloud9 IDE preview ("Unable to load http preview") or in the browser on my laptop. To confirm it is not an issue with my specific network, I tried opening the link on my phone after taking the phone off that network (i.e., turning off Wi-Fi), and as expected, that did not work either.
I have followed the instructions in the tutorial, but as far as I can tell this NLB does not actually allow access from the Internet, and I do not know how to fix that: the NLB is internet-facing and my target-group containers have not failed health checks. I checked the VPC that the NLB belongs to, and it allows all traffic inbound from 0.0.0.0/0 and outbound to 0.0.0.0/0. The NLB's DNS name is resolvable (nslookup or ping resolves it to the public IPv4 address of my NLB), but both ping and traceroute from my laptop fail. In addition, the security group attached to the target group also allows all access from the Internet.
I also ran Wireshark on my local machine to understand how (if at all) the NLB is responding (i.e., checking whether I get a TCP RST for the SYNs from my laptop). I don't see any TCP SYN/ACK coming back from the NLB, nor even a TCP RST, suggesting that the NLB, despite appearances (its scheme is internet-facing), is not publicly accessible.
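For anyone reproducing this check without Wireshark: the capture above is essentially asking whether a TCP handshake to the NLB completes at all, which a plain socket probe can answer. A minimal sketch (the hostname in the comment is the tutorial's placeholder, not a real endpoint):

```python
# Sketch: a plain TCP connect test, roughly what the Wireshark capture is
# checking for -- whether the listener completes the three-way handshake.
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP handshake to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (substitute your own NLB's DNS name for the placeholder):
# tcp_reachable("mysfits-nlb-123456789-abc123456.elb.us-east-1.amazonaws.com", 80)
```

A timeout (no SYN/ACK, no RST) matches the silent-drop behavior described above, whereas a fast "connection refused" would mean something is answering but rejecting the port.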
Any ideas what might be going on?
Lastly, why is this tutorial asking to front this service with an NLB instead of an ALB? I know this shouldn't matter, but it can be confusing.
I have an open port for a server I am hosting, and I get lots of spurious calls to "/ws/v1/cluster/apps/new-application" which seems to be for some Hadoop botnet (all it does is pollute my logs with lots of invalid URL errors). How do I block calls to this URL? I could change my port to a less common one but I would prefer not to.
The only way to "block" such requests from reaching your server would be to launch an AWS Web Application Firewall (AWS WAF) and configure appropriate rules.
AWS WAF only works in conjunction with Amazon CloudFront or an Elastic Load Balancer, so the extra effort (and expense) might not be worth the benefit of simply avoiding some lines in a log file.
One day I took a look at my home router's logs and I was utterly amazed to see the huge amount of bot attempts to gain access to random systems. You should be thankful if this is the only one getting through to your server!
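If the real goal is just cleaner logs rather than blocking the traffic at the network edge, filtering the noise out when you read the logs is much cheaper than WAF. A minimal sketch (the path is the one from the question; extend the tuple with whatever else your bots probe):

```python
# Sketch: drop known bot-probe paths when reading a log, instead of
# blocking them at the network edge. The listed path comes from the
# question; add more entries as you spot them.
NOISY_PATHS = ("/ws/v1/cluster/apps/new-application",)

def without_bot_noise(lines):
    """Yield only the log lines that do not mention a known bot-probe path."""
    for line in lines:
        if not any(path in line for path in NOISY_PATHS):
            yield line
```

This obviously does not stop the requests from arriving, but it keeps them from drowning out the log entries you actually care about.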
I have an application running on Windows 10, with the server hosted on AWS. For this application we have to whitelist IPs on SMTP port 25 for test mail. Until now we have been doing the whitelisting in security groups (the firewall provided by AWS), but we have reached the limit of 250 IPs by attaching 5 security groups (50 rules per security group), and we cannot go beyond that. Is there any other way I can whitelist IPs on SMTP port 25 for talking (test mail) to the application?
Thanks in advance!
Okay, so based on the comment clarifications, I'm not sure that IP whitelisting is such a good idea. In theory you could skip Security Groups and have PowerShell manage Windows Firewall instead (EC2 Systems Manager Run Command can be used to automate this).
However, with the number of clients cited (1,500) and the potential for growth, an IP whitelisting solution would at some point cause a noticeable hit on network performance (one good reason for the security group limits), as the firewall would be forced to check each packet against all of those conditions. Instead, I'd recommend you consider an authorization scheme based on tokens/headers/etc. This turns authentication into a more on-demand operation and reduces the strain on network performance.
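To make the token suggestion concrete, here is one minimal shape it could take: each client presents a token signed with a shared secret, and the server verifies the signature instead of checking the source IP. The secret, the token format, and the client-id scheme below are all illustrative assumptions, not a prescription.

```python
# Sketch of a token-based alternative to IP whitelisting: clients present a
# signed token; the server verifies it, so no per-IP firewall rule is needed.
# The secret and token format are illustrative assumptions.
import hashlib
import hmac

SECRET = b"replace-with-a-real-secret"  # assumption: one shared secret

def issue_token(client_id: str) -> str:
    """Sign a client id with the shared secret."""
    sig = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}:{sig}"

def verify_token(token: str) -> bool:
    """Verify the signature in constant time."""
    client_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Unlike a 250-rule whitelist, verification cost here is constant no matter how many clients you add, which is exactly the scaling property the answer above is arguing for.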
I have a single Windows Amazon EC2 instance and one public IP. The instance is running multiple web server EXEs which all sit on port 80. I want to have different domain names which I want to point to each server. On my old dedicated server I achieved this simply by having different public IPs, but with Amazon EC2 I want to keep to just one public IP.
I am not using IIS, Apache, etc. otherwise life would be a lot simpler (I would simply bind hostnames accordingly). The web server executables perform unusual "utility" tasks as part of a range of other websites, but still need to be hosted on port 80. There is no configuration other than address to bind to and port #.
I have set up several private IPs and bound each server application to one of them. Is it possible to use one of Amazon's networking products to direct traffic to the correct private IP? For example, I tried setting up a private DNS using Amazon Route 53, and internally at least this seems to point to the correct servers, but (perhaps logically) not when I try to access the site externally.
In the absence of any other solutions, I decided to take the blunt-hammer approach and use a reverse proxy. The downside is that my servers now see every user's IP as 127.0.0.1, which is less than ideal, but better than nothing at all.
For my reverse proxy I used Redbird (built on Node.js), but Nginx may also be an option. Both are free / open source.
I am preparing a system of EC2 workers on AWS that use Firebase as a queue of tasks they should work on.
My node.js app that reads the queue and works on tasks is done and working, and I would now like to set up a firewall (EC2 Security Group) that allows my machines to connect only to my Firebase.
Each rule of that Security Group contains:
protocol
port range
and destination (IP address with mask, so it supports whole subnets)
My question is: how can I set up this rule for Firebase? I suppose that the IP address of my Firebase is dynamic (it resolves to different IPs from different instances). Is there a list of possible addresses, or how would you address this issue? Could some kind of proxy be a solution that would not slow down my Firebase drastically?
Since using node to interact with Firebase is outbound traffic, the default security group should work fine (you don't need to allow any inbound traffic).
If you want to lock it down further for whatever reason, it's a bit tricky. As you noticed, there are a bunch of IP addresses serving Firebase. You could get a list of them all with "dig -t A firebaseio.com" and add all of them to your firewall rules. That would work today, but new servers could be added next week and you'd be broken. To be a bit more general, you could perhaps allow all of 75.126.*.*, but that is probably overly permissive and could still break if new Firebase servers were added in a different data center or something.
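The "dig -t A" step can also be done from Python, which makes it easy to regenerate the rule list periodically. A sketch (with the caveat already noted above: any list you build this way will eventually go stale):

```python
# Sketch: enumerate the current IPv4 A records for a hostname, the Python
# equivalent of "dig -t A". Any firewall rules built from this snapshot
# can go stale as soon as new servers are added.
import socket

def a_records(hostname: str) -> list[str]:
    """Return the IPv4 addresses the hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Example: a_records("firebaseio.com")
```

You could run this on a schedule and diff the output to detect when the upstream addresses change, rather than waiting for the connection to break.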
FWIW, I wouldn't worry about it. Blocking inbound traffic is generally much more important than blocking outbound, since to generate outbound traffic an attacker would already have to be running software on the box.