How do I block calls to a specific endpoint in EC2? - amazon-web-services

I have an open port for a server I am hosting, and I get lots of spurious calls to "/ws/v1/cluster/apps/new-application" which seems to be for some Hadoop botnet (all it does is pollute my logs with lots of invalid URL errors). How do I block calls to this URL? I could change my port to a less common one but I would prefer not to.

The only way to "block" such requests from reaching your server would be to launch an AWS Web Application Firewall (AWS WAF) and configure appropriate rules.
AWS WAF only works in conjunction with Amazon CloudFront or an Elastic Load Balancer, so the extra effort (and expense) might not be worth the benefit of simply avoiding some lines in a log file.
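For illustration, a rule that blocks this path could be created with the current WAFv2 API. A minimal boto3 sketch, assuming the regional scope used with an ALB (the ACL name, metric names, and load balancer ARN are placeholders):

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create a web ACL that allows everything except the probed Hadoop path.
acl = wafv2.create_web_acl(
    Name="block-hadoop-probe",  # placeholder name
    Scope="REGIONAL",           # REGIONAL for an ALB; CLOUDFRONT for CloudFront
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "blockHadoopProbe",
    },
    Rules=[{
        "Name": "block-new-application-path",
        "Priority": 0,
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "blockNewApplicationPath",
        },
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"/ws/v1/cluster/apps/new-application",
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "STARTS_WITH",
            }
        },
    }],
)

# Attach the ACL to the load balancer (ARN is a placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:region:account:loadbalancer/app/my-alb/123",
)
```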
One day I took a look at my home router's logs and I was utterly amazed to see the huge amount of bot attempts to gain access to random systems. You should be thankful if this is the only one getting through to your server!

Related

Application ELB - sticky sessions based on consistent hashing

I couldn't find anything in the documentation, but I'm writing to make sure I didn't miss it. I want all connections from different clients that share the same value for a certain request parameter to end up on the same upstream host. With ELB sticky sessions you can have the same client connect to the same host, but there are no guarantees across different clients.
This is possible with the Envoy proxy, see: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancers#ring-hash
We already use ELB, so if the above is possible with ELB we can avoid introducing another layer in between with Envoy.
UPDATE:
Use-case - in a multi-tenant cloud solution, we want all clients from a given customer account to connect to the same upstream host.
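For reference, the ring-hash behaviour linked above boils down to something like this sketch (host names and the virtual-node count are placeholders):

```python
import bisect
import hashlib

HOSTS = ["app-1.internal", "app-2.internal", "app-3.internal"]  # placeholders
VNODES = 100  # virtual nodes per host smooth out the distribution

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# Build the ring once: each host appears VNODES times at hashed positions.
ring = sorted((_hash(f"{host}#{i}"), host) for host in HOSTS for i in range(VNODES))
keys = [k for k, _ in ring]

def host_for(customer_account: str) -> str:
    # Every request carrying the same account value maps to the same host,
    # and removing a host only remaps that host's share of the keyspace.
    idx = bisect.bisect(keys, _hash(customer_account)) % len(ring)
    return ring[idx][1]
```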
Unfortunately, this is not possible with an ALB.
An Application Load Balancer controls all the logic over which host receives the traffic, with features such as sticky sessions and pattern-based routing.
If there is no workaround, you could look at a Classic Load Balancer, which supports the application setting the sticky-session cookie name and value.
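For illustration, application-controlled stickiness on a Classic Load Balancer is a short API call. A boto3 sketch (load balancer name, policy name, and cookie name are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# The application sets the named cookie; the CLB then pins that client
# to the instance that served the first request.
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-classic-lb",
    PolicyName="tenant-stickiness",
    CookieName="TENANT_SESSION",
)

# The policy only takes effect once it is attached to a listener.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-classic-lb",
    LoadBalancerPort=80,
    PolicyNames=["tenant-stickiness"],
)
```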
Best practice is for your application to be stateless, so is it possible to look at rearchitecting your app instead of working around this? Some suggestions I would have are:
Using DynamoDB to store any session-based data, moving away from disk-based sessions (if that's what your application does); see the sketch at the end of this answer.
Any disk-based files that need to persist could be shared between all hosts, using EFS for your Linux-based hosts or FSx on Windows.
Medium/long-term persistent files could be migrated to S3; any assets that rarely change could be stored there, and your application could then use S3 rather than disk.
It's important to remember that, as stated above, you should keep your application as stateless as you can. Assume that your EC2 instances could fail; preparing for this will make it easier to recover.
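As a sketch of the first suggestion, a DynamoDB-backed session store can be very small (the table name and schema are hypothetical; assumes a table with partition key session_id and TTL enabled on the expires_at attribute):

```python
import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("app-sessions")  # placeholder table name

def save_session(session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
    # Values must be DynamoDB-supported types (strings, numbers, maps, ...).
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,  # picked up by DynamoDB TTL
    })

def load_session(session_id: str) -> dict | None:
    item = sessions.get_item(Key={"session_id": session_id}).get("Item")
    return item["data"] if item else None
```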

Automatically block DOS attacks in AWS

I would like to know the best and easiest solution to protect an HTTP server deployed on AWS against DoS attacks. I know that AWS Shield Advanced can be turned on for that purpose, but it is too expensive ($3,000 per month): https://aws.amazon.com/shield/pricing/
System architecture:
HTTP request -> Application Load Balancer -> EC2
An Nginx server is installed on the EC2 machine and configured with rate limiting.
Nginx responds with a 429 code when too many requests are sent from one IP address.
Nginx generates log files (access.log, error.log).
The Amazon CloudWatch agent is installed on the machine.
The agent watches the log files and sends changes to specific CloudWatch log groups.
Logs from all EC2 machines are therefore centralized in one place (CloudWatch log groups).
I can configure CloudWatch Logs metric filters to send me alarms when too many 429 responses go to one IP address.
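Something like this boto3 sketch, I assume (the log-group name and metric namespace are placeholders, and the space-delimited filter pattern follows the Apache-style access-log example from the CloudWatch docs):

```python
import boto3

logs = boto3.client("logs")

# Count every access-log line whose status field is 429.
logs.put_metric_filter(
    logGroupName="/ec2/nginx/access",   # placeholder log group
    filterName="count-429s",
    filterPattern="[ip, user, username, timestamp, request, status_code = 429, ...]",
    metricTransformations=[{
        "metricName": "Nginx429Count",
        "metricNamespace": "Custom/Nginx",  # placeholder namespace
        "metricValue": "1",
    }],
)
```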
That way I can manually block a particular IP in a network ACL, cutting off all requests from the bad IP address at a lower network layer and protecting my AWS resources from being drained. I would like to do this automatically somehow.
What is the easiest and cleanest way to do it?
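The missing piece would presumably be a Lambda function, subscribed to the alarm's SNS topic, that writes a deny entry into the network ACL. A rough sketch (the NACL ID, rule number, and message format are all hypothetical, and a real version must track used rule numbers, since a NACL allows only 20 entries by default):

```python
import json
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0123456789abcdef0"  # placeholder

def handler(event, context):
    # Assumes something upstream put the offending IP into the SNS message;
    # the "ip" field is hypothetical.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    bad_ip = message["ip"]

    ec2.create_network_acl_entry(
        NetworkAclId=NACL_ID,
        RuleNumber=50,        # must be unique and lower-numbered than allow rules
        Protocol="-1",        # all protocols
        RuleAction="deny",
        Egress=False,         # ingress rule
        CidrBlock=f"{bad_ip}/32",
    )
```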
Note that, per the AWS Shield pricing documentation:
AWS Shield Standard provides protection for all AWS customers from
common, most frequently occurring network and transport layer DDoS
attacks that target your web site or application at no additional
charge.
For a more comprehensive discussion on DDoS mitigation, see:
Denial of Service Attack Mitigation on AWS
AWS Best Practices for DDoS Resiliency
There is no single straightforward way to block DDoS attacks against your infrastructure. However, there are a few techniques and best practices you can follow to at least protect it; DDoS attacks have to be analyzed and mitigated as they happen.
You may consider using the external services listed below to block DDoS to some extent:
Cloudflare: https://www.cloudflare.com/en-in/ddos/
Imperva Incapsula:
https://www.imperva.com/products/ddos-protection-services/
I have tried both in production systems and they are pretty decent. Cloudflare reportedly handles around 10% of total internet traffic, so they know about good and bad traffic.
They are much less expensive than Shield Advanced. You can integrate them with your infrastructure as code in order to automate this for all of your services.
Disclaimer: I am not associated in any way with any of the services I recommended above.

Having load balancer before Akka Http multiple applications

I have multiple identical Scala Akka HTTP applications, each installed on a dedicated server (around 10 apps), responding to HTTP requests on port 80. In front of this setup I am using a single HAProxy instance that receives all the incoming traffic and balances the workload across these 10 servers.
We would like to replace HAProxy (we suspect it is causing latency problems) with a different load balancer. The requirement is to adopt a different third-party load balancer, or to develop a simple one in Scala that round-robins each HTTP request to the backend Akka HTTP apps and proxies back the response.
Is there another recommended open-source load balancer I can use to balance/proxy the incoming HTTP requests to the multiple apps, other than HAProxy (maybe Apache httpd)?
Does it make sense to write a simple Akka HTTP application route as the load balancer, register the backend app hosts in some configuration file, and round-robin the requests to them?
Maybe I should consider Akka Cluster for that purpose? The thing is, the applications are already standalone Akka HTTP services with no cluster support, and going for clustering might be too much. (I would like to keep it simple.)
What is the best practice for load balancing requests to HTTP apps (especially Akka HTTP Scala apps)? I might be missing something here.
Note: we would also like back pressure, meaning that if the servers are busy we would like to respond with a 204 or some other status code so our clients won't hit timeouts when the backend is busy.
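To illustrate the kind of simple round-robin proxy meant in the second question, here is a rough sketch (in Python for brevity, since the idea is language-agnostic; the backend addresses are placeholders, and a real proxy would also need streaming, header forwarding, and health checks):

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BACKENDS = itertools.cycle(["http://10.0.0.1:80", "http://10.0.0.2:80"])  # placeholders

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)  # round-robin selection
        try:
            with urllib.request.urlopen(backend + self.path, timeout=5) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception:
            # Back pressure: answer quickly instead of letting the client time out.
            self.send_response(503)
            self.end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()
```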
Although Akka HTTP performance is quite impressive, I would not use it to write a simple reverse proxy, since there are tons of others out there in the community.
I am not sure where you deploy your app, but the best (and most secure) approach is to use a load balancer provided by your cloud provider. Most of them have one, and it usually has a good cost-benefit ratio.
If your cloud provider does not provide one, or you are hosting the app yourself, then first take a look at your HAProxy. Did you test HAProxy in isolation to see whether it still shows the same latency issues? Are you sure the config is optimised for what you want? Does your HAProxy have enough resources (CPU and memory)? Is your HAProxy in the same data center as your deployed app?
If you check all of these questions and are still having latency issues, then I would recommend choosing another one. There are tons out there, such as Envoy and NGINX. I really like Envoy, and I've been using it at work for a few months now without any complaints.
Hope I could help.
Cheers.

Scalable server hosting

I have a simple server now (some Xeon CPU hosted somewhere) running Apache/PHP/MySQL (no Docker, but it's a possibility), and I'm expecting some heavy traffic that I need my server to handle.
Currently the server can handle about 100 users at once; I need it to handle a couple of thousand.
What would be the easiest and fastest way to move my app to some scalable hosting?
I have no experience with AWS or anything like that.
I've been reading about AWS and similar offerings, but I'm mostly confused and not sure what I should choose.
The basic choice is:
Scale vertically by using a bigger computer. However, you will eventually hit a limit and you will have a single point of failure (one server!), or
Scale horizontally by adding more servers and spreading the traffic across the servers. This has the added advantage of handling failure because, if one server fails, the others can continue serving traffic.
A benefit of doing horizontal scaling in the cloud is the ability to add/remove servers based on workload. When things are busy, add more servers. When things are quiet, remove servers. This also allows you to lower costs when things are quiet (which is not possible on-premises when you own your own equipment).
The architecture involves putting multiple servers behind a Load Balancer:
Traffic comes into a Load Balancer
The Load Balancer sends the request to a server (often based upon some measure of how "busy" each server is)
The server processes the request and sends a response back to the Load Balancer
The Load Balancer sends the response to the original requester
AWS has several Load Balancers available, which vary by need. If you are simply sending traffic to a single application that is installed on all servers, a Network Load Balancer should be sufficient. For situations where different parts of the application are on different servers (eg mobile interface vs web interface), you could use an Application Load Balancer.
AWS also assists with horizontal scaling by providing the Amazon EC2 Auto Scaling service. This allows you to specify details of the servers to launch (disk image, instance type, network settings) and Auto Scaling can then automatically launch new servers when required and terminate ones that aren't required. (Note that they launch and terminate, not start and stop.)
You can further define scaling policies that tell Auto Scaling when to launch/terminate instances by measuring metrics such as CPU Utilization. This way, the number of servers can approximately match the volume of traffic.
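As a concrete illustration, a minimal boto3 sketch of an Auto Scaling group with a CPU-based target-tracking policy might look like this (the group name, launch template, subnets, and target group ARN are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # placeholder subnets
    TargetGroupARNs=["arn:aws:elasticloadbalancing:region:account:targetgroup/web/123"],
)

# Keep average CPU near 50%: launch instances above it, terminate below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```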
It should be mentioned that if you have a database, it should be stored separately to the application servers so that it does not get terminated. You could use the Amazon Relational Database Service (RDS) to run a database for you, or you could run one on a separate Amazon EC2 instance.
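If you go the RDS route, creating a managed database is a single API call. A minimal sketch (identifier, instance class, and credentials are placeholders; a real setup also needs subnet and security-group settings):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",    # placeholder
    Engine="mysql",
    DBInstanceClass="db.t3.small",    # placeholder size
    MasterUsername="admin",
    MasterUserPassword="change-me",   # placeholder; keep real secrets in Secrets Manager
    AllocatedStorage=20,              # GiB
)
```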
If you want to find out more about any of the above technologies, there are plenty of talks on YouTube or blog posts that can explain and demonstrate their use.

Load balancer for php application

Questions about load balancers if you have time.
So I've been using AWS for some time now. Super basic instances, using them to do some tasks whenever I needed something done.
I have a task that needs to be load balanced now. It's not a public service though. It's pretty much a giant cron job that I don't want running on the same servers as my website.
I set up an AWS load balancer, but it doesn't do what I expected it to do.
It gets stuck on one server and doesn't load balance at all. I've read why it does this, and that's all fine and well, but I need it to be a serious round-robin load balancer.
edit:
I've set up the instances in different zones, but no matter how many instances I add to the ELB, it just uses one. If I take that instance down, it switches to a different one, so I know it's working. But I really would like it to always use a different one in every circumstance.
I know there are alternatives. Here's my question(s):
Would a custom PHP load balancer be an OK option for now?
i.e. have a list of servers, and have PHP randomly select an EC2 instance. It wouldn't be scalable at all, but at least I could set it up in two minutes and it would work for now (see the sketch after this list).
or
Should I take the time to learn how HAProxy works, and set that up in place of the AWS ELB?
or
Am I doing it wrong, and AWS's ELB does do round-robin, and I just have something configured wrong?
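The random-selection stopgap from the first option could be as small as this sketch (shown in Python; the worker addresses and endpoint path are hypothetical, and there are no health checks or retries):

```python
import random
import urllib.request

WORKERS = ["http://10.0.1.10", "http://10.0.1.11", "http://10.0.1.12"]  # placeholders

def dispatch_job(payload: bytes) -> int:
    # Pick a worker at random and POST the job to it.
    worker = random.choice(WORKERS)
    req = urllib.request.Request(worker + "/run-task", data=payload, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```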
edit:
Structure:
1) Web server finds a task to do.
2) If it's too large, it sends it off to AWS (to the load balancer).
3) Do the job on EC2
4) Report back via curl to an API
5) Rinse and repeat
Everything works great. But because the connection always comes from my server (one IP), it gets stickied to a single EC2 machine.
ELB works well for sites whose load increases gradually. If you are expecting an unusual and sudden increase in load, you can ask AWS to pre-warm it for you.
I can tell you I have used ELB in different scenarios and it always worked well for me. As you didn't provide much information about your architecture, I would bet that ELB can work for you; for the case where all connections are hitting only one server, I would ask you:
1) Did you check the ELB to see how many instances are behind it?
2) Are all the instances that you have behind the ELB alive?
3) Are you accessing your application through the ELB DNS?
Anyway, here is an excerpt from an excellent article that makes a very good comparison between ELB and HAProxy: http://harish11g.blogspot.com.br/2012/11/amazon-elb-vs-haproxy-ec2-analysis.html
ELB provides Round Robin and Session Sticky algorithms based on EC2
instance health status. HAProxy provides variety of algorithms like
Round Robin, Static-RR, Least connection, source, uri, url_param etc.
Hope this helps.
This point comes as a surprise to many users using Amazon ELB. Amazon
ELB behaves little strange when incoming traffic is originated from
Single or Specific IP ranges, it does not efficiently do round robin
and sticks the request. Amazon ELB starts favoring a single EC2 or
EC2’s in Single Availability zones alone in Multi-AZ deployments
during such conditions. For example: If you have application
A(customer company) and Application B, and Application B is deployed
inside AWS infrastructure with ELB front end. All the traffic
generated from Application A(single host) is sent to Application B in
AWS, in this case ELB of Application B will not efficiently Round
Robin the traffic to Web/App EC2 instances deployed under it. This is
because the entire incoming traffic from application A will be from a
Single Firewall/ NAT or Specific IP range servers and ELB will start
unevenly sticking the requests to Single EC2 or EC2’s in Single AZ.
Note: Users encounter this usually during load test, so it is ideal to
load test AWS Infra from multiple distributed agents.
More info at Point 9 in the following article: http://harish11g.blogspot.in/2012/07/aws-elastic-load-balancing-elb-amazon.html
HAProxy is not hard to learn and is tremendously lightweight yet flexible. I actually use HAProxy behind ELB for the best of both worlds -- the hardened, managed, hands-off reliability of ELB facing the Internet and unwrapping SSL, and the flexible configuration of HAProxy letting me fine-tune how things hit my servers. I've never lost an HAProxy instance yet, but if I do, ELB will just take that one out of rotation... as I have seen happen when the back-end servers all became inaccessible, which (because of the way it's configured) makes ELB think the HAProxy is unhealthy, but that's by design in my setup.