What happens if a botnet uses HTTP requests to drain my Amazon AWS server? - amazon-web-services

I'm going to launch an app and I'm worried that my competitors could kill me simply by draining my Amazon AWS resources, using a botnet to send gibberish HTTP requests to my AWS account. I only have a few thousand dollars and I cannot afford to be slaughtered like that.
In what other ways could my competitors or haters drain my server resources to drain my bank balance, and how do I prevent it?
Please help. I'm in a very stressful situation and I can't get an answer to this question. Any suggestion is welcome.
Thanks.

As pointed out by @morras, AWS Shield + WAF is a good combination to protect your resources from spam requests. Since you have not described your architecture or which AWS services you are actually using, I will answer in general terms.
AWS Shield comes in two tiers:
Standard - Automated mitigation techniques are built into AWS Shield Standard, giving you protection against the most common, frequently occurring infrastructure attacks. If you have the technical expertise to create your own rules based on your request patterns, you can go with this.
Advanced - AWS WAF comes free with this tier, and you get 24x7 access to the AWS DDoS Response Team (DRT): support experts who apply manual mitigations for more complex and sophisticated DDoS attacks, directly create or update AWS WAF rules, and can recommend improvements to your AWS architecture. It also includes some cost protection against Amazon EC2, Elastic Load Balancing, Amazon CloudFront, and Amazon Route 53 usage spikes that could result from scaling during a DDoS attack.
Please take a look at designing a resilient architecture on AWS to mitigate DDoS.
Update: if the AWS Shield Advanced team determines that the incident is a valid DDoS attack and that the underlying services scaled to absorb the attack, AWS provides account credit for charges incurred due to the attack. For example, if your legitimate CloudFront data transfer usage during the attack period was 20 GB, but due to the attack you incurred charges for 200 GB of incremental data transfer, AWS provides credit to offset the incremental data transfer charges. AWS automatically applies all credits toward your future monthly bills. Credits are applied towards AWS Shield and cannot be used as payment for other AWS services. Credits are valid for 12 months.
The services covered, per the documentation, are Amazon CloudFront, Elastic Load Balancing, Route 53 and Amazon EC2. Please check with AWS support whether your services are covered.

There are a couple of options available. First of all, AWS provides AWS Shield, which is a DDoS protection service. The Standard tier is free and covers the most frequently occurring network and transport layer DDoS attacks.
On top of that you can consider using AWS WAF (Web Application Firewall), which allows you to set up rules for what traffic to allow through to your servers.
You can also put API Gateway in front of your service and set throttling limits on how much traffic to allow through.
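For example, if your API is already fronted by API Gateway, a stage-wide throttle can be applied with a few lines of boto3. This is only a sketch: the API id, stage name and limits below are placeholders, and you would pick limits that match your expected legitimate traffic.

```python
import boto3

# Hypothetical REST API and stage identifiers -- replace with your own.
API_ID = "a1b2c3d4e5"
STAGE = "prod"

apigw = boto3.client("apigateway")

# Apply a stage-wide throttle: at most 100 requests/second, bursts up to 200.
apigw.update_stage(
    restApiId=API_ID,
    stageName=STAGE,
    patchOperations=[
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)
```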
However, I would question whether you really need this. It sounds like you are worried that you would run up a huge AWS bill if competitors started sending you millions of requests. You can set up billing alerts so that when your forecasted bill exceeds a specific threshold you are warned; you can then either manually shut down the services that are being bombarded and figure out what the attacks look like, or trigger an automatic response via CloudWatch. I believe you will find that you are not under attack and that you should not worry too much about this attack vector at this time.
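As a rough sketch of the billing-alert idea (assuming you have enabled "Receive Billing Alerts" in the billing preferences, which publishes the EstimatedCharges metric in us-east-1; the threshold and SNS topic ARN below are placeholders):

```python
import boto3

# Billing metrics are only published to us-east-1, and only after
# "Receive Billing Alerts" is enabled in the billing preferences.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # billing data updates roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,              # alert once the estimated bill exceeds $100
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic that emails/pages you -- create it separately.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```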

Related

AWS load balancer log analyzer

I'm new to the AWS world. My goal is to find, as quickly as possible when there are problems, the top IPs making requests to my Elastic Load Balancer from its logs, and if possible who they are, or do some further inspection on them. I have only found paid services. Does anyone know a free application, or maybe a website, that analyzes AWS ELB logs?
As far as I know there is no completely free solution, but there are cheap ones.
You can monitor your load balancer with access logs, CloudWatch metrics, request tracing and CloudTrail logs.
I don't understand exactly what you want, but here are some possible solutions.
If you're afraid of being attacked and you need immediate protection (against security scans, DDoS etc.), you can use AWS's own services. AWS Shield Standard is automatically included at no extra cost, and for added protection against DDoS attacks AWS offers AWS Shield Advanced: https://docs.aws.amazon.com/shield/
WAF is also good against attacks. You can create rules, rule actions etc. Sadly it's not completely free; it is priced pay-as-you-use: https://aws.amazon.com/waf/pricing/
You can store the access logs in S3 and analyse them later, but this can get costly in the end (and it's not real time).
You can analyse your log records with a Lambda function. In this case you need DynamoDB or some other NoSQL store to keep state (Lambda and DynamoDB are pay-as-you-use and cheap, but not free); see the sketch after this list.
Keep in mind that:
The load balancer and Lambda also increment the corresponding CloudWatch metrics (cheap, but not free).
You will pay for outgoing data transfer, i.e. from AWS to the internet. 1 TB/month/account is always free (through CloudFront): https://aws.amazon.com/free/
You should use AWS's own services if you want a cheap and good solution.
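To illustrate the Lambda-plus-DynamoDB idea above, here is a rough sketch of a handler triggered by S3 object-created events on the access-log bucket that keeps a running per-IP request count. It assumes Application Load Balancer log format (gzip-compressed, client ip:port in the fourth field) and a hypothetical DynamoDB table named elb-ip-counts with a string partition key "ip":

```python
import gzip
import shlex
from collections import Counter
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")
# Hypothetical DynamoDB table with a string partition key named "ip".
table = boto3.resource("dynamodb").Table("elb-ip-counts")


def handler(event, context):
    """Triggered by S3 object-created events on the access-log bucket."""
    counts = Counter()
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ALB access logs are gzipped, space-separated lines with quoted fields;
        # the client ip:port is the fourth field of each line.
        for line in gzip.decompress(body).decode("utf-8").splitlines():
            fields = shlex.split(line)
            if len(fields) > 3:
                counts[fields[3].rsplit(":", 1)[0]] += 1
    # Accumulate the per-IP request counts in DynamoDB.
    for ip, n in counts.items():
        table.update_item(
            Key={"ip": ip},
            UpdateExpression="ADD request_count :n",
            ExpressionAttributeValues={":n": n},
        )
```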
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses.
But keep in mind that access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.
There are many complex, paid applications that return information about access logs, but I can suggest a simple, easy-to-use website that I use when I want to see the top requesters on our load balancer.
The website is https://vegalog.net
You only upload your log file taken from the S3 bucket, and it returns a report with the top requesters, who they are (using a whois lookup), response times and other useful information.
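If you would rather not upload logs to a third-party site, a small script can produce a similar top-requester report locally. This is only a sketch: it assumes Application Load Balancer logs (gzip-compressed, client ip:port in the fourth field of each line), and the bucket name and prefix are placeholders.

```python
import gzip
import shlex
from collections import Counter

import boto3

# Placeholders: the bucket/prefix where your load balancer writes access logs.
BUCKET = "my-elb-logs"
PREFIX = "AWSLogs/123456789012/elasticloadbalancing/"

s3 = boto3.client("s3")
counts = Counter()

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        for line in gzip.decompress(body).decode("utf-8").splitlines():
            fields = shlex.split(line)
            if len(fields) > 3:
                # Strip the port from the client ip:port field.
                counts[fields[3].rsplit(":", 1)[0]] += 1

# Print the 20 busiest client IPs.
for ip, n in counts.most_common(20):
    print(f"{n:>8}  {ip}")
```

You could then run whois on the top addresses (for example with the whois command-line tool) to see who they belong to.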

Are requests to AWS’ DescribeInstances endpoint free?

I'm working on a series of tutorials that rely on AWS EC2 instances. I'd like to give users a chance to play around with a limited AWS environment.
DescribeInstances is the only endpoint I need for that. However, I'd like to make sure that the possibility of someone spamming that endpoint with thousands/millions of requests won't incur thousand dollar charges on my account.
I tried asking someone at work about it, and they said they've never been charged for Describe requests. However, I'd like some more confirmation on that, which is why I'm asking this question.
NOTE: I've tried asking AWS support, but they are very slow to respond.
The Amazon EC2 pricing page has no mention of request-based charges. This differs from other services (for example, Amazon S3) that do specifically mention a request charge.
Therefore, it would seem that there is no per-request charge for Amazon EC2.

AWS vs GCP Cost Model

I need to make a cost model for AWS vs GCP. Currently, our organization is using AWS. Our biggest services used are:
EC2
RDS
Lambda
API Gateway
S3
Elasticache
Cloudfront
Kinesis
I have very limited knowledge of cloud platforms. However, I have access to:
AWS Simple Monthly Calculator
Google Cloud Platform Pricing Calculator
MAP AWS services to GCP products
I also have access to CloudHealth so that I can get a breakdown of costs per services within our organization.
Of the 8 major services listed above, the main usage and costs go to EC2, S3, and RDS.
Our director of engineering mentioned that I should be most concerned with vCPU and memory.
I would appreciate any insight (big or small) that people have into how I can go about creating this model, any other factors I should consider, which functionalities of the two providers for the services are considered historically "better" or cheaper, etc.
Thanks in advance, and any questions people may have, I am more than happy to answer.
-M
You should certainly cost-optimize your resources. It's so easy to create cloud resources that people don't always think about turning things off or right-sizing them.
Looking at your Top 5...
Amazon EC2
The simplest way to save money with Amazon EC2 is to turn off unused resources. You can even stop instances overnight and on the weekend. If they are only used 8 hours per workday, then that is only 40 out of 168 hours, so you can save roughly 75% by turning them off when unused, for example Dev and Test instances. People have written various automated utilities to turn instances on and off based on tags; try searching the internet for "AWS Stopinator".
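As a sketch of what such a "stopinator" script might look like, assuming you tag the instances you want stopped overnight with a hypothetical AutoStop=true tag and run the script on a schedule (for example from cron or a scheduled Lambda function):

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the hypothetical AutoStop=true tag.
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:AutoStop", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (not terminate) them; EBS volumes and data are preserved.
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)
```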
Another way to save money on Amazon EC2 is to use spot instances. They are a fraction of the price, but carry the risk that they might be terminated when demand increases. They are great where it is okay for systems to be terminated sometimes, such as automated testing systems. They are also a great way to supplement existing capacity at a fraction of the price.
If you definitely need the Amazon EC2 instances to keep running all the time, purchase Amazon EC2 Reserved Instances, which also offer a price saving.
Chat with your AWS Account Manager for help with the above options.
Amazon Relational Database Service (RDS)
Again, Amazon RDS instances can be stopped overnight/on weekends and turned on again when needed. You only pay while the instance is running (plus storage costs).
Examine the CloudWatch metrics for your RDS instances and determine whether they can be downsized without impacting applications. You can even resize them when they are used less (e.g. over weekends). Everything can be scripted, so you could trigger such downsizing and upsizing on a schedule.
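A scheduled downsize might look something like the sketch below; the instance identifier and instance classes are placeholders, and a matching script would resize it back up before the working week starts:

```python
import boto3

rds = boto3.client("rds")

# Placeholder instance identifier and a smaller weekend instance class.
rds.modify_db_instance(
    DBInstanceIdentifier="reporting-db",
    DBInstanceClass="db.t3.medium",   # downsized from, say, db.m5.large
    ApplyImmediately=True,            # note: resizing causes a brief outage
)

# For Dev/Test databases you can go further and stop them entirely
# (storage is still billed while stopped, compute is not):
# rds.stop_db_instance(DBInstanceIdentifier="dev-db")
```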
Also look at the Engine used with RDS. Commercial offerings such as Oracle and Microsoft SQL Server are more expensive than open-source offerings like MySQL and PostgreSQL. Yes, your applications might need some changes, but the cost savings can be significant.
AWS Lambda
It is most unusual that Lambda is #3 in your list. In fact, some customers never get a charge for Lambda because it falls in the monthly free usage tier. Having high charges means you're making good use of Lambda (which is saving you EC2 costs), but take a look at which applications are using it the most and see whether they are using it wisely.
When used correctly, a Lambda function should only ever run for a few seconds, so check whether any applications seem to be using it outside this pattern.
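One way to check that is to pull the Duration metric for a function from CloudWatch. A sketch (the function name is a placeholder):

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,                          # hourly datapoints
    Statistics=["Average", "Maximum"],
    Unit="Milliseconds",
)

# Print hourly average and maximum run times to spot long-running invocations.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"]), round(point["Maximum"]))
```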
AWS API Gateway
Once again, these costs tend to be low ($3.50 per million calls), so again I'd recommend trying to figure out how it is being used. If you really need that many calls, that would also explain the high Lambda costs. It would probably be more expensive to provide the same functionality via Amazon EC2.
Amazon S3
Consider using different Storage Classes to reduce your costs (a lifecycle-rule sketch follows this list). Costs can be reduced by:
Moving infrequently-accessed data to a different storage class
Moving data to One Zone-IA (if you have a copy of the data elsewhere, so you don't need the redundancy)
Archiving infrequently-accessed data to Amazon Glacier, which offers much cheaper storage but does not have instant access
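These transitions can be automated with an S3 lifecycle rule rather than moving objects by hand. A sketch follows; the bucket name, prefix and day thresholds are placeholders, and remember that Glacier retrievals take time and incur their own fees.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-data",                      # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"}, # placeholder prefix
                "Transitions": [
                    # Infrequently accessed after 30 days -> cheaper IA class.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Rarely touched after 180 days -> archive to Glacier.
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```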
With GCP, you can benefit by receiving discounts such as the Committed Use Discount and the Sustained Use Discount.
With a Committed Use Discount, you can receive a discount of up to 70% if your usage is predictable.
With the Sustained Use Discount, there is an incremental discount if you reach certain usage thresholds.
On your concern with vCPU and memory, you may use predefined machine types. They are cheaper than custom machine types.
Lastly, you can also test the charges by trying out the Google Cloud Platform Free Tier.

How to ensure AWS Elastic Beanstalk is free

I want to deploy a Django webapp with a PostgreSQL database to AWS Elastic Beanstalk using this tutorial, but I am so confused about pricing. It says it uses services in the AWS Free Tier, but those seem to be limited to a certain number of hours a month, so how do I make sure I don't go above that threshold? And how do I make sure I'm only using free services? They even require a card on file, so it seems really hard to make sure I don't get charged.
You can do the following configuration to make sure you stay within the AWS Elastic Beanstalk one-year Free Tier (a sketch of these settings follows the list).
Use only micro instances for the web server and the RDS instance.
Limit the web server's scaling to a maximum of 1, or use a single-instance deployment without autoscaling.
When selecting storage, use less than 30 GB of EBS and don't enable provisioned IOPS or throughput.
Apart from these, there are usage-based costs for network, EBS IOPS etc., which include a free quota; the cost is not significant for light use cases.
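As a sketch of those settings applied through boto3 (the application name, environment name and solution stack are placeholders; you can list current stacks with list_available_solution_stacks, and the application itself must already exist):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="my-django-app",             # placeholder application name
    EnvironmentName="my-django-app-env",
    # Pick a current Python stack from eb.list_available_solution_stacks().
    SolutionStackName="64bit Amazon Linux 2 v3.3.0 running Python 3.8",
    OptionSettings=[
        {   # Free Tier eligible instance size
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "InstanceType",
            "Value": "t2.micro",
        },
        {   # Single instance: no load balancer, no autoscaling beyond one instance
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "SingleInstance",
        },
    ],
)
```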
The AWS Free Tier allows AWS accounts to use a certain amount of services for no charge. Any usage beyond the free tier limits will result in a charge on your credit card.
The Free Tier is intended to provide a trial of AWS services. It is not intended for production use, nor is there any guaranteed way to stay within the free limits. It is up to you to monitor your usage.
There is no such thing as a totally free AWS account.
I have found the "Cost Management Preferences" -> "Receive Free Tier Usage Alerts" setting in the Billing preferences menu. Hopefully this will be enough for small personal projects with low usage. I would guess it is not enough for large projects, since this is only a notification.
In short, you can absolutely make sure that your app stays free, just not from within the AWS interface. You'll have to use your own usage monitoring to ensure you stay within the free limits as others state.
As Ashan said, this is a pretty silly approach since the fees are nominal and the alternative is a loss of service; however, AWS does offer APIs to help you do this through CloudWatch.
CloudWatch exposes pretty much all of the billable metrics on a service-by-service basis, for example the metrics for EC2 and the metrics for S3. After starting your services through Beanstalk, just look up all the services you're using via the billing page of the AWS console, look up the CloudWatch metrics for each, then check them.
At least for EC2 there are even customizable alarms and actions, including shutting down the instance (see the Monitoring tab at the bottom of the EC2 console). I'm not sure, but you might have to manually push status updates to their status system for some of the other metrics. If so, it's not that difficult: set up an access key for an IAM identity so you can check CloudWatch from the command line, then write a watchdog script that runs on the instance and uses the AWS CLI to regularly poll CloudWatch and call your shutdown code (or update your status) if you're over some percentage of your quota.
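A minimal sketch of such a watchdog, assuming billing alerts are enabled so the EstimatedCharges metric exists, and using a placeholder budget and instance id:

```python
from datetime import datetime, timedelta

import boto3

BUDGET_USD = 10.0
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder instance to protect

# Billing metrics live in us-east-1 and require billing alerts to be enabled.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
ec2 = boto3.client("ec2")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    StartTime=datetime.utcnow() - timedelta(hours=12),
    EndTime=datetime.utcnow(),
    Period=21600,
    Statistics=["Maximum"],
)

datapoints = stats["Datapoints"]
current = max(p["Maximum"] for p in datapoints) if datapoints else 0.0
print(f"Estimated month-to-date charges: ${current:.2f}")

if current > BUDGET_USD:
    # Over budget: stop the instance rather than keep accruing charges.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
```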

Amazon AWS billing clients per usage

Suppose you have an app on AWS and you want to charge clients for the storage they each use, per GB. Is there a way to get this information from Amazon, or to collect it yourself, if you are using your own AWS account for this (the clients have no AWS accounts of their own)?
For example: 10 GB used at the end of the month, which I have to charge for. How do I figure out what to bill each of the 5 clients?
Can Amazon give this info? If Amazon can't provide it, how do I do it myself?
Same question for storage/bandwidth and processing time.
Basically, do what Amazon does :P
Even if that is hard, how do I ensure, if I sell a package of 1 GB/month (storage example), that the customer doesn't go over? Are there any patterns for handling this (as in code patterns I can use)?
Amazon provides a service that I think does exactly what you want, called DevPay, which has the ability to track and charge users for S3 usage.
http://aws.amazon.com/devpay/
From the DevPay documentation:
"Amazon DevPay is a simple-to-use online billing and account management service that makes it easy for businesses to sell applications that are built in, or run on top of, Amazon Web Services. It is designed to make running applications in the cloud and on demand easier for developers."
If you can't use this for some reason, then it's up to you to track your users' usage within your application...
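For the storage part specifically, a common approach if DevPay is not an option is to store each client's objects under their own prefix, such as clients/<client-id>/..., and sum the object sizes per prefix at billing time. A sketch with placeholder bucket and prefix names:

```python
from collections import defaultdict

import boto3

BUCKET = "my-app-storage"        # placeholder bucket
CLIENT_PREFIX = "clients/"       # objects stored as clients/<client-id>/...

s3 = boto3.client("s3")
usage_bytes = defaultdict(int)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=CLIENT_PREFIX):
    for obj in page.get("Contents", []):
        # clients/<client-id>/path/to/file -> <client-id>
        client_id = obj["Key"].split("/")[1]
        usage_bytes[client_id] += obj["Size"]

for client_id, size in sorted(usage_bytes.items()):
    gb = size / (1024 ** 3)
    print(f"{client_id}: {gb:.2f} GB")
    # Compare against the client's plan (e.g. 1 GB) and bill accordingly.
```

Bandwidth and processing time are usually metered the same way: record them per request in your own application (or pull them from CloudWatch) and aggregate per client. To enforce a 1 GB cap, check the client's running total before accepting a new upload and reject it once the quota is reached.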