I'm curious about using API Gateway resource policies to allow only a subset of IPs to access it. I'm wondering: if someone outside this IP range were to spam the endpoint, would that still incur costs, or do you only pay for "non-rejected" requests?
Thanks
would that still incur costs or do you only pay for "non-rejected" requests?
You do not pay for rejected requests. I have worked with the developers to confirm that the code that triggers the charges executes only after the request gets past the access controls.
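For reference, here is a minimal sketch of such an IP-allowlist resource policy applied with boto3. The API ID, region, and CIDR range are hypothetical placeholders; adapt them to your setup.

```python
import json
import boto3

# Hypothetical REST API ID and CIDR allowlist -- substitute your own values.
REST_API_ID = "abc123"
ALLOWED_CIDRS = ["203.0.113.0/24"]

# Allow invocation in general, then deny any request whose source IP is
# outside the allowlist. Denied requests are rejected at the gateway,
# before any billed execution happens.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ALLOWED_CIDRS}},
        },
    ],
}

client = boto3.client("apigateway", region_name="us-east-1")

# Attach the policy to the API; it takes effect once the stage is redeployed.
client.update_rest_api(
    restApiId=REST_API_ID,
    patchOperations=[
        {"op": "replace", "path": "/policy", "value": json.dumps(policy)}
    ],
)
```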
Related
Say I have an API call. It originates in a lambda that is in account 1234. It updates AWS resources in account 9876. Which account is hit for the SDK API limit? How is that determined?
I'm trying to see how scalable a management approach would be. I want a management account to work against resources elsewhere. However, if the management account maxes out on API limits, then I need to figure something else out. Thinking about AWS's existing multi-account systems, particularly Control Tower, it stands up CloudFormation templates in other accounts. I'm not sure if that's a clue or just one of their products using another one that happens to solve this kind of problem.
My use case doesn't let me simply use CloudFormation StackSets. The main reason is that I need to manage 3rd-party accounts: asking for delegated admin permissions is too permissive, and they only get 5 anyway.
Whenever you use API calls to perform actions on resources, the limits of the account the resources live in apply. In almost all cases (one exception: S3 buckets in requester-pays mode), that's also the account that has to pay for any resource usage, so it makes sense for the limits that protect the account to live there.
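To make the cross-account setup concrete, here is a minimal boto3 sketch of a Lambda in account 1234 assuming a role in account 9876; the role ARN and session name are hypothetical placeholders.

```python
import boto3

# Assume a role in the target account (9876). The Lambda's own execution
# role in account 1234 must be trusted by this role.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::987600000000:role/ManagementAccess",
    RoleSessionName="cross-account-management",
)["Credentials"]

# This EC2 client operates inside account 9876, so DescribeInstances is
# throttled against account 9876's API limits, not account 1234's.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances())
```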
In the Amazon S3 pricing list I saw that requests to a bucket cost money.
If I configure my bucket to be private, so that other users would get a 403 when requesting objects from it, would requests like this cost me money?
I've found the AWS Forum thread Charges for "403 Forbidden" and "404 Not Found" from more than a decade ago, which explains that the answer is yes. Does anyone know if that's still the case?
It sounds very strange to me, especially considering there are many automated tools that scan for buckets (and that's not even considering the case of an intentional attack against a specific bucket).
Thanks.
Looking for a budget-friendly IDS/IPS for my servers. The time frame would be once every three months. Are there any services I can use to run a test once every 3 months and pay only for that particular period of time?
There are services like AWS Shield and AWS WAF that you can use for IDS/IPS.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced.
AWS WAF
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can get started quickly using Managed Rules for AWS WAF, a pre-configured set of rules managed by AWS or AWS Marketplace Sellers. The Managed Rules for WAF address issues like the OWASP Top 10 security risks. These rules are regularly updated as new issues emerge. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of security rules.
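As a rough illustration of the WAF route, here is a hedged boto3 sketch that creates a web ACL with a single rate-based rule. The names, the 2,000-requests-per-5-minutes limit, and the CLOUDFRONT scope are assumptions you would adapt to your own setup.

```python
import boto3

# Scope "CLOUDFRONT" requires the us-east-1 region; use "REGIONAL" for
# ALBs or API Gateway stages in other regions.
client = boto3.client("wafv2", region_name="us-east-1")

client.create_web_acl(
    Name="rate-limit-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "per-ip-rate-limit",
            "Priority": 0,
            # Block any single client IP exceeding ~2000 requests
            # per 5-minute window.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "PerIpRateLimit",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitAcl",
    },
)
```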
You can also buy third-party software that you can run on EC2 instances for IDS/IPS.
Intrusion Detection & Prevention Systems
EC2 instance IDS/IPS solutions offer key features to help protect your EC2 instances. This includes alerting administrators of malicious activity and policy violations, as well as identifying and taking action against attacks. You can use AWS services and third-party IDS/IPS solutions offered in AWS Marketplace to stay one step ahead of potential attackers.
I have an S3 bucket which holds a generated sitemap file, which needs to be publicly accessible. I'm afraid that if someone finds out about the URL and DDoSes it, it could cost me a fortune. Is there a way to rate-limit the requests per second accessing an S3 bucket?
1) You can go with a Content Delivery Network (CDN). With a CDN that specializes in DDoS protection, you can set up a web service to serve the S3 files and cache them based on the query string.
2) You can use API Gateway in front of your S3 requests to limit the number of requests. But I am afraid that in case of a DDoS attack, you would lock real users out of making requests.
3) You can use a CDN with a WAF (Web Application Firewall), where you can define rules to safeguard against DDoS attacks. I am not sure whether it will work directly with S3, but using a combination of CloudFront and CloudWatch logs you can implement this.
Reference
If it is your personal AWS account and you have access to billing alerts and budgets, you can set up an alarm to notify you at a threshold, and stop a particular service like S3 at a threshold.
Using AWS Budgets to stop a service
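Here is a minimal sketch of the alerting half of that approach using boto3. The account ID, dollar limit, service filter, and email address are placeholders, and actually stopping a service would still require a budget action or automation triggered by this alert.

```python
import boto3

budgets = boto3.client("budgets")

# A monthly cost budget scoped to S3, with an email alert at 80% of the limit.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "s3-spend-guard",
        "BudgetLimit": {"Amount": "10.0", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        "CostFilters": {"Service": ["Amazon Simple Storage Service"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
            ],
        }
    ],
)
```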
I'm working on a website that contains photo galleries, and these images are stored on Amazon S3. Since Amazon charges roughly $0.01 per 10k GET requests, it seems that a potential troll could seriously drive up my costs with a bot that makes millions of page requests per day.
Is there an easy way to protect myself from this?
The simplest strategy would be to create randomized URLs for your images.
You can serve these URLs with your page information, but they cannot easily be guessed by a brute-forcer, and guessed URLs will usually just lead to a 404.
So something like yoursite/images/long_random_string.
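A tiny sketch of how you might generate such keys in Python; the path layout is just an example.

```python
import secrets

# 32 random bytes gives ~256 bits of entropy -- far beyond brute-force range.
def random_image_key(extension: str = "jpg") -> str:
    return f"images/{secrets.token_urlsafe(32)}.{extension}"

print(random_image_key())  # e.g. images/<long_random_string>.jpg
```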
Add the AWS CloudFront service in front of your S3 image objects, so cached data is retrieved from edge locations.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/MigrateS3ToCloudFront.html
As @mohan-shanmugam pointed out, you should use a CloudFront CDN with your S3 bucket as the origin. It is considered bad practice for external entities to hit S3 buckets directly.
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
With a CloudFront distribution, you can alter your S3 bucket's security policy to only allow access from the distribution. This will block direct access to S3 even if the URLs are known.
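As a sketch of that bucket policy, assuming the distribution uses Origin Access Control; the bucket name, account ID, and distribution ID are placeholders.

```python
import json
import boto3

BUCKET = "my-gallery-bucket"

# Allow reads only when the request comes through the named CloudFront
# distribution; direct S3 URLs are denied by default once this is in place.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EDFDVBD6EXAMPLE"
                }
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```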
In reality, you would likely see website performance suffer well before you needed to worry about additional charges, as a direct DDoS attempt against S3 should result in AWS throttling API requests.
In addition, you can set up AWS WAF in front of your CloudFront distribution and use it for advanced control of security related concerns.