AWS ALB Request Count Per Route

I want requests-per-unit-time metrics at the route/path level for my AWS Lambda application.
For example: path = /admin/options, requests per second = 200.
My application serves many routes like this.
I have looked through the documentation, but requests per unit time is not available at the route level.

As already pointed out, you have to do it by analyzing the ALB access logs. This process is much simpler now that Amazon Athena can query the ALB's logs directly in S3, as explained in Querying Application Load Balancer Logs.
This means you don't have to download the logs before processing or write any custom processing application; instead, you can run Amazon Athena queries against the logs.
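For example, once you have created the ALB logs table from that guide, a query like the one below counts requests per path per second. This is only a sketch run through boto3: the table and column names (alb_logs, request_url, time) follow the guide's schema, and the database and results bucket are hypothetical, so adjust all of them to your setup.

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Count requests per path per second. url_extract_path() strips the
    # scheme, host, and query string from the raw request_url field.
    QUERY = """
    SELECT url_extract_path(request_url) AS path,
           date_trunc('second', from_iso8601_timestamp(time)) AS second,
           count(*) AS requests
    FROM alb_logs
    GROUP BY 1, 2
    ORDER BY requests DESC
    LIMIT 100
    """

    athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "default"},  # assumed database name
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
    )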

Related

Multi-region API/Lambda architecture latency issue

We are trying to deploy our API Gateway/Lambda stack and route to it through Route 53 in the following regions:
ap-south-1
Lambda
API Gateway + Certificate for API Gateway + Custom Domain
us-east-1
Lambda
API Gateway + Certificate for API Gateway + Custom Domain
DynamoDB
AWS Elastic Search Service
Our Lambdas (ap-south-1, us-east-1) connect to DynamoDB (us-east-1) and the AWS Elasticsearch Service (us-east-1) to fetch data.
When we test the Lambda in us-east-1, it has an execution time of about 200 ms.
But when we test the Lambda in ap-south-1, it has an execution time of around 3 seconds.
The logic is the same in both Lambdas; the only difference is that the ap-south-1 Lambda calls DynamoDB/Elasticsearch in us-east-1.
We want to understand why it takes around 3 seconds when the Lambda executes from ap-south-1, since it is an inter-region request within AWS's network infrastructure.
What you are observing is a typical latency issue: the data store is too far from the application.
Also, your architecture is not truly multi-region. Even though you are in two regions, your application is unusable if us-east-1 goes down.
You should:
Enable cross-region replication of your DynamoDB tables (see the sketch after this list).
Have each Lambda/application hit only services in its own region, with no cross-region calls.
Replicate Elasticsearch using DynamoDB Streams.
If the Lambdas use SNS and SQS, hook those up via DynamoDB Streams as well.
This will ensure:
You have low-latency reads.
There are no issues during a regional outage.
But it comes with trade-offs:
Cost will be higher.
If writes are allowed from both regions, race conditions are possible.
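On the replication point: a minimal boto3 sketch using the create_global_table API, assuming both regional tables already exist with identical key schemas and DynamoDB Streams (NEW_AND_OLD_IMAGES) enabled; the table name is hypothetical.

    import boto3

    client = boto3.client("dynamodb", region_name="us-east-1")

    # Joins two existing regional tables into one global table; writes to
    # either region then replicate to the other via streams.
    client.create_global_table(
        GlobalTableName="my-table",  # hypothetical table name
        ReplicationGroup=[
            {"RegionName": "us-east-1"},
            {"RegionName": "ap-south-1"},
        ],
    )

With that in place, the ap-south-1 Lambda reads its local replica instead of crossing to us-east-1.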
As others have already said, it's probably a latency issue.
If you make multiple synchronous requests to a different region, those latencies add up.
To investigate further, you can try AWS X-Ray. It may give you some detail on where the latency develops.
https://aws.amazon.com/it/xray/
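Instrumenting the Lambda is mostly boilerplate. A minimal sketch, assuming the aws-xray-sdk package is bundled with the function and active tracing is enabled on it; the table name is hypothetical:

    import boto3
    from aws_xray_sdk.core import patch_all

    # patch_all() instruments boto3 (and other supported libraries) so every
    # downstream call appears as a timed subsegment in the trace.
    patch_all()

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("my-table")  # hypothetical table name

    def handler(event, context):
        # The cross-region DynamoDB round-trip shows up in X-Ray with its
        # own latency, so you can see exactly where the 3 seconds go.
        return table.get_item(Key={"id": event["id"]})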

How can I add IP-based rate limits with longer intervals on API Gateway?

I have an API Gateway endpoint that I would like to limit access to. For anonymous users, I would like to set both daily and monthly limits (based on IP address).
AWS WAF has the ability to set rate limits, but the interval for them is a fixed 5 minutes, which is not useful in this situation.
API Gateway has the ability to add usage plans with longer term rate quotas that would suit my needs, but unfortunately they seem to be based on API keys, and I don't see a way to do it by IP.
Is there a way to accomplish what I'm trying to do using AWS Services?
Is it maybe possible to use a usage plan and automatically generate an API key for each user who wants to access the API? Or is there some other solution?
Without more context on your specific use case, or the architecture of your system, it is difficult to give a “best practice” answer.
Like most things in tech, there are a few ways you could accomplish this. One way would be to use a combination of CloudWatch API logging, Lambda, DynamoDB (with Streams), and WAF.
At a high level (and regardless of this specific need), I’d protect my API using WAF and the AWS security automations quickstart, found here, and associate it with my API Gateway as guided in the docs here. Once my WAF is set up and associated with my API Gateway, I’d enable CloudWatch API logging for API Gateway, as discussed here. With that in place, I’d create two Lambdas.
The first will parse the CloudWatch API logs and write the data I’m interested in (IP address and request time) to a DynamoDB table. To avoid unnecessary storage costs, I’d set the TTL on each record I write to twice whatever my analysis window is, i.e., if I’m looking to limit to 1,000 requests per month, I’d set the TTL on my DynamoDB records to two months. From there, my CloudWatch API log group will have a subscription filter that sends log data to this Lambda, as described here.
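A sketch of that first Lambda, assuming the API Gateway access log format was configured as JSON with an ip field ($context.identity.sourceIp); the table name is hypothetical, and the table's TTL feature must be enabled on the expireAt attribute:

    import base64
    import gzip
    import json
    import time

    import boto3

    table = boto3.resource("dynamodb").Table("api-requests")  # hypothetical table

    TTL_SECONDS = 2 * 30 * 24 * 3600  # twice a ~30-day analysis window

    def handler(event, context):
        # CloudWatch Logs subscription filters deliver base64-encoded,
        # gzipped JSON payloads.
        data = base64.b64decode(event["awslogs"]["data"])
        payload = json.loads(gzip.decompress(data))
        for log_event in payload["logEvents"]:
            record = json.loads(log_event["message"])  # adjust to your log format
            table.put_item(Item={
                "ip": record["ip"],                     # partition key
                "requestTime": log_event["timestamp"],  # sort key, epoch ms
                "expireAt": int(time.time()) + TTL_SECONDS,  # TTL attribute
            })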
My second Lambda does the actual analysis and handles what happens when my limit is exceeded. This Lambda is triggered by the write event to my DynamoDB table, as described here. I can have it run whatever analysis I want, but I’m going to assume I want to limit access to 1,000 requests per month for a given IP. When a new DynamoDB item triggers my Lambda, the Lambda queries the DynamoDB table for all records created in the preceding month that contain that IP address. If the number of records returned is 1,000 or fewer, it does nothing. If it exceeds 1,000, the Lambda updates the WAF WebACL, specifically calling UpdateIPSet to reject traffic from that IP, and that’s it. Pretty simple (a sketch follows).
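A sketch of that second Lambda, using the same hypothetical table schema as above. It uses the classic WAF Regional API (the one API Gateway associates with), where every mutation needs a change token; the IPSet ID is a placeholder for the set referenced by your WebACL's block rule:

    import time

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("api-requests")  # hypothetical table
    waf = boto3.client("waf-regional")  # API Gateway uses WAF Regional

    IP_SET_ID = "my-ip-set-id"  # placeholder for your blocked-IP set
    LIMIT = 1000
    MONTH_MS = 30 * 24 * 3600 * 1000

    def handler(event, context):
        for record in event["Records"]:
            if record["eventName"] != "INSERT":
                continue
            ip = record["dynamodb"]["Keys"]["ip"]["S"]
            since = int(time.time() * 1000) - MONTH_MS
            # Count this IP's requests over the preceding month.
            resp = table.query(
                KeyConditionExpression=Key("ip").eq(ip) & Key("requestTime").gte(since),
                Select="COUNT",
            )
            if resp["Count"] > LIMIT:
                # Classic WAF requires a fresh change token per mutation.
                token = waf.get_change_token()["ChangeToken"]
                waf.update_ip_set(
                    IPSetId=IP_SET_ID,
                    ChangeToken=token,
                    Updates=[{
                        "Action": "INSERT",
                        "IPSetDescriptor": {"Type": "IPV4", "Value": ip + "/32"},
                    }],
                )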
With the above process, I have near-real-time monitoring of requests to my API Gateway in an efficient, cost-effective, scalable manner, deployed entirely serverless.
This is just one way to handle it; there are definitely others. You could accomplish this with, say, Kinesis and Elasticsearch, analyze CloudTrail events instead of logs, use a third-party solution that integrates with AWS, or something else.

AWS WAF - Auto-save Web Application Firewall logs to S3

How do you route AWS Web Application Firewall (WAF) logs to an S3 bucket? Is this something I can quickly do through the AWS Console? Or would I have to use a Lambda function (invoked by a CloudWatch timer event) to query the WAF logs every n minutes?
UPDATE:
I'm interested in the ACL logs (source IP, URI, matched rule, request headers, action, time, etc.).
UPDATE (05/15/2017)
AWS doesn't provide an easy way to view/parse these logs. You can get a "random sample" via the get-sampled-requests command, which isn't acceptable...
Gets detailed information about a specified number of requests--a sample--that AWS WAF randomly selects from among the first 5,000 requests that your AWS resource received during a time range that you choose. You can specify a sample size of up to 500 requests, and you can specify any time range in the previous three hours.
http://docs.aws.amazon.com/cli/latest/reference/waf/get-sampled-requests.html
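For completeness, the same sample is reachable programmatically; a boto3 sketch with placeholder IDs, subject to the 500-item / three-hour constraints quoted above:

    from datetime import datetime, timedelta

    import boto3

    waf = boto3.client("waf")

    resp = waf.get_sampled_requests(
        WebAclId="my-web-acl-id",  # placeholder
        RuleId="my-rule-id",       # placeholder: the rule whose matches to inspect
        TimeWindow={
            "StartTime": datetime.utcnow() - timedelta(hours=1),
            "EndTime": datetime.utcnow(),
        },
        MaxItems=500,  # hard upper bound on the sample size
    )
    for item in resp["SampledRequests"]:
        req = item["Request"]
        print(item["Timestamp"], req["ClientIP"], req["URI"], item["Action"])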
Also, I'm not the only one experiencing this issue:
https://forums.aws.amazon.com/thread.jspa?threadID=220202
I was looking for this functionality today and stumbled across the referenced thread. It was, coincidentally, updated today:
Hello,
Thanks for your input. I have submitted a feature request on your
behalf to export WAF events to S3 for long term analysis.
Best Regards, albertpataws
The lack of this feature strikes me as being almost as odd as the fact that I can't change timezones for graphs.

Using DynamoDB to replace logfiles

We are hosting our services on AWS Elastic Beanstalk managed instances, which is forcing us to move away from file-based logging toward database-based logging.
Is DynamoDB a good choice for replacing file-based logging? If so, what should the primary key be? I thought of using a timestamp, but multiple messages may be logged by the same service within the same timestamp, so that might not be reliable.
Any advice would be appreciated.
Don't use DynamoDB to store logs. You'll be paying for throughput and storage needlessly.
Amazon CloudWatch has built-in logging capabilities.
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/WhatIsCloudWatchLogs.html
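If you want to push application log lines there yourself rather than via an agent, PutLogEvents is all it takes; a minimal boto3 sketch with hypothetical group/stream names:

    import time

    import boto3

    logs = boto3.client("logs")

    GROUP, STREAM = "my-service", "instance-1"  # hypothetical names

    # Both calls raise ResourceAlreadyExistsException on re-runs; a real
    # logger would catch that and carry on.
    logs.create_log_group(logGroupName=GROUP)
    logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)

    # Timestamps are epoch milliseconds and events must be in order;
    # subsequent calls must pass the sequence token this call returns.
    logs.put_log_events(
        logGroupName=GROUP,
        logStreamName=STREAM,
        logEvents=[{"timestamp": int(time.time() * 1000),
                    "message": "order-service started"}],
    )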
Another alternative is a dedicated logging service such as Loggly, which is cloud-based and can receive logs in many common formats; they also have an API for sending custom logs. In the web-based console, you can search and filter through the logs.
As an alternative, why don't you use CloudWatch? I ended up writing a whole app to consolidate logs across EC2 instances in a Beanstalk app; then last year AWS opened up CloudWatch Logs as a service, so I junked my stuff. You tell CloudWatch where your logs are on the instance, give it a log group and stream name, and all your logs are consolidated in one spot in CloudWatch. You can also run alarms off them using the standard AWS setup. It's pretty slick and easy; you don't have to write a front end to do lookups, it's already there.
Don't know what you're using for logging; we are a Node.js shop, used Winston for logging, and there is a nice npm module that works with Winston to log automatically, called winston-cloudwatch.

Can I use AWS CloudWatch to hit a status URI?

Is it possible to use CloudWatch or other AWS services to hit a URI, e.g. www.mysite.com/status, and send me error alerts when it doesn't return a 200 result? I want service-level monitoring for a small site (and don't want to do any work).
Ideally, I'd like to hit the /status endpoint on a particular EC2 host, with the HTTP Host header set.
Thanks in advance.
edit: I recall something similar is available in Auto Scaling groups, where hosts are automatically taken down if they fail health checks. I'm looking for something similar, but I just want an email, not hosts taken down. (I'm running small sites on a shared host.)
You can't do it directly from CloudWatch, but you could set up a monitor on a separate server, construct the test, and then send a custom metric to CloudWatch using the CLI tools. Custom metrics (and the CloudWatch CLI) are covered here:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html
From a separate server, you could then run a simple script that tries to load your health page and sends 0 for healthy or 1 for unhealthy (or whatever works for you) to CloudWatch.
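A minimal sketch of that script, using the requests library and a hypothetical metric namespace; setting the Host header lets you target a specific site on a shared EC2 host:

    import boto3
    import requests

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    def check(url, host):
        try:
            # Send the Host header so the right virtual host answers.
            r = requests.get(url, headers={"Host": host}, timeout=10)
            healthy = (r.status_code == 200)
        except requests.RequestException:
            healthy = False
        # 0 = healthy, 1 = unhealthy; alarm on Sum or Maximum > 0 and
        # attach an SNS email action to the alarm.
        cloudwatch.put_metric_data(
            Namespace="SiteHealth",  # hypothetical namespace
            MetricData=[{
                "MetricName": "StatusCheckFailed",
                "Dimensions": [{"Name": "Site", "Value": host}],
                "Value": 0.0 if healthy else 1.0,
            }],
        )

    check("http://203.0.113.10/status", "www.mysite.com")  # hypothetical host/IP

Run it from cron every minute or so; the CloudWatch alarm's SNS action then handles the email.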
Doing this with CloudWatch and SNS is not straightforward. You could do it with Route 53 and DNS failover (a health-check sketch follows), but for what you need, have a look at Pingdom; they have a free plan somewhere if you search for it.
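If you do go the Route 53 route, health checks are created with one API call; a boto3 sketch with a hypothetical host and IP. Route 53 publishes a HealthCheckStatus metric to CloudWatch (in us-east-1) that you can alarm on, which gets you the email without touching DNS failover:

    import uuid

    import boto3

    route53 = boto3.client("route53")

    # Route 53 will request http://203.0.113.10/status with
    # Host: www.mysite.com every 30 seconds from multiple checkers.
    route53.create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig={
            "IPAddress": "203.0.113.10",  # hypothetical instance IP
            "Port": 80,
            "Type": "HTTP",
            "ResourcePath": "/status",
            "FullyQualifiedDomainName": "www.mysite.com",  # sent as Host header
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )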