I'm located in the EU, and I have an edge-optimized REST API Gateway deployed in us-east-1. My API Gateway proxies all requests to a Lambda function, also located in us-east-1.
So I'm assuming that whenever I send a request to this API Gateway, I'm being served from a nearby location (such as eu-central-1) instead of us-east-1, because of the edge functionality.
I also tested this behavior by changing the API type to "Regional" and then back to "Edge-Optimized", and observed the following results:
Regional:
DNS lookup:          6 ms   (namelookup:      6 ms)
TCP connection:    113 ms   (connect:       119 ms)
TLS handshake:     247 ms   (pretransfer:   366 ms)
Server processing: 141 ms   (starttransfer: 507 ms)
Content transfer:    1 ms   (total:         508 ms)
As you can see, the TCP connection and TLS handshake take longer with the Regional type, as expected, because of the round trips between the EU and the US.
Edge-Optimized:
When using the Edge-Optimized type, I noticed that server processing takes much longer, while the TCP and TLS parts are fast:
DNS lookup:          5 ms   (namelookup:      5 ms)
TCP connection:      7 ms   (connect:        12 ms)
TLS handshake:      17 ms   (pretransfer:    29 ms)
Server processing: 351 ms   (starttransfer: 380 ms)
Content transfer:    1 ms   (total:         381 ms)
My theory is that my requests go to a nearby API Gateway edge location (such as one in eu-central-1), but the latency between that edge node and the Lambda function located in us-east-1 is high, so the server-processing phase takes long.
How do edge-optimized API Gateways deal with Lambda functions deployed in only a single region?
What's the best practice to speed up this endpoint?
Thanks!
I am trying to test my Lambda function locally using sam local invoke. The error says: UnknownEndpoint: Inaccessible host: `secretsmanager.us-east-1.amazonaws.com` at port `undefined`.
This error is thrown from inside my Lambda function code, as that is where I pull secrets from. I have tried the --region and --profile options as well, but no luck.
For context, I am using Terraform to design and deploy my infrastructure, with SAML authorization and a credentials file for AWS access to our VPC environment. I have verified that the region is set correctly when SAM spins up the Lambda Docker container. I have also verified that I am providing the same parameters for the Lambda to identify Secrets Manager as the version running in the VPC.
The only thing that looks odd is the port being undefined in the console output, which seems to come from inside the AWS SDK. Note that when I used the Secrets Manager Terraform module created by our company's cloud engineering team, I didn't have to provide any port information. I hope someone can help explain this error.
USACCMNBSTEMD6R:balance-inquiry czl74b$ sam local invoke -t ./sam-local/template.yaml -e ./sam-local/event.json --debug
2022-01-06 17:23:29,736 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-01-06 17:23:29,736 | Using config file: samconfig.toml, config environment: default
2022-01-06 17:23:29,736 | Expand command line arguments to:
2022-01-06 17:23:29,736 | --template_file=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml --event=./sam-local/event.json --no_event --layer_cache_basedir=/Users/czl74b/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2022-01-06 17:23:29,736 | local invoke command is called
2022-01-06 17:23:29,743 | No Parameters detected in the template
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | 3 stacks found in the template
2022-01-06 17:23:29,762 | No Parameters detected in the template
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,775 | 3 resources found in the stack
2022-01-06 17:23:29,775 | No Parameters detected in the template
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | No Parameters detected in the template
2022-01-06 17:23:29,802 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,802 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,803 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,803 | --base-dir is not presented, adjusting uri ../../../../common-utils relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,803 | No Parameters detected in the template
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | --base-dir is not presented, adjusting uri ../../../../npm-libs relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,815 | Found Serverless function with name='BalanceInquiry' and CodeUri='../'
2022-01-06 17:23:29,816 | --base-dir is not presented, adjusting uri ../ relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,840 | Found one Lambda function with name 'BalanceInquiry'
2022-01-06 17:23:29,840 | Invoking main.handler (nodejs14.x)
2022-01-06 17:23:29,840 | Environment variables overrides data is standard format
2022-01-06 17:23:29,840 | Loading AWS credentials from session with profile 'None'
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry
2022-01-06 17:23:29,850 | Resolved absolute path to code is /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry is not a zip/jar file
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/common-utils is not a zip/jar file
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/npm-libs is not a zip/jar file
2022-01-06 17:23:29,850 | CommonUtils is a local Layer in the template
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/common-utils
2022-01-06 17:23:29,850 | NpmLibs is a local Layer in the template
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/npm-libs
2022-01-06 17:23:29,851 | arn:aws:lambda:us-east-1:027255383542:layer:AWS-AppConfig-Extension:55 is already cached. Skipping download
Building image................................
2022-01-06 17:23:41,146 | Skip pulling image and use local one: samcli/lambda:nodejs14.x-x86_64-d5b52b0afc3579e405e95c7df.
2022-01-06 17:23:41,146 | Mounting /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry as /var/task:ro,delegated inside runtime container
2022-01-06 17:23:41,598 | Starting a timer for 3 seconds for function 'BalanceInquiry'
START RequestId: 3b9f7abb-02d1-46e8-8b6b-321f9e5467ed Version: $LATEST
2022-01-07T00:23:43.539Z 3b9f7abb-02d1-46e8-8b6b-321f9e5467ed INFO getSecrets :: getSecretValue Error: UnknownEndpoint: Inaccessible host: `secretsmanager.us-east-1.amazonaws.com' at port 'undefined'. This service may not be available in the `us-east-1' region.
sam local invoke runs the Lambda function as a Docker container. If you are behind a corporate proxy, the AWS SDK inside this Lambda needs a proxy set up to communicate with the actual AWS services. I was able to resolve this by using the proxy-agent npm module. You can read about it here:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/node-configuring-proxies.html
Here is what this looked like in the code:
const AWS = require('aws-sdk');
const { HTTP_PROXY, LOCAL } = process.env;

if (LOCAL === 'TRUE') {
  // Lazy-load proxy-agent only in LOCAL, for sam local testing
  const proxy = require('proxy-agent');
  AWS.config.update({ httpOptions: { agent: proxy(HTTP_PROXY) } });
}
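For this to work under sam local, the LOCAL and HTTP_PROXY variables have to reach the container; sam local invoke accepts an --env-vars JSON file keyed by the function's logical ID. A hypothetical env.json for the BalanceInquiry function above (the proxy URL is a placeholder):

```json
{
  "BalanceInquiry": {
    "LOCAL": "TRUE",
    "HTTP_PROXY": "http://proxy.example.com:8080"
  }
}
```

Then invoke with: sam local invoke -t ./sam-local/template.yaml -e ./sam-local/event.json --env-vars env.json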
How to use AWS services like CloudTrail or CloudWatch to check which user performed a DeleteObject event?
I can use an S3 event to send a delete notification to SNS to notify an email address that a specific file has been deleted from the S3 bucket, but the message does not contain the username of whoever did it.
I can use CloudTrail to log all events related to an S3 bucket to another bucket, but when I tested it, it logs many details and only the PutObject event, not DeleteObject.
Is there an easy way to monitor an S3 bucket to find out which user deleted which file?
Update 19 Aug
Following Walt's answer below, I was able to log the DeleteObject event. However, I can only get the file name (requestParameters.key) for PutObject, not for DeleteObjects.
| # | #timestamp | userIdentity.arn | eventName | requestParameters.key |
| - | ---------- | ---------------- | --------- | --------------------- |
| 1 | 2019-08-19T09:21:09.041-04:00 | arn:aws:iam::ID:user/me | DeleteObjects | |
| 2 | 2019-08-19T09:18:35.704-04:00 | arn:aws:iam::ID:user/me | PutObject | test.txt |
It looks like other people have had the same issue and AWS is working on it: https://forums.aws.amazon.com/thread.jspa?messageID=799831
Here is my setup.
Detailed instructions on setting up CloudTrail in the console. When setting up CloudTrail, double-check these two options:
That you are logging S3 writes. You can do this for all S3 buckets or just the one you are interested in. You don't need to enable read logging to answer this question.
And that you are sending events to CloudWatch Logs.
If you made changes to the S3 write logging, you might have to wait a little while. If you haven't had breakfast, lunch, a snack, or dinner, now would be a good time.
If you're using the same default CloudWatch log group as I have above, this link to a CloudWatch Logs Insights search should work for you.
This is a query that will show you all S3 DeleteObject calls. If the link doesn't work:
Go to the CloudWatch console.
Select Logs -> Insights on the left-hand side.
Under "Select log group(s)", enter the log group you specified above.
Enter this in the query field:
fields #timestamp, userIdentity.arn, eventName, requestParameters.bucketName, requestParameters.key
| filter eventSource == "s3.amazonaws.com"
| filter eventName == "DeleteObject"
| sort #timestamp desc
| limit 20
If you have had any CloudTrail S3 DeleteObject calls in the last 30 minutes, the last 20 events will be shown.
As of 2021-04-12, CloudTrail does not record the object key(s) or path for DeleteObjects calls.
If you delete an object with the S3 console, it always calls DeleteObjects.
If you want the object keys recorded for deletions, you will need to delete individual files with DeleteObject (minus the s). This can be done with the AWS CLI (aws s3 rm s3://some-bucket/single-filename) or direct API calls.
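A minimal sketch of the individual-delete approach in Node (the client is passed in, so anything with an AWS.S3-style deleteObject works; the bucket and key names in the usage comment are hypothetical):

```javascript
// Delete keys one at a time so CloudTrail records a separate
// DeleteObject event (with requestParameters.key) for each object,
// instead of one DeleteObjects batch event with no key recorded.
// `s3` is assumed to be an AWS.S3 client from the aws-sdk.
async function deleteObjectsIndividually(s3, bucket, keys) {
  for (const key of keys) {
    await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
  }
  return keys.length;
}

// Usage (hypothetical bucket/keys):
// const AWS = require('aws-sdk');
// deleteObjectsIndividually(new AWS.S3(), 'some-bucket', ['a.txt', 'b.txt']);
```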
An AWS SQS queue URL looks like this:
sqs.us-east-1.amazonaws.com/1234567890/default_development
And here are the parts broken down:
Always same | Stored in env var | Always same | ? | Stored in env var
sqs | us-east-1 | amazonaws.com | 1234567890 | default_development
So I can reconstruct the queue URL based on things I know except the 1234567890 part.
What is this number and is there a way, if I have my AWS creds in env vars, to get my hands on it without hard-coding another env var?
The 1234567890 part should be your AWS account number.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ImportantIdentifiers.html
If you don't have access to the queue URL directly (e.g. you can get it directly from CloudFormation if you create the queue there), you can call the GetQueueUrl API. It takes a QueueName parameter and an optional QueueOwnerAWSAccountId. That would be the preferred method of getting the URL. It is true that the URL is a well-formed URL based on the account and region, and I wouldn't expect that to change at this point, though it may be different in regions like the China or GovCloud regions.
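As a sketch, both routes look roughly like this in Node (the URL-building helper leans on the conventional shape discussed above, which is not an official contract; the SDK calls in the comments assume your env-var credentials are loaded the usual way):

```javascript
// Rebuild the queue URL from its parts. Works for standard AWS
// partitions; the shape differs in e.g. China/GovCloud regions.
function buildQueueUrl(region, accountId, queueName) {
  return `https://sqs.${region}.amazonaws.com/${accountId}/${queueName}`;
}

// Preferred: ask SQS for the URL directly, e.g. with the aws-sdk:
//   const url = (await sqs.getQueueUrl({ QueueName: 'default_development' })
//     .promise()).QueueUrl;
// And the account number without another env var, via STS:
//   const accountId = (await sts.getCallerIdentity().promise()).Account;
```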
I have added a Route 53 entry, A.B.com, that points to my ELB.
I set up a script that makes HTTP calls to this endpoint: http://A.B.com/xxx
However, I see that every 70th or 80th request throws a "connection timeout" [not a read timeout].
Can anyone help me debug this issue?
Here is my script -
#!/bin/bash
n=0
while [ $n -le 5 ]
do
  curl -s -D - "http://A.B.com/xxx"
  n=$((n+1))
done
Note
I do not use a VPC
CloudWatch
HTTPCode_Backend_5XX ~ 200 (mean)
SurgeQueueLength ~ 1000 (mean)
Do these CloudWatch metrics mean my backend is unhealthy?
I changed the ELB for the time being; things are OK as of now, but I'm not sure I found the root cause.
I'm working to migrate our servers to the Amazon cloud; the reasons are the auto-scaling possibilities, costs, services, and more, obviously.
So far, I'm experimenting hard and trying to dive into the full-featured documentation, but with no previous experience I have many questions.
The envisaged infrastructure is the following:
+-----+
| ELB |
+--+--+
|
+--------------------|--------------------+
| Auto-Scaling Group |
|--------------------|--------------------|
| | |
| +---------+ | +---------+ |
| | varnish |<------+------>| varnish | |
| +----+----+ +---------+ |
| | | |
+-----------------------------------------+
| |
| |
| +------------+ |
+---->|Internal ELB|<-----+
+------+-----+
|
+-----------------------------------------+
| Auto-Scaling Group |
|-----------------------------------------|
| +---------+ | +---------+ |
| | Apache |<------+------>| Apache | |
| +----+----+ +----+----+ |
| | | |
+-----------------------------------------+
| +-----+ |
+-------->| RDS |<--------+
+-----+
In words, I would have an Elastic Load Balancer that sends traffic to the Varnish instances, which in turn send the traffic to an internal Elastic Load Balancer, which sends it to the Apache frontends.
For now, I've discovered the AWS tools, like the CloudFormation service, which seems able to bootstrap instances given a template. This seems great, but it appears to handle bootstrapping only.
Having a little experience with Puppet (and given AWS's recommendation on the subject), I dove into the Puppet master approach, which is a great tool.
My idea, which may not be viable or realistic, is to create a "Puppet node stack" using CloudFormation templates, which would configure each instance as required and connect it to the Puppet master to be provisioned.
Once I have a stack ready, I'm wondering how to configure/create Auto Scaling groups for both the Varnish and the Apache instances.
It seems that CloudFormation has resources to configure Auto Scaling groups and policies, so I guess I could create two different templates, one for each.
But would the Auto Scaling feature run through the CloudFormation service and then do all the init work (and execute the user data)?
I also read here and there that Puppet can make use of EC2 tags; maybe a generic stack template with corresponding tags (like roles) could do the trick?
Is this architecture realistic and viable? Do you have any feedback?
Thanks for your advice.
Auto Scaling creates new nodes based on the launch configuration, so you would have two separate Auto Scaling groups and two separate launch configurations, e.g.:
"VarnishScalingGroup" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"LaunchConfigurationName" : {"Ref" : "VarnishLaunchConfiguration" },
"LoadBalancerNames" : {"Ref" : "ELB"},
...
}
},
"VarnishLaunchConfiguration" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Properties" : {
...
"UserData" : {
....
},
"MetaData" : {
...
}
},
"ApacheScalingGroup" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"LaunchConfigurationName" : {"Ref" : "ApacheLaunchConfiguration" },
"LoadBalancerNames" : {"Ref" : "InternalELB"},
...
}
},
"ApacheLaunchConfiguration" : {
"Type" : "AWS::AutoScaling::LaunchConfiguration",
"Properties" : {
...
"UserData" : {
....
},
"MetaData" : {
...
}
}
The other thing you'd want to add is a separate scaling policy for each scaling group, plus the appropriate CloudWatch alarms to match.
CloudFormation can also initiate updates to the stack. If, as part of the user data, you kick off cfn-hup, it will periodically (you decide how often) check for changes in the stack metadata and then execute whatever you prefer. I tend to kick off another run of cfn-init, which will parse and apply any metadata updates.
Key point: if you go down the cfn-hup path, it will not execute the user data again unless the CloudFormation stack update requires dropping and creating new instances.
One other point: if you want updates to the launch configuration to be rolled out, you need to ensure that the Auto Scaling group also has an UpdatePolicy applied to it.
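As a sketch, a rolling-update policy sits at the resource level of the Auto Scaling group; the values below are illustrative, not recommendations:

```json
"VarnishScalingGroup" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "UpdatePolicy" : {
    "AutoScalingRollingUpdate" : {
      "MinInstancesInService" : "1",
      "MaxBatchSize" : "1",
      "PauseTime" : "PT5M"
    }
  },
  "Properties" : {
    ...
  }
}
```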
Instead of having a "Puppet node stack", you might want to consider pre-building your AMIs using a tool like Packer (http://www.packer.io/), which can provision a machine with Puppet and create an AMI. Then reference the provisioned AMI in your CloudFormation template.
As Peter H. says, CloudFormation can handle updates to your stack. So when you make changes to your Puppet setup, you can build a new AMI and update your launch configuration in CloudFormation; Auto Scaling will start using the new AMI when launching new instances.
Taking Puppet out of CloudFormation gives you a separation of concerns between infrastructure and server configuration.
Scaling up will happen faster with pre-built AMIs that already have your Apache/Varnish setup.
There are also advantages to a masterless Puppet setup, i.e. it is decentralized and the Puppet master is not a single point of failure. See https://serverfault.com/questions/408261/pros-and-cons-of-a-decentralized-puppet-architecture