I am trying to test my Lambda function locally using sam local invoke. The error says UnknownEndpoint: Inaccessible host: `secretsmanager.us-east-1.amazonaws.com' at port 'undefined'
This error is being thrown from inside my Lambda function code, as that is where I pull secrets from. I have tried the --region and --profile options as well, but no luck.
For context, I am using Terraform to design and deploy my infrastructure, and SAML authorization with a credentials file for AWS access to our VPC environment. I have verified that the region is set correctly when SAM spins up the Lambda Docker container, and that I am providing the same parameters for the Lambda to identify Secrets Manager as the version running in the VPC.
The only thing that looks odd is the port being 'undefined' in the console, which seems to come from inside the AWS SDK. Note that when I used the Secrets Manager Terraform module created by our company's cloud engineering team, I didn't have to provide any port information. I hope someone can explain this error.
USACCMNBSTEMD6R:balance-inquiry czl74b$ sam local invoke -t ./sam-local/template.yaml -e ./sam-local/event.json --debug
2022-01-06 17:23:29,736 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-01-06 17:23:29,736 | Using config file: samconfig.toml, config environment: default
2022-01-06 17:23:29,736 | Expand command line arguments to:
2022-01-06 17:23:29,736 | --template_file=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml --event=./sam-local/event.json --no_event --layer_cache_basedir=/Users/czl74b/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2022-01-06 17:23:29,736 | local invoke command is called
2022-01-06 17:23:29,743 | No Parameters detected in the template
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,761 | 3 stacks found in the template
2022-01-06 17:23:29,762 | No Parameters detected in the template
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,774 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,775 | 3 resources found in the stack
2022-01-06 17:23:29,775 | No Parameters detected in the template
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,790 | No Parameters detected in the template
2022-01-06 17:23:29,802 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,802 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,803 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,803 | --base-dir is not presented, adjusting uri ../../../../common-utils relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,803 | No Parameters detected in the template
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource BalanceInquiry, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource CommonUtils, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | There is no customer defined id or cdk path defined for resource NpmLibs, so we will use the resource logical id as the resource id
2022-01-06 17:23:29,815 | --base-dir is not presented, adjusting uri ../../../../npm-libs relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,815 | Found Serverless function with name='BalanceInquiry' and CodeUri='../'
2022-01-06 17:23:29,816 | --base-dir is not presented, adjusting uri ../ relative to /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local/template.yaml
2022-01-06 17:23:29,840 | Found one Lambda function with name 'BalanceInquiry'
2022-01-06 17:23:29,840 | Invoking main.handler (nodejs14.x)
2022-01-06 17:23:29,840 | Environment variables overrides data is standard format
2022-01-06 17:23:29,840 | Loading AWS credentials from session with profile 'None'
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry
2022-01-06 17:23:29,850 | Resolved absolute path to code is /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry is not a zip/jar file
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/common-utils is not a zip/jar file
2022-01-06 17:23:29,850 | Code /Users/czl74b/dev-js/lending-api-innovation/src/npm-libs is not a zip/jar file
2022-01-06 17:23:29,850 | CommonUtils is a local Layer in the template
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/common-utils
2022-01-06 17:23:29,850 | NpmLibs is a local Layer in the template
2022-01-06 17:23:29,850 | Resolving code path. Cwd=/Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry/sam-local, CodeUri=/Users/czl74b/dev-js/lending-api-innovation/src/npm-libs
2022-01-06 17:23:29,851 | arn:aws:lambda:us-east-1:027255383542:layer:AWS-AppConfig-Extension:55 is already cached. Skipping download
Building image................................
2022-01-06 17:23:41,146 | Skip pulling image and use local one: samcli/lambda:nodejs14.x-x86_64-d5b52b0afc3579e405e95c7df.
2022-01-06 17:23:41,146 | Mounting /Users/czl74b/dev-js/lending-api-innovation/src/apis/sor/balance-inquiry as /var/task:ro,delegated inside runtime container
2022-01-06 17:23:41,598 | Starting a timer for 3 seconds for function 'BalanceInquiry'
START RequestId: 3b9f7abb-02d1-46e8-8b6b-321f9e5467ed Version: $LATEST
2022-01-07T00:23:43.539Z 3b9f7abb-02d1-46e8-8b6b-321f9e5467ed INFO getSecrets :: getSecretValue Error: UnknownEndpoint: Inaccessible host: `secretsmanager.us-east-1.amazonaws.com' at port 'undefined'. This service may not be available in the `us-east-1' region.
sam local invoke runs the Lambda function as a Docker container. If you are behind a corporate proxy, the AWS SDK inside this container needs a proxy configured in order to reach the actual AWS services. I was able to resolve this by using the proxy-agent npm module. You can read about it here:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/node-configuring-proxies.html
Here is how this looked in the code.
const AWS = require('aws-sdk');
const { HTTP_PROXY, LOCAL } = process.env;

if (LOCAL === 'TRUE') {
  // lazy-load proxy-agent only in LOCAL for sam local testing
  const proxy = require('proxy-agent');
  AWS.config.update({ httpOptions: { agent: proxy(HTTP_PROXY) } });
}
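For completeness: when running under sam local, the LOCAL and HTTP_PROXY variables used above can be injected into the container with the --env-vars flag. A sketch, where the env.json file name and the proxy URL are placeholders for your own values (the key must be the function's logical ID from the template):

```
# env.json (hypothetical):
# {
#   "BalanceInquiry": {
#     "LOCAL": "TRUE",
#     "HTTP_PROXY": "http://my-corporate-proxy:8080"
#   }
# }
sam local invoke -t ./sam-local/template.yaml -e ./sam-local/event.json --env-vars env.json
```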
Related
I'm pretty new to Terraform; my apologies if this question has an obvious answer I'm missing.
I am trying to create a Terraform configuration file for an existing organization. I am able to provision everything I have in the main.tf outlined below, except for the Shared folder that already exists within this organization.
Related github issues :
The folder operation violates display name uniqueness within the parent.
Generic error message when folder rename matches existing folder
Here are the steps I followed:
Manually create a Shared folder within the organization administration UI.
Manually create a Terraform admin project <redacted-project-name> at the root of the Shared folder.
Manually create a service account named terraform#<redacted-project-name> from the terraform admin project
Create, download and securely store a key for the terraform#<redacted-project-name> service account.
Enable APIs : cloudresourcemanager.googleapis.com, cloudbilling.googleapis.com, iam.googleapis.com, serviceusage.googleapis.com within the terraform admin project
Set permissions of the service account to role/owner, roles/resourcemanager.organizationAdmin, roles/resourcemanager.folderAdmin and roles/resourcemanager.projectCreator.
Create the main.tf
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "3.85.0"
}
}
}
provider "google" {
credentials = file(var.credentials_file)
region = var.region
zone = var.zone
}
data "google_organization" "org" {
organization = var.organization.id
}
resource "google_folder" "shared" {
display_name = "Shared"
parent = data.google_organization.org.name
}
resource "google_folder" "ddm" {
display_name = "Data and Digital Marketing"
parent = data.google_organization.org.name
}
resource "google_folder" "dtl" {
display_name = "DTL"
parent = google_folder.ddm.name
}
The error I receive :
Error: Error creating folder 'Shared' in 'organizations/<redacted-org-id>': Error waiting for creating folder: Error code 9, message: Folder reservation failed for parent [organizations/<redacted-org-id>], folder [] due to constraint: The folder operation violates display name uniqueness within the parent.
How do I include existing resources within the terraform config file?
For (organization) folders (such as the example above)
For the billing account
For projects, i.e. Am I supposed to declare or import the terraform admin project within the main.tf?
For service accounts, how to handle existing keys and permissions of the account that is running the terraform apply
For existing policies and enabling APIs
In order to include already-existing resources in the Terraform configuration, use the terraform import command.
For Folders
In the Terraform documentation for google_folder :
# Both syntaxes are valid
$ terraform import google_folder.department1 1234567
$ terraform import google_folder.department1 folders/1234567
So for the example above,
Fetch the folder id using gcloud alpha resource-manager folders list --organization=<redacted_org_id> providing the organization id.
Save the folder id somewhere, and if not already done, declare the folder as a resource within the main.tf
resource "google_folder" "shared" {
display_name = "Shared"
parent = data.google_organization.org.name
}
Run the command : terraform import google_folder.shared folders/<redacted_folder_id>. You should get an output like google_folder.shared: Import prepared!
Verify that your configuration now matches the real infrastructure via terraform plan:
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
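The same import approach should cover the pre-existing Terraform admin project from the question. A sketch (the resource block below is an assumption based on the setup steps; per the google_project documentation, the import ID is the project ID):

```
resource "google_project" "admin" {
  name       = "<redacted-project-name>"
  project_id = "<redacted-project-name>"
  folder_id  = google_folder.shared.name
}
```

Then run terraform import google_project.admin <redacted-project-name>. Billing accounts, as far as I know, cannot be created or imported by Terraform at all; they are only referenced by ID (e.g. in google_project's billing_account argument).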
I am setting up a basic provisioning script for multiple services in AWS, and I want to keep each section separate. Currently I have one directory in my repo, called vpc, for setting up the VPC and networking aspects.
I was wondering if there is a way to use one provider.tf file in the root of the repo. That way, when I eventually create another resource group (for example, an EC2 instance plus its security groups, EBS, etc.), I can create another directory in the repo and source the one provider.tf file.
So my directory structure would look like
/myrepo/
|
- provider.tf
|
- vpc/
|
- vpc.tf
|
- EC2/
|
- ec2.tf
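One low-tech way to share a single provider.tf across the per-service directories is to symlink it into each one. A sketch (directory names match the layout above): Terraform only loads *.tf files from the current working directory, so the symlink makes the root provider.tf visible inside vpc/ and EC2/.

```shell
# Recreate the layout above and link the shared provider.tf into each
# service directory; `terraform init` run inside vpc/ or EC2/ will then
# pick up the provider configuration through the symlink.
mkdir -p myrepo/vpc myrepo/EC2
touch myrepo/provider.tf
ln -s ../provider.tf myrepo/vpc/provider.tf
ln -s ../provider.tf myrepo/EC2/provider.tf
```

Note that each directory still keeps its own state file; the symlink only shares the provider configuration, not the state.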
How to use AWS services like CloudTrail or CloudWatch to check which user performed event DeleteObject?
I can use S3 Event to send a Delete event to SNS to notify an email address that a specific file has been deleted from the S3 bucket but the message does not contain the username that did it.
I can use CloudTrail to log all events related to an S3 bucket to another bucket, but when I tested it, it logged many details and only the PutObject event, not DeleteObject.
Is there any easy way to monitor an S3 bucket to find out which user deleted which file?
Update 19 Aug
Following Walt's answer below, I was able to log the DeleteObject event. However, I can only get the file name (requestParameters.key) for PutObject, but not for DeleteObjects.
| # | #timestamp | userIdentity.arn | eventName | requestParameters.key |
| - | ---------- | ---------------- | --------- | --------------------- |
| 1 | 2019-08-19T09:21:09.041-04:00 | arn:aws:iam::ID:user/me | DeleteObjects | |
| 2 | 2019-08-19T09:18:35.704-04:00 | arn:aws:iam::ID:user/me | PutObject | test.txt |
It looks like other people have had the same issue and AWS is working on it: https://forums.aws.amazon.com/thread.jspa?messageID=799831
Here is my setup.
Detailed instructions on setting up CloudTrail in the console. When setting up the CloudTrail, double-check these 2 options:
That you are logging S3 writes. You can do this for all S3 buckets or just the one you are interested in. You also don't need to enable read logging to answer this question.
And that you are sending events to CloudWatch Logs.
If you made changes to the S3 write logging you might have to wait a little while. If you haven't had breakfast, lunch, snack, or dinner now would be a good time.
If you're using the same default CloudWatch log group as I have above, this link to the CloudWatch Logs Insights search should work for you.
This is a query that will show you all S3 DeleteObject calls. If the link doesn't work:
Go to the CloudWatch Console.
Select Logs -> Insights on the left-hand side.
Enter the value for "Select log group(s)" that you specified above.
Enter this in the query field.
fields #timestamp, userIdentity.arn, eventName, requestParameters.bucketName, requestParameters.key
| filter eventSource == "s3.amazonaws.com"
| filter eventName == "DeleteObject"
| sort #timestamp desc
| limit 20
If there have been any S3 DeleteObject calls recorded by CloudTrail in the last 30 minutes, the last 20 events will be shown.
As of 2021/04/12, CloudTrail does not record object key(s) or path for DeleteObjects calls.
If you delete an object with S3 console, it always calls DeleteObjects.
If you want to access object keys for deletion you will need to delete individual files with DeleteObject (minus s). This can be done with AWS CLI (aws s3 rm s3://some-bucket/single-filename) or direct API calls.
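The per-file deletion above can also be done with the low-level s3api commands, which map one-to-one onto the DeleteObject API call (the bucket and key names here are just placeholders):

```
# Delete keys one at a time via the DeleteObject API so CloudTrail records
# a separate DeleteObject event (including requestParameters.key) per file,
# instead of a single DeleteObjects batch call with no keys logged.
for key in test.txt other-file.txt; do
  aws s3api delete-object --bucket some-bucket --key "$key"
done
```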
I want to set the AWS API Gateway REST API stage logging settings via the AWS Java SDK. Currently I create the deployment via CreateDeploymentRequest, which doesn't expose any such configuration, and the same can be said about CreateStageRequest. UpdateStageRequest seems to provide a generic way of updating the stage configuration, but does it allow setting the access log / error log settings, and if so, what paths do I use to set them?
UPD:
After having read aws cli help (aws apigateway update-stage help, see below) I see I can use the following paths to update CloudWatch Settings:
/*/*/logging/loglevel for Log Level
/*/*/logging/dataTrace to enable logging full request/response data
/*/*/metrics/enabled to enable "Detailed CloudWatch Metrics"
How to go about enabling custom access logs?
UPDATE-STAGE()
NAME
update-stage -
DESCRIPTION
Changes information about a Stage resource.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
SYNOPSIS
update-stage
--rest-api-id <value>
--stage-name <value>
[--patch-operations <value>]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
OPTIONS
--rest-api-id (string)
[Required] The string identifier of the associated RestApi .
--stage-name (string)
[Required] The name of the Stage resource to change information
about.
--patch-operations (list)
A list of update operations to be applied to the specified resource
and in the order specified in this list.
Shorthand Syntax:
op=string,path=string,value=string,from=string ...
JSON Syntax:
[
{
"op": "add"|"remove"|"replace"|"move"|"copy"|"test",
"path": "string",
"value": "string",
"from": "string"
}
...
]
--cli-input-json (string) Performs service operation based on the JSON
string provided. The JSON string follows the format provided by --gen-
erate-cli-skeleton. If other arguments are provided on the command
line, the CLI values will override the JSON-provided values. It is not
possible to pass arbitrary binary values using a JSON-provided value as
the string will be taken literally.
--generate-cli-skeleton (string) Prints a JSON skeleton to standard
output without sending an API request. If provided with no value or the
value input, prints a sample input JSON that can be used as an argument
for --cli-input-json. If provided with the value output, it validates
the command inputs and returns a sample output JSON for that command.
See 'aws help' for descriptions of global parameters.
EXAMPLES
To override the stage settings and disable full request/response log-
ging for a specific resource and method in an API's stage
Command:
aws apigateway update-stage --rest-api-id 1234123412 --stage-name 'dev' --patch-operations op=replace,path=/~1resourceName/GET/logging/dataTrace,value=false
To enable full request/response logging for all resources/methods in an
API's stage
Command:
aws apigateway update-stage --rest-api-id 1234123412 --stage-name 'dev' --patch-operations op=replace,path=/*/*/logging/dataTrace,value=true
OUTPUT
deploymentId -> (string)
The identifier of the Deployment that the stage points to.
clientCertificateId -> (string)
The identifier of a client certificate for an API stage.
stageName -> (string)
The name of the stage is the first path segment in the Uniform
Resource Identifier (URI) of a call to API Gateway.
description -> (string)
The stage's description.
cacheClusterEnabled -> (boolean)
Specifies whether a cache cluster is enabled for the stage.
cacheClusterSize -> (string)
The size of the cache cluster for the stage, if enabled.
cacheClusterStatus -> (string)
The status of the cache cluster for the stage, if enabled.
methodSettings -> (map)
A map that defines the method settings for a Stage resource. Keys
(designated as /{method_setting_key below) are method paths defined
as {resource_path}/{http_method} for an individual method override,
or /\*/\* for overriding all methods in the stage.
key -> (string)
value -> (structure)
Specifies the method setting properties.
metricsEnabled -> (boolean)
Specifies whether Amazon CloudWatch metrics are enabled for
this method. The PATCH path for this setting is /{method_set-
ting_key}/metrics/enabled , and the value is a Boolean.
loggingLevel -> (string)
Specifies the logging level for this method, which affects
the log entries pushed to Amazon CloudWatch Logs. The PATCH
path for this setting is /{method_setting_key}/log-
ging/loglevel , and the available levels are OFF , ERROR ,
and INFO .
dataTraceEnabled -> (boolean)
Specifies whether data trace logging is enabled for this
method, which affects the log entries pushed to Amazon Cloud-
Watch Logs. The PATCH path for this setting is /{method_set-
ting_key}/logging/dataTrace , and the value is a Boolean.
throttlingBurstLimit -> (integer)
Specifies the throttling burst limit. The PATCH path for this
setting is /{method_setting_key}/throttling/burstLimit , and
the value is an integer.
throttlingRateLimit -> (double)
Specifies the throttling rate limit. The PATCH path for this
setting is /{method_setting_key}/throttling/rateLimit , and
the value is a double.
cachingEnabled -> (boolean)
Specifies whether responses should be cached and returned for
requests. A cache cluster must be enabled on the stage for
responses to be cached. The PATCH path for this setting is
/{method_setting_key}/caching/enabled , and the value is a
Boolean.
cacheTtlInSeconds -> (integer)
Specifies the time to live (TTL), in seconds, for cached
responses. The higher the TTL, the longer the response will
be cached. The PATCH path for this setting is /{method_set-
ting_key}/caching/ttlInSeconds , and the value is an integer.
cacheDataEncrypted -> (boolean)
Specifies whether the cached responses are encrypted. The
PATCH path for this setting is /{method_set-
ting_key}/caching/dataEncrypted , and the value is a Boolean.
requireAuthorizationForCacheControl -> (boolean)
Specifies whether authorization is required for a cache
invalidation request. The PATCH path for this setting is
/{method_setting_key}/caching/requireAuthorizationFor-
CacheControl , and the value is a Boolean.
unauthorizedCacheControlHeaderStrategy -> (string)
Specifies how to handle unauthorized requests for cache
invalidation. The PATCH path for this setting is
/{method_setting_key}/caching/unauthorizedCacheControlHeader-
Strategy , and the available values are FAIL_WITH_403 , SUC-
CEED_WITH_RESPONSE_HEADER , SUCCEED_WITHOUT_RESPONSE_HEADER .
variables -> (map)
A map that defines the stage variables for a Stage resource. Vari-
able names can have alphanumeric and underscore characters, and the
values must match [A-Za-z0-9-._~:/?#=,]+ .
key -> (string)
value -> (string)
documentationVersion -> (string)
The version of the associated API documentation.
accessLogSettings -> (structure)
Settings for logging access in this stage.
format -> (string)
A single line format of the access logs of data, as specified by
selected $context variables . The format must include at least
$context.requestId .
destinationArn -> (string)
The ARN of the CloudWatch Logs log group to receive access logs.
canarySettings -> (structure)
Settings for the canary deployment in this stage.
percentTraffic -> (double)
The percent (0-100) of traffic diverted to a canary deployment.
deploymentId -> (string)
The ID of the canary deployment.
stageVariableOverrides -> (map)
Stage variables overridden for a canary release deployment,
including new stage variables introduced in the canary. These
stage variables are represented as a string-to-string map
between stage variable names and their values.
key -> (string)
value -> (string)
useStageCache -> (boolean)
A Boolean flag to indicate whether the canary deployment uses
the stage cache or not.
tracingEnabled -> (boolean)
Specifies whether active tracing with X-ray is enabled for the
Stage .
webAclArn -> (string)
The ARN of the WebAcl associated with the Stage .
tags -> (map)
The collection of tags. Each tag element is associated with a given
resource.
key -> (string)
value -> (string)
createdDate -> (timestamp)
The timestamp when the stage was created.
lastUpdatedDate -> (timestamp)
The timestamp when the stage last updated.
You can configure custom access logging by setting accessLogSettings in update-stage:
accessLogSettings -> (structure)
Settings for logging access in this stage.
format -> (string)
A single line format of the access logs of data, as specified by
selected $context variables . The format must include at least
$context.requestId .
destinationArn -> (string)
The ARN of the CloudWatch Logs log group to receive access logs.
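By analogy with the method-setting paths above, the access log settings live at the stage level, so patch operations on /accessLogSettings/destinationArn and /accessLogSettings/format should do it. A sketch (the log group ARN and format string below are placeholders; the same paths can be used in PatchOperation objects on UpdateStageRequest in the Java SDK):

```
aws apigateway update-stage --rest-api-id 1234123412 --stage-name 'dev' \
    --patch-operations \
    op=replace,path=/accessLogSettings/destinationArn,value='arn:aws:logs:us-east-1:111122223333:log-group:my-access-logs' \
    op=replace,path=/accessLogSettings/format,value='$context.requestId $context.status'
```

Keep commas out of the format string when using the shorthand syntax, since commas delimit the patch-operation fields; use --cli-input-json for more complex formats.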
An AWS SQS queue URL looks like this:
sqs.us-east-1.amazonaws.com/1234567890/default_development
And here are the parts broken down:
Always same | Stored in env var | Always same | ? | Stored in env var
sqs | us-east-1 | amazonaws.com | 1234567890 | default_development
So I can reconstruct the queue URL based on things I know except the 1234567890 part.
What is this number and is there a way, if I have my AWS creds in env vars, to get my hands on it without hard-coding another env var?
The 1234567890 should be your AWS account number.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/ImportantIdentifiers.html
If you don't have access to the queue URL directly (e.g. you can get it from CloudFormation if you create the queue there), you can call the GetQueueUrl API. It takes a QueueName parameter and an optional QueueOwnerAWSAccountId. That would be the preferred method of getting the URL. It is true that the URL is a well-formed URL based on the account and region, and I wouldn't expect that to change at this point, but it is possible that it differs in a region like the China regions or the GovCloud regions.
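If you do want to reconstruct the URL yourself, the missing piece is the account ID, which you can fetch at runtime from STS GetCallerIdentity instead of hard-coding another env var. A sketch (the helper name is made up; note that actual queue URLs carry an https:// scheme in front of the host shown above):

```javascript
// Hypothetical helper: rebuild an SQS queue URL from its known parts.
// The accountId can come from sts.getCallerIdentity().promise() at
// runtime (aws-sdk) rather than from a hard-coded environment variable.
function buildQueueUrl(region, accountId, queueName) {
  return `https://sqs.${region}.amazonaws.com/${accountId}/${queueName}`;
}

console.log(buildQueueUrl('us-east-1', '1234567890', 'default_development'));
// -> https://sqs.us-east-1.amazonaws.com/1234567890/default_development
```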