Access Denied when trying to PutObject to s3 - amazon-web-services

I'm using the Serverless Framework to create a lambda that saves a CSV to an S3 bucket.
I already have a similar lambda that does this with another bucket.
This is where it gets weird: I can upload the CSV to the first S3 bucket I created (many months back), but I get an AccessDenied error when uploading the same CSV to the new S3 bucket, which was, as far as I can tell, created in exactly the same way as the first via the serverless.yml config.
The error is:
Error: AccessDenied: Access Denied
These are the relevant bits from the serverless.yml:
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  environment:
    BUCKET_NEW: ${self:custom.bucketNew}
    BUCKET: ${self:custom.bucket}
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action: 'lambda:InvokeFunction'
          Resource: '*'
        - Effect: 'Allow'
          Action:
            - 's3:GetObject'
            - 's3:PutObject'
          Resource:
            - 'arn:aws:s3:::*' # Added this whilst debugging
            - 'arn:aws:s3:::*/*' # Added this whilst debugging
            - 'arn:aws:s3:::${self:custom.bucket}'
            - 'arn:aws:s3:::${self:custom.bucket}/*'
            - 'arn:aws:s3:::${self:custom.bucketNew}'
            - 'arn:aws:s3:::${self:custom.bucketNew}/*'

functions:
  uploadReport:
    handler: services/uploadReport.handler
    vpc:
      securityGroupIds:
        - 000001
      subnetIds:
        - subnet-00000A
        - subnet-00000B
        - subnet-00000C

resources:
  Resources:
    Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.bucket}
    BucketNew:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.bucketNew}

custom:
  stage: ${opt:stage, 'dev'}
  bucket: ${self:service}-${self:custom.stage}-report
  bucketNew: ${self:service}-${self:custom.stage}-report-new
Lambda code (simplified):
const fs = require('fs')
const AWS = require('aws-sdk')

const S3 = new AWS.S3({
  httpOptions: {
    connectTimeout: 1000,
  },
})

const uploadToS3 = (params) => new Promise((resolve, reject) => {
  S3.putObject(params, err => (err ? reject(err) : resolve(params.Key)))
})

module.exports.handler = async () => {
  const fileName = `report-new.csv`
  const filePath = `/tmp/${fileName}`
  // Some code that creates a CSV file at filePath.
  const bucketParams = {
    Bucket: process.env.BUCKET_NEW, // Works for process.env.BUCKET, but not process.env.BUCKET_NEW.
    Key: fileName,
    Body: fs.readFileSync(filePath).toString('utf-8'),
  }
  try {
    const s3Upload = await uploadToS3(bucketParams)
  } catch (e) {
    throw new Error(e) // Throws Error: AccessDenied: Access Denied.
  }
}

Found the solution, but it was my own mistake: my Lambda was actually inside a VPC. My original question (before the edit) did not show this.
A Lambda inside a VPC can't talk to S3 unless the VPC has a Gateway Endpoint that enables it to reach the specific buckets it references.
I had previously created a Gateway Endpoint that let it talk to the initial bucket I created a while back, but forgot to update the endpoint to let it talk to the new bucket.
Leaving this answer here in case anyone else spends an entire day trying to fix something this silly.
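For reference, the endpoint itself can be declared next to the buckets in serverless.yml. The following is only a minimal sketch: the VPC and route table IDs are illustrative placeholders, and the endpoint policy has to cover every bucket the Lambda needs to reach.

resources:
  Resources:
    S3GatewayEndpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        VpcEndpointType: Gateway
        ServiceName: com.amazonaws.eu-west-1.s3
        VpcId: vpc-00000000 # Illustrative: the VPC the Lambda runs in
        RouteTableIds:
          - rtb-00000000 # Illustrative: the route table used by the Lambda's subnets
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal: '*'
              Action:
                - 's3:GetObject'
                - 's3:PutObject'
              Resource:
                # Both the old and the new bucket must be listed here,
                # otherwise the new bucket stays unreachable from the VPC.
                - 'arn:aws:s3:::${self:custom.bucket}/*'
                - 'arn:aws:s3:::${self:custom.bucketNew}/*'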

I think you are likely missing some permissions. I often use "s3:Put*" in my serverless applications, which may not be advisable since it is so broad.
Here is a minimal list of permissions required to upload an object, which I found in What minimum permissions should I set to give S3 file upload access?
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObjectAcl",
"s3:ListBucket",
"s3:GetBucketLocation"

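Translated into the question's serverless.yml, that list would look something like the sketch below; scope the Resource entries to your own buckets rather than '*':

provider:
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 's3:PutObject'
            - 's3:PutObjectAcl'
            - 's3:GetObjectAcl'
            - 's3:ListBucket'
            - 's3:GetBucketLocation'
          Resource:
            - 'arn:aws:s3:::${self:custom.bucketNew}'
            - 'arn:aws:s3:::${self:custom.bucketNew}/*'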
Related

How to create S3 buckets dynamically in an Azure DevOps CI/CD pipeline

I want to automate the process of bucket creation through a CI/CD pipeline based on the data in one of the YAML files. I have a bucket.yaml file which contains the names of all the buckets. This file keeps changing as more bucket names are added in the future. Currently, this is how bucket.yaml looks:
BucketName:
  - test-bucket
  - test-bucket2
  - test-bucket3
I also have a template.yaml file, which is a CloudFormation template for S3 bucket creation. Here is how it looks:
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Properties:
      BucketName: This will come from bucket.yaml
Now, template.yaml should fetch the bucket names from bucket.yaml and create the 3 buckets mentioned there. If someone adds 2 more buckets to bucket.yaml, then template.yaml should create those 2 new buckets as well. Also, if someone deletes any bucket name from bucket.yaml, those buckets should be deleted as well. I couldn't find a complete process in my research, just information in bits and pieces. So, here are my specific questions, if this is possible to do:
How to fetch the bucket names from bucket.yaml so that template.yaml creates all the buckets.
If someone updates/adds/deletes a bucket name in bucket.yaml, template.yaml should update accordingly.
Also, please explain how I would do this through a CI/CD pipeline in Azure DevOps.
About your first question:
How to fetch bucket names from bucket.yaml and template.yaml should create all the buckets.
In bucket.yaml you can use Parameters to set up the BucketName.
For example:
parameters:
  - name: BucketName
    type: object
    default:
      - test-bucket
      - test-bucket2
      - test-bucket3

steps:
  - ${{ each value in parameters.BucketName }}:
      - script: echo ${{ value }}
The step here loops through the values of the parameter BucketName.
In template.yaml you can call bucket.yaml as below.
trigger:
  - main

extends:
  template: bucket.yaml
For your second question:
If someone update/add/delete bucket name in bucket.yaml, template.yaml should update those accordingly.
There is no easy way to do this. You can write a script that runs in the pipeline and does the following things (a sketch of such a step follows the list):
List all the buckets that have been created. This is the list of the existing buckets.
Compare the list of the existing buckets with the values list of the parameter BucketName to check which buckets need to be added and which need to be deleted.
If a bucket is listed in the parameter but not in the existing buckets, it should be created as a new bucket.
If a bucket is listed in the existing buckets but not in the parameter, it should be deleted.
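A sketch of what such a reconciliation step could look like, assuming the agent has the AWS CLI configured; the bucket names are illustrative, and in practice you would filter by an ownership tag before deleting anything (see the next answer):

steps:
  - script: |
      DESIRED="test-bucket test-bucket2 test-bucket3"
      EXISTING=$(aws s3api list-buckets --query 'Buckets[].Name' --output text)
      # Create desired buckets that do not exist yet.
      for b in $DESIRED; do
        echo "$EXISTING" | grep -qw "$b" || aws s3api create-bucket --bucket "$b"
      done
      # Delete existing buckets that are no longer desired.
      for b in $EXISTING; do
        echo "$DESIRED" | grep -qw "$b" || aws s3 rb "s3://$b" --force
      done
    displayName: Reconcile S3 buckets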
BucketName:
  - test-bucket
  - test-bucket2
  - test-bucket3
The requirements imply that all S3 buckets will be created in the same way and that no deviation from the given CloudFormation template (AWS::S3::Bucket) is required.
The requirements also mean we must track which S3 buckets need to be deleted; CloudFormation will not delete them itself, as the template snippet contains a DeletionPolicy of Retain.
Solution:
The S3 buckets can be tagged in a specific way to identify them as being owned by the current CI/CD pipeline. The buckets can then be listed, and any bucket that carries the correct tag yet does not appear in bucket.yaml can be deleted (a tagging sketch follows below).
I would personally just create the S3 buckets required by the CI/CD pipeline using the AWS SDK and manage bucket deletion manually. If an application requires an S3 bucket, it should create the bucket itself in its own CloudFormation stack so that it can !Ref it and customize it the way it wants (e.g. encryption at rest, versioning, lifecycle rules, etc).
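A sketch of how such an ownership tag could be attached in the bucket template; the tag key and value here are made up for illustration:

Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Properties:
      BucketName: test-bucket
      Tags:
        # Marker the cleanup step can filter on before deleting anything.
        - Key: managed-by
          Value: my-cicd-pipeline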
Technical note:
For an S3 bucket to be deleted, its contents need to be deleted first. This requires listing all the objects in the bucket and then deleting them; some documentation for the Java SDK is [here].
Only then will the API call to delete the S3 bucket succeed.
You can get CloudFormation to delete your S3 objects using a custom resource. That said, I don't find custom resources that fun to work with, so if you can use the AWS SDK inside your CI/CD pipeline I would probably just do that.
The custom resource to delete a bucket's contents might look something like this in CloudFormation. (It's a custom resource that kicks off a Lambda; the Lambda deletes the S3 bucket's contents when the custom resource gets deprovisioned.)
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-lookup-amiids.html
ExampleBucketOperationCustomResource:
  Type: AWS::CloudFormation::CustomResource
  DependsOn: [Bucket, ExampleBucketOperationLambdaFunction]
  Properties:
    ServiceToken: !GetAtt ExampleBucketOperationLambdaFunction.Arn
    # Custom properties
    BucketToUse: !Ref S3BucketName

ExampleBucketOperationLambdaFunctionExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: "ExampleBucketOperationLambda-ExecutionRole"
    Path: "/"
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - lambda.amazonaws.com
    Policies:
      - PolicyName: "ExampleBucketOperationLambda-CanAccessCloudwatchLogs"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: "ExampleBucketOperationLambda-S3BucketLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:ListBucket
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}"
      - PolicyName: "ExampleBucketOperationLambda-S3ObjectLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:DeleteObject
                - s3:PutObject
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}/*"

# Test payload:
# {"RequestType":"Create","ResourceProperties":{"BucketToUse":"your-bucket-name"}}
ExampleBucketOperationLambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: ExampleBucketOperationLambdaFunctionExecutionRole
  # DeletionPolicy: Retain
  Properties:
    FunctionName: "ExampleBucketOperationLambda"
    Role: !GetAtt ExampleBucketOperationLambdaFunctionExecutionRole.Arn
    Runtime: python3.8
    Handler: index.handler
    Timeout: 30
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def handler(event, context):
            eventType = event["RequestType"]
            print("The event type is: " + str(eventType))
            bucketToUse = event["ResourceProperties"]["BucketToUse"]
            print("The bucket to use: " + str(bucketToUse))
            try:
                # Requires s3:ListBucket permission
                if eventType in ["Delete"]:
                    print("Deleting everything in bucket: " + str(bucketToUse))
                    s3Client = boto3.client("s3")
                    s3Bucket = boto3.resource("s3").Bucket(bucketToUse)
                    for currFile in s3Bucket.objects.all():
                        print("Deleting file: " + currFile.key)
                        s3Client.delete_object(Bucket=bucketToUse, Key=currFile.key)
                    print("All done")
                responseData = {}
                cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
            except Exception as e:
                responseData = {}
                errorDetail = "Exception: " + str(e)
                errorDetail = errorDetail + "\n\t More detail can be found in CloudWatch Log Stream: " + context.log_stream_name
                print(errorDetail)
                cfnresponse.send(event=event, context=context, responseStatus=cfnresponse.FAILED, responseData=responseData, reason=errorDetail)
Thanks for the above answers. I took a different path to solve this issue: I used the AWS CDK for Python to implement exactly what I wanted and created the infrastructure with it.

Is there no setting for AWS API Gateway REST API to disable execute-api endpoint in CloudFormation template?

I have set up an API Gateway (v1, not v2) REST API resource using a CloudFormation template. Recently I noticed that the default execute-api endpoint is also created, which I can disable in the settings.
The type of this API is AWS::ApiGateway::RestApi.
Naturally, I would like this to be done through the template, so the question is: can this setting be defined in the CloudFormation template rather than having to be clicked manually in the AWS Console? This option is available for the API Gateway V2 API resource (AWS::ApiGatewayV2::Api) but not for the API Gateway V1 REST API resource (AWS::ApiGateway::RestApi) in CloudFormation templates, even though it can be changed manually for the V1 REST API in the console.
There is also a CLI way of doing this for AWS::ApiGateway::RestApi.
Here are some links I have used to search for this setting:
AWS::ApiGatewayV2::API
AWS::ApiGateway::RestApi
Disabling default api-execute endpoint via CLI
Support for disabling the default execute-api endpoint has recently been added to the AWS::ApiGateway::RestApi CloudFormation resource: DisableExecuteApiEndpoint
MyRestApi:
  Type: 'AWS::ApiGateway::RestApi'
  Properties:
    DisableExecuteApiEndpoint: true
You can also disable it through a simple custom resource. Below is an example of a fully working template that does that:
Resources:
  MyRestApi:
    Type: 'AWS::ApiGateway::RestApi'
    Properties:
      Description: A test API
      Name: MyRestAPI

  LambdaBasicExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

  MyCustomResource:
    Type: Custom::DisableDefaultApiEndpoint
    Properties:
      ServiceToken: !GetAtt 'MyCustomFunction.Arn'
      APIId: !Ref 'MyRestApi'

  MyCustomFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.lambda_handler
      Description: "Disable default API endpoint"
      Timeout: 30
      Role: !GetAtt 'LambdaBasicExecutionRole.Arn'
      Runtime: python3.7
      Code:
        ZipFile: |
          import json
          import logging
          import cfnresponse
          import boto3

          logger = logging.getLogger()
          logger.setLevel(logging.INFO)
          client = boto3.client('apigateway')

          def lambda_handler(event, context):
              logger.info('got event {}'.format(event))
              try:
                  responseData = {}
                  if event['RequestType'] in ["Create"]:
                      APIId = event['ResourceProperties']['APIId']
                      response = client.update_rest_api(
                          restApiId=APIId,
                          patchOperations=[
                              {
                                  'op': 'replace',
                                  'path': '/disableExecuteApiEndpoint',
                                  'value': 'True'
                              }
                          ]
                      )
                      logger.info(str(response))
                      cfnresponse.send(event, context,
                                       cfnresponse.SUCCESS, responseData)
                  else:
                      logger.info('Unexpected RequestType!')
                      cfnresponse.send(event, context,
                                       cfnresponse.SUCCESS, responseData)
              except Exception as err:
                  logger.error(err)
                  responseData = {"Data": str(err)}
                  cfnresponse.send(event, context,
                                   cfnresponse.FAILED, responseData)
              return
In case anyone stumbles across this answer while using the CDK, this can be done concisely (without defining a Lambda function) using the AwsCustomResource construct:
const restApi = new apigw.RestApi(...);

const executeApiResource = new cr.AwsCustomResource(this, "execute-api-resource", {
  functionName: "disable-execute-api-endpoint",
  onCreate: {
    service: "APIGateway",
    action: "updateRestApi",
    parameters: {
      restApiId: restApi.restApiId,
      patchOperations: [{
        op: "replace",
        path: "/disableExecuteApiEndpoint",
        value: "True"
      }]
    },
    physicalResourceId: cr.PhysicalResourceId.of("execute-api-resource")
  },
  policy: cr.AwsCustomResourcePolicy.fromStatements([new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["apigateway:PATCH"],
    resources: ["arn:aws:apigateway:*::/*"],
  })])
});

executeApiResource.node.addDependency(restApi);
You can also disable it in AWS CDK. This is done by finding the underlying CloudFormation resource and setting the property to true:

const api = new apigateway.RestApi(this, 'api');
(api.node.children[0] as apigateway.CfnRestApi).addPropertyOverride('DisableExecuteApiEndpoint', 'true');
Here is a Python variant of the answer provided by snorberhuis.
rest_api = apigateway.RestApi(self,...)
cfn_apigw = rest_api.node.default_child
cfn_apigw.add_property_override('DisableExecuteApiEndpoint', True)
Amazon's docs on "Abstractions and Escape Hatches" are very good for understanding what's going on here.

Cognito "PreSignUp invocation failed due to configuration" despite having invoke permissions well configured

I currently have a Cognito user pool configured to trigger a pre-sign-up Lambda. Right now I am setting up the staging environment, and I have the exact same setup on dev (which works). I know it is the same because I am creating both environments out of the same Terraform files.
I have already associated the invoke permissions with the Lambda function, which is very often the cause of this error message. Everything looks the same in both environments, except that I get "PreSignUp invocation failed due to configuration" when I try to sign up a new user in my new staging environment.
I have tried removing and re-associating the trigger manually from the console; it still doesn't work.
I have compared every possible setting I can think of, including the "App client" configs. They are really the same.
I tried editing the Lambda code in order to "force" it to update.
Could it be AWS taking too long to invalidate the permissions cache? So far I can only believe this is a bug on AWS's side...
Any ideas!?
There appears to be a race condition with permissions not being attached on the first deployment.
I was able to reproduce this with cloudformation.
Deploying a stack with the same config twice appears to "fix" the permissions issue.
I actually added a 10-second delay on the permissions attachment and it solved my first deployment issue...
I hope this helps others who run into this issue. 😃
# Hack to fix CloudFormation bug:
# AWS::Lambda::Permission will not attach correctly on first deployment unless a "delay" is used.
# DependsOn & every other thing did not work... ¯\_(ツ)_/¯
CustomResourceDelay:
  Type: Custom::Delay
  DependsOn:
    - PostConfirmationLambdaFunction
    - CustomMessageLambdaFunction
    - CognitoUserPool
  Properties:
    ServiceToken: !GetAtt CustomResourceDelayFunction.Arn
    SecondsToWait: 10

CustomResourceDelayFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement: [{ "Effect": "Allow", "Principal": { "Service": ["lambda.amazonaws.com"] }, "Action": ["sts:AssumeRole"] }]
    Policies:
      - PolicyName: !Sub "${AWS::StackName}-delay-lambda-logs"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: [ logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents ]
              Resource: !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${AWS::StackName}*:*

CustomResourceDelayFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Description: Wait for N seconds custom resource for stack debounce
    Timeout: 120
    Role: !GetAtt CustomResourceDelayFunctionRole.Arn
    Runtime: nodejs12.x
    Code:
      ZipFile: |
        const { send, SUCCESS } = require('cfn-response')
        exports.handler = (event, context, callback) => {
          if (event.RequestType !== 'Create') {
            return send(event, context, SUCCESS)
          }
          const timeout = (event.ResourceProperties.SecondsToWait || 10) * 1000
          setTimeout(() => send(event, context, SUCCESS), timeout)
        }

# ------------------------- Roles & permissions for Cognito resources ---------------------------
CognitoTriggerPostConfirmationInvokePermission:
  Type: AWS::Lambda::Permission
  ## CustomResourceDelay needed to properly attach the permission
  DependsOn: [ CustomResourceDelay ]
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt PostConfirmationLambdaFunction.Arn
    Principal: cognito-idp.amazonaws.com
    SourceArn: !GetAtt CognitoUserPool.Arn
In my situation the problem was caused by the execution permissions of the Lambda function: while there was a role configured, that role was empty due to some unrelated changes.
Making sure the role actually had permissions to do the logging and all the other things the function was trying to do made it work again for me.
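For comparison, a minimal sketch of a working execution role in CloudFormation; the resource name is illustrative, and it grants only the basic logging permissions that an accidentally emptied role would be missing (extend it with whatever else the trigger actually does):

PreSignUpLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # Covers CloudWatch Logs access, the bare minimum for a Lambda to run cleanly.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole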

AWS Textract StartDocumentAnalysis function not publishing a message to the SNS Topic

I am working with AWS Textract and I want to analyze a multipage document, so I have to use the async options. I first used the startDocumentAnalysis function and got a JobId as the return value. On completion it should publish a message to an SNS topic, which in turn triggers a function I have set up to fire when the topic receives a message.
These are my serverless file and handler file.
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: { "Fn::Join": ["", ["arn:aws:s3:::${self:custom.secrets.IMAGE_BUCKET_NAME}", "/*" ] ] }
    - Effect: "Allow"
      Action:
        - "sts:AssumeRole"
        - "SNS:Publish"
        - "lambda:InvokeFunction"
        - "textract:DetectDocumentText"
        - "textract:AnalyzeDocument"
        - "textract:StartDocumentAnalysis"
        - "textract:GetDocumentAnalysis"
      Resource: "*"

custom:
  secrets: ${file(secrets.${opt:stage, self:provider.stage}.yml)}

functions:
  routes:
    handler: src/functions/routes/handler.run
    events:
      - s3:
          bucket: ${self:custom.secrets.IMAGE_BUCKET_NAME}
          event: s3:ObjectCreated:*
  textract:
    handler: src/functions/routes/handler.detectTextAnalysis
    events:
      - sns: "TextractTopic"

resources:
  Resources:
    TextractTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: "Start Textract API Response"
        TopicName: TextractResponseTopic
Handler.js
module.exports.run = async (event) => {
  const uploadedBucket = event.Records[0].s3.bucket.name;
  const uploadedObject = event.Records[0].s3.object.key;
  var params = {
    DocumentLocation: {
      S3Object: {
        Bucket: uploadedBucket,
        Name: uploadedObject
      }
    },
    FeatureTypes: [
      "TABLES",
      "FORMS"
    ],
    NotificationChannel: {
      RoleArn: 'arn:aws:iam::<account-id>:role/qvalia-ocr-solution-dev-us-east-1-lambdaRole',
      SNSTopicArn: 'arn:aws:sns:us-east-1:<account-id>:TextractTopic'
    }
  };
  let textractOutput = await new Promise((resolve, reject) => {
    textract.startDocumentAnalysis(params, function(err, data) {
      if (err) reject(err);
      else resolve(data);
    });
  });
}
I manually published an SNS message to the topic and it fired the textract lambda, which currently has this:
module.exports.detectTextAnalysis = async (event) => {
  console.log('SNS Topic isssss Generated');
  console.log(event.Records[0].Sns.Message);
};
What is my mistake, and why is textract's startDocumentAnalysis not publishing a message and triggering the lambda?
Note: I haven't used startDocumentTextDetection before calling the startDocumentAnalysis function, though it is not necessary to call it first.
Make sure you have the following in the trust relationships of the role you are using:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "textract.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The SNS topic name must begin with AmazonTextract; the Textract service role is only allowed to publish to topics named that way.
At the end your ARN should look like this:
arn:aws:sns:us-east-2:111111111111:AmazonTextract
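Applied to the question's serverless.yml, that would mean giving the topic resource a conforming name, roughly like the sketch below; the SNSTopicArn in the handler's NotificationChannel would then need to be updated to match:

resources:
  Resources:
    TextractTopic:
      Type: AWS::SNS::Topic
      Properties:
        DisplayName: "Start Textract API Response"
        # The AmazonTextract prefix matches what the Textract service role may publish to.
        TopicName: AmazonTextractResponseTopic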
I was able to get this working directly via the Serverless Framework by adding a Lambda execution resource to my serverless.yml file:
resources:
  Resources:
    IamRoleLambdaExecution:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
                  - textract.amazonaws.com
              Action: sts:AssumeRole
And then I just used the same role generated by Serverless (for the lambda function) as the notification channel role parameter when starting the Textract document analysis.
Thanks to this post for pointing me in the right direction!
For anyone using the CDK in TypeScript: you will need to add Lambda as a ServicePrincipal, as usual, to the Lambda execution role. Next, access the assumeRolePolicy of the execution role and call the addStatements method.
The basic execution role without any additional statements (add those later):

this.executionRole = new iam.Role(this, 'ExecutionRole', {
  assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});

Next, add Textract as an additional ServicePrincipal:

this.executionRole.assumeRolePolicy?.addStatements(
  new PolicyStatement({
    principals: [
      new ServicePrincipal('textract.amazonaws.com'),
    ],
    actions: ['sts:AssumeRole']
  })
);
Also, ensure the execution role has full permissions on the target SNS topic (note the topic is created already and accessed via the fromTopicArn method):

const stmtSNSOps = new PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    "SNS:*"
  ],
  resources: [
    this.textractJobStatusTopic.topicArn
  ]
});

Add the policy statement to a global policy (within the active stack):

this.standardPolicy = new iam.Policy(this, 'Policy', {
  statements: [
    ...
    stmtSNSOps,
    ...
  ]
});

Finally, attach the policy to the execution role:

this.executionRole.attachInlinePolicy(this.standardPolicy);
If your bucket is encrypted, you should also grant KMS permissions; otherwise it won't work.

AWS + Serverless - how to get at the secret key generated by a Cognito user pool

I've been following the serverless tutorial at https://serverless-stack.com/chapters/configure-cognito-user-pool-in-serverless.html
I've got the following serverless YAML snippet:
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      # Generate a name based on the stage
      UserPoolName: ${self:custom.stage}-moochless-user-pool
      # Set email as an alias
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      # Generate an app client name based on the stage
      ClientName: ${self:custom.stage}-user-pool-client
      UserPoolId:
        Ref: CognitoUserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH
      # >>>>> HOW DO I GET THIS VALUE IN OUTPUT <<<<<
      GenerateSecret: true

# Print out the Id of the User Pool that is created
Outputs:
  UserPoolId:
    Value:
      Ref: CognitoUserPool
  UserPoolClientId:
    Value:
      Ref: CognitoUserPoolClient
  #UserPoolSecret:
  #  WHAT GOES HERE?
I'm exporting all my other config variables to a JSON file (to be consumed by a mobile app), so I need the secret key.
How do I get the generated secret key to appear in my output list?
The ideal way to retrieve the secret key would be to use CognitoUserPoolClient.ClientSecret in your CloudFormation template:

UserPoolClientIdSecret:
  Value:
    !GetAtt CognitoUserPoolClient.ClientSecret

But it is not supported, as explained here.
As a workaround, you can run the CLI command below to retrieve the secret key:
aws cognito-idp describe-user-pool-client --user-pool-id "us-west-XXXXXX" --region us-west-2 --client-id "XXXXXXXXXXXXX" --query 'UserPoolClient.ClientSecret' --output text
As Prabhakar Reddy points out, you currently can't get the Cognito client secret using !GetAtt in your CloudFormation template. However, there is a way to avoid the manual step of running the AWS CLI to get the secret. The AWS CommandRunner utility for CloudFormation allows you to run AWS CLI commands from your CloudFormation templates, so you can run the CLI command that fetches the secret inside the template and then use the command's output elsewhere in the template via !GetAtt. Basically, CommandRunner spins up an EC2 instance and runs the command you specify, saving the output to a file on the instance while the template is running so that it can be retrieved later with !GetAtt. Note that CommandRunner is a special custom CloudFormation resource type that needs to be registered for the AWS account as a separate step. Below is an example CloudFormation template that gets a Cognito client secret and saves it to AWS Secrets Manager.
Resources:
  CommandRunnerRole:
    Type: AWS::IAM::Role
    Properties:
      # The AssumeRolePolicyDocument specifies which services can assume this role; for CommandRunner this needs to be EC2.
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: CommandRunnerPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                  - 'cognito-idp:*'
                Resource: '*'

  CommandRunnerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref CommandRunnerRole

  GetCognitoClientSecretCommand:
    Type: AWSUtility::CloudFormation::CommandRunner
    Properties:
      Command: aws cognito-idp describe-user-pool-client --user-pool-id <user_pool_id> --region us-east-2 --client-id <client_id> --query UserPoolClient.ClientSecret --output text > /command-output.txt
      Role: !Ref CommandRunnerInstanceProfile
      InstanceType: "t2.nano"
      LogGroup: command-runner-logs

  CognitoClientSecret:
    Type: AWS::SecretsManager::Secret
    DependsOn: GetCognitoClientSecretCommand
    Properties:
      Name: "command-runner-secret"
      SecretString: !GetAtt GetCognitoClientSecretCommand.Output
Note that you will need to replace <user_pool_id> and <client_id> with your own user pool and client id. A complete CloudFormation template would likely create the Cognito User Pool and User Pool Client, and the user pool and client id values could then be retrieved from those resources using !Ref as part of a !Join statement that builds the entire command, e.g.
Command: !Join [' ', ['aws cognito-idp describe-user-pool-client --user-pool-id', !Ref CognitoUserPool, '--region', !Ref AWS::Region, '--client-id', !Ref CognitoUserPoolClient, '--query UserPoolClient.ClientSecret --output text > /command-output.txt']]
One final note: depending on your operating system, the installation/registration of CommandRunner may fail when trying to create the S3 bucket it needs. This is because it generates a bucket name using uuidgen and will fail if uuidgen isn't installed. I have opened an issue on the CommandRunner GitHub repo for this. Until the issue is resolved, you can work around it by modifying the /scripts/register.sh script to use a static bucket name.
As it is still not possible to get the secret of a Cognito User Pool Client using !GetAtt in a CloudFormation template, I was looking for an alternative solution without manual steps, so the infrastructure can be deployed fully automatically.
I like clav's solution, but it requires CommandRunner to be installed first.
So what I did in the end was use a Lambda-backed custom resource. I wrote it in JavaScript, but you can also write it in Python.
Here is an overview of the steps you need to follow:
Create an IAM policy and add it to the Lambda function execution role.
Add the creation of the in-line Lambda function to the CloudFormation template.
Add the creation of the Lambda-backed custom resource to the CloudFormation template.
Get the output from the custom resource via !GetAtt.
And here are the details:
Create IAM Policy and add it to the Lambda function execution role.
# IAM: Policy to describe user pool clients of Cognito user pools
CognitoDescribeUserPoolClientsPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Description: 'Allows describing Cognito user pool clients.'
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - 'cognito-idp:DescribeUserPoolClient'
          Resource:
            - !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/*'
If necessary, restrict it to specific resources only.
Add creation of In-Line Lambda function to CloudFormation Template.
# Lambda: Function to get the secret of a Cognito User Pool Client
LambdaFunctionGetCognitoUserPoolClientSecret:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: 'GetCognitoUserPoolClientSecret'
    Description: 'Lambda function to get the secret of a Cognito User Pool Client.'
    Handler: index.lambda_handler
    Role: !Ref LambdaFunctionExecutionRoleArn
    Runtime: nodejs14.x
    Timeout: '30'
    Code:
      ZipFile: |
        // Import required modules
        const response = require('cfn-response');
        const { CognitoIdentityServiceProvider } = require('aws-sdk');

        // FUNCTION: Lambda Handler
        exports.lambda_handler = function(event, context) {
          console.log("Request received:\n" + JSON.stringify(event));

          // Read data from input parameters
          let userPoolId = event.ResourceProperties.UserPoolId;
          let userPoolClientId = event.ResourceProperties.UserPoolClientId;

          // Set physical ID
          let physicalId = `${userPoolId}-${userPoolClientId}-secret`;
          let errorMessage = `Error at getting secret from cognito user pool client:`;

          try {
            let requestType = event.RequestType;
            if (requestType === 'Create') {
              console.log(`Request is of type '${requestType}'. Get secret from cognito user pool client.`);
              // Get secret from cognito user pool client
              let cognitoIdp = new CognitoIdentityServiceProvider();
              cognitoIdp.describeUserPoolClient({
                UserPoolId: userPoolId,
                ClientId: userPoolClientId
              }).promise()
                .then(result => {
                  let secret = result.UserPoolClient.ClientSecret;
                  response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error', Secret: secret}, physicalId);
                }).catch(error => {
                  // Error
                  console.log(`${errorMessage}:${error}`);
                  response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
                });
            } else {
              console.log(`Request is of type '${requestType}'. Not doing anything.`);
              response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error'}, physicalId);
            }
          } catch (error) {
            // Error
            console.log(`${errorMessage}:${error}`);
            response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
          }
        };
Make sure you pass the ARN of the right Lambda execution role to the Role parameter. It should contain the policy created in step 1.
Add creation of Lambda-backed custom resource to CloudFormation Template.
# Custom: Cognito user pool client secret
UserPoolClientSecret:
  Type: Custom::UserPoolClientSecret
  Properties:
    ServiceToken: !GetAtt LambdaFunctionGetCognitoUserPoolClientSecret.Arn
    UserPoolId: !Ref UserPool
    UserPoolClientId: !Ref UserPoolClient
Make sure you pass the ARN of the Lambda function created in step 2 as the ServiceToken. Also make sure you pass in the right values for the UserPoolId and UserPoolClientId parameters; they should be taken from the Cognito User Pool and the Cognito User Pool Client.
Get the output from the custom resource via !GetAtt
!GetAtt UserPoolClientSecret.Secret
You can do this anywhere you want.