How to refer autogenerated Role to attach to new IAM policy - amazon-web-services

I am creating a custom policy to attach to an IAM role that was autogenerated by AWS.
Below is the policy:
rRotationLambdaDecryptPolicy:
  Type: AWS::IAM::ManagedPolicy
  DependsOn: rSecretRotationScheduleHostedRotationLambda
  Properties:
    Description: "Providing access to HostedLambda for decrypting KMS"
    ManagedPolicyName: CustomedHostedLambdaKmsUserRolePolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: AllowLambdaDecryptKMS
          Effect: Allow
          Action:
            - kms:Decrypt
            - kms:CreateGrant
          Resource:
            - !Sub arn:aws:kms:*:${AWS::AccountId}:key/*
          Condition:
            ForAnyValue:StringLike:
              kms:ResourceAliases: alias/SecretsManager_KMSKey
    Roles: <friendly rolename>
For the Roles property, since I don't fully know the role name, I have been trying to derive it from its ARN:
Roles:
  - !Select [1, !Split ["/", !Sub 'arn:aws:iam::${AWS::AccountId}:role/secret-rotat-SecretsManagerRDSPostgre-*']]
But once pushed to CloudFormation, I get the error below:
The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.#_- (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: fbd4b14e-8c0e-459f-867f-968052828620; Proxy: null)
Not sure what is wrong here, or how I can refer to the role.

You can't use a wildcard * in the role name.
You would need to save the role name somewhere you can retrieve it (e.g. in Systems Manager Parameter Store), or pass it as a parameter into the template and then !Ref it.
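For example, a minimal sketch of the parameter approach (RotationLambdaRoleName is a hypothetical parameter name; you would supply the actual autogenerated role name at deploy time, e.g. read from Parameter Store by your pipeline):

```yaml
Parameters:
  # Hypothetical parameter: the autogenerated role's friendly name
  RotationLambdaRoleName:
    Type: String
    Description: Name of the autogenerated rotation Lambda role

Resources:
  rRotationLambdaDecryptPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: CustomedHostedLambdaKmsUserRolePolicy
      Description: "Providing access to HostedLambda for decrypting KMS"
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowLambdaDecryptKMS
            Effect: Allow
            Action:
              - kms:Decrypt
              - kms:CreateGrant
            Resource:
              - !Sub arn:aws:kms:*:${AWS::AccountId}:key/*
      # Attach the policy to the role whose name was passed in
      Roles:
        - !Ref RotationLambdaRoleName
```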

I looked everywhere and found that this will be part of the next SAM version release, as it requires work to add more attributes to the HostedLambda.
In the meantime, I managed to retrieve it from Jenkins using the AWS CLI (by getting the Lambda's attributes, which include the autogenerated role) and then processing it:
aur_rolename=sh(script: """aws lambda get-function --function-name SecretsManager-research-creds-rotation-lambda --query Configuration.Role --output text""", returnStdout: true).trim().split('/')[1]
aur_policyarn="arn:aws:iam::${env.account}:policy/CustomedHostedLambdaKmsUserRolePolicy"
sh(script: """aws iam attach-role-policy --policy-arn ${aur_policyarn} --role-name ${aur_rolename}""", returnStdout: true)
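The role name the Jenkins step extracts is simply the last path segment of the ARN returned by aws lambda get-function. The string handling can be sketched in plain Python (the ARN below is a made-up example):

```python
# Hypothetical ARN of the form returned by
# `aws lambda get-function --query Configuration.Role --output text`
role_arn = "arn:aws:iam::123456789012:role/secret-rotat-SecretsManagerRDSPostgre-ABC123"

# The friendly role name is everything after the last "/"
role_name = role_arn.split("/")[-1]
print(role_name)  # secret-rotat-SecretsManagerRDSPostgre-ABC123
```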

Related

How to create s3 buckets dynamically in azure devops CI/CD pipeline

I want to automate the process of bucket creation through a CI/CD pipeline based on the data in a YAML file. So, I have a bucket.yaml file which contains the names of all the buckets. This file keeps changing, as more bucket names will be added in the future. Currently, this is how bucket.yaml looks:
BucketName:
- test-bucket
- test-bucket2
- test-bucket3
I have got one template.yaml file which is a cloudformation template for s3 buckets creation. Here is how it looks:
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Properties:
      BucketName: This will come from bucket.yaml
Now, template.yaml should fetch the bucket names from bucket.yaml and create the 3 buckets listed there. If someone adds 2 more buckets to bucket.yaml, then template.yaml should create those 2 new buckets as well. Also, if someone deletes any bucket name from bucket.yaml, then those buckets should be deleted as well. I couldn't find out the process in my research, just information in bits and pieces. So here are my specific questions, if this is possible to do:
How to fetch bucket names from bucket.yaml and template.yaml should create all the buckets.
If someone update/add/delete bucket name in bucket.yaml, template.yaml should update those accordingly.
Also, please explain how will I do it through CI/CD pipeline in Azure DevOps.
About your first question:
How to fetch bucket names from bucket.yaml and template.yaml should create all the buckets.
In bucket.yaml you can use Parameters to set up the BucketName.
For example:
parameters:
  - name: BucketName
    type: object
    default:
      - test-bucket
      - test-bucket2
      - test-bucket3

steps:
  - ${{ each value in parameters.BucketName }}:
      - script: echo ${{ value }}
The step in here can loop through the values of the parameter BucketName.
In template.yaml you can call bucket.yaml as below.
trigger:
  - main

extends:
  template: bucket.yaml
For your second question:
If someone update/add/delete bucket name in bucket.yaml, template.yaml should update those accordingly.
There is no easy way to do this. You can try to write a script to run in the pipeline that does the following things:
List all the buckets that have been created. This is the list of the existing buckets.
Compare the list of the existing buckets with the values list of the parameter BucketName to check which buckets need to be added and which need to be deleted.
If a bucket is listed in the parameter but not in the existing buckets, this bucket should be created as a new bucket.
If a bucket is listed in the existing buckets but not in the parameter, this bucket should be deleted.
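The comparison step above can be sketched with plain set operations (the bucket names are made-up examples; in a real pipeline, `desired` would come from bucket.yaml and `existing` from a list-buckets call):

```python
# Buckets declared in bucket.yaml (desired state) vs. buckets that already exist
desired = {"test-bucket", "test-bucket2", "test-bucket4"}
existing = {"test-bucket", "test-bucket2", "test-bucket3"}

to_create = desired - existing  # in the parameter but not yet created
to_delete = existing - desired  # created earlier but removed from the parameter

print(sorted(to_create))  # ['test-bucket4']
print(sorted(to_delete))  # ['test-bucket3']
```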
BucketName:
- test-bucket
- test-bucket2
- test-bucket3
The requirements imply that all S3 buckets will be created in the same way and that no deviation from the given Cloudformation template (AWS::S3::Bucket) is required.
The requirements also require us to track which S3 buckets need to be deleted. CloudFormation will not delete the S3 buckets itself, as the template snippet contains a DeletionPolicy of Retain.
Solution:
The S3 buckets can be tagged in a specific way to identify them as being owned by the current CI/CD pipeline. S3 buckets can be listed, and all the S3 buckets that are tagged in the correct way and yet do not exist in bucket.yaml can then be deleted.
I would personally just create the S3 buckets required by the CI/CD pipeline using the AWS SDK and manually manage S3 bucket deletion. If an application requires an S3 bucket, then it should create the bucket itself in its application's CloudFormation stack so that it can !Ref it and customize it the way it wants (e.g. encryption at rest, versioning, lifecycle rules, etc.).
Technical note:
For an S3 bucket to be deleted, its contents must first be deleted. This requires us to list all the objects in the S3 bucket and then delete them. Some documentation for the Java SDK is [here].
Only then will the API call to delete the S3 bucket succeed.
You can get Cloudformation to delete your S3 objects using a custom resource. That said, I don't find the custom resources that fun to work with - so if you can use the AWS SDK inside your CI/CD pipeline I would probably just use that.
The custom resource to delete a bucket's contents might look something like this in CloudFormation. (It's a custom resource that kicks off a Lambda; the Lambda deletes the S3 bucket contents when the custom resource gets deprovisioned.)
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-lookup-amiids.html
ExampleBucketOperationCustomResource:
  Type: AWS::CloudFormation::CustomResource
  DependsOn: [Bucket, ExampleBucketOperationLambdaFunction]
  Properties:
    ServiceToken: !GetAtt ExampleBucketOperationLambdaFunction.Arn
    # Custom properties
    BucketToUse: !Ref S3BucketName

ExampleBucketOperationLambdaFunctionExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: "ExampleBucketOperationLambda-ExecutionRole"
    Path: "/"
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - lambda.amazonaws.com
    Policies:
      - PolicyName: "ExampleBucketOperationLambda-CanAccessCloudwatchLogs"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: "ExampleBucketOperationLambda-S3BucketLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:ListBucket
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}"
      - PolicyName: "ExampleBucketOperationLambda-S3ObjectLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:DeleteObject
                - s3:PutObject
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}/*"
# Test payload:
# {"RequestType":"Create","ResourceProperties":{"BucketToUse":"your-bucket-name"}}
ExampleBucketOperationLambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: ExampleBucketOperationLambdaFunctionExecutionRole
  # DeletionPolicy: Retain
  Properties:
    FunctionName: "ExampleBucketOperationLambda"
    Role: !GetAtt ExampleBucketOperationLambdaFunctionExecutionRole.Arn
    Runtime: python3.8
    Handler: index.handler
    Timeout: 30
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def handler(event, context):
            eventType = event["RequestType"]
            print("The event type is: " + str(eventType))
            bucketToUse = event["ResourceProperties"]["BucketToUse"]
            print("The bucket to use: " + str(bucketToUse))
            try:
                # Requires s3:ListBucket permission
                if eventType in ["Delete"]:
                    print("Deleting everything in bucket: " + str(bucketToUse))
                    s3Client = boto3.client("s3")
                    s3Bucket = boto3.resource("s3").Bucket(bucketToUse)
                    for currFile in s3Bucket.objects.all():
                        print("Deleting file: " + currFile.key)
                        s3Client.delete_object(Bucket=bucketToUse, Key=currFile.key)
                print("All done")
                responseData = {}
                cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
            except Exception as e:
                responseData = {}
                errorDetail = "Exception: " + str(e)
                errorDetail = errorDetail + "\n\t More detail can be found in CloudWatch Log Stream: " + context.log_stream_name
                print(errorDetail)
                cfnresponse.send(event=event, context=context, responseStatus=cfnresponse.FAILED, responseData=responseData, reason=errorDetail)
Thanks for the above answers. I took a different path to solve this issue: I used the AWS CDK (for Python) to implement exactly what I wanted and created the infrastructure with it.

Cognito "PreSignUp invocation failed due to configuration" despite having invoke permissions well configured

I currently have a Cognito user pool configured to trigger a pre sign up lambda. Right now I am setting up the staging environment, and I have the exact same setup on dev (which works). I know it is the same because I am creating both envs out of the same terraform files.
I have already associated the invoke permissions with the lambda function, which is very often the cause for this error message. Everything looks the same in both environments, except that I get "PreSignUp invocation failed due to configuration" when I try to sign up a new user from my new staging environment.
I have tried to remove and re-associate the trigger manually from the console; still, it doesn't work.
I have compared every possible setting I can think of, including the "App client" configs. They are really the same.
I tried editing the lambda code in order to "force" it to update.
Can it be AWS taking too long to invalidate the permissions cache? So far I can only believe this is a bug from AWS...
Any ideas!?
There appears to be a race condition with permissions not being attached on the first deployment.
I was able to reproduce this with cloudformation.
Deploying a stack with the same config twice appears to "fix" the permissions issue.
I actually added a 10-second delay on the permissions attachment and it solved my first deployment issue...
I hope this helps others who run into this issue. 😃
# Hack to fix CloudFormation bug
# AWS::Lambda::Permission will not attach correctly on first deployment unless "delay" is used
# DependsOn & every other thing did not work... ¯\_(ツ)_/¯
CustomResourceDelay:
  Type: Custom::Delay
  DependsOn:
    - PostConfirmationLambdaFunction
    - CustomMessageLambdaFunction
    - CognitoUserPool
  Properties:
    ServiceToken: !GetAtt CustomResourceDelayFunction.Arn
    SecondsToWait: 10

CustomResourceDelayFunctionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement: [{ "Effect": "Allow", "Principal": { "Service": ["lambda.amazonaws.com"] }, "Action": ["sts:AssumeRole"] }]
    Policies:
      - PolicyName: !Sub "${AWS::StackName}-delay-lambda-logs"
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: [ logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents ]
              Resource: !Sub arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${AWS::StackName}*:*

CustomResourceDelayFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Description: Wait for N seconds custom resource for stack debounce
    Timeout: 120
    Role: !GetAtt CustomResourceDelayFunctionRole.Arn
    Runtime: nodejs12.x
    Code:
      ZipFile: |
        const { send, SUCCESS } = require('cfn-response')
        exports.handler = (event, context, callback) => {
          if (event.RequestType !== 'Create') {
            return send(event, context, SUCCESS)
          }
          const timeout = (event.ResourceProperties.SecondsToWait || 10) * 1000
          setTimeout(() => send(event, context, SUCCESS), timeout)
        }

# ------------------------- Roles & Permissions for cognito resources ---------------------------
CognitoTriggerPostConfirmationInvokePermission:
  Type: AWS::Lambda::Permission
  ## CustomResourceDelay needed to properly attach permission
  DependsOn: [ CustomResourceDelay ]
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt PostConfirmationLambdaFunction.Arn
    Principal: cognito-idp.amazonaws.com
    SourceArn: !GetAtt CognitoUserPool.Arn
In my situation the problem was caused by the execution permissions of the Lambda function: While there was a role configured, that role was empty due to some unrelated changes.
Making sure the role actually had permissions to do the logging and all the other things that the function was trying to do made things work again for me.

AWS + Serverless - how to get at the secret key generated by cognito user pool

I've been following the serverless tutorial at https://serverless-stack.com/chapters/configure-cognito-user-pool-in-serverless.html
I've got the following serverless yaml snippit
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      # Generate a name based on the stage
      UserPoolName: ${self:custom.stage}-moochless-user-pool
      # Set email as an alias
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email

  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      # Generate an app client name based on the stage
      ClientName: ${self:custom.stage}-user-pool-client
      UserPoolId:
        Ref: CognitoUserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH
      # >>>>> HOW DO I GET THIS VALUE IN OUTPUT <<<<<
      GenerateSecret: true

# Print out the Id of the User Pool that is created
Outputs:
  UserPoolId:
    Value:
      Ref: CognitoUserPool
  UserPoolClientId:
    Value:
      Ref: CognitoUserPoolClient
  #UserPoolSecret:
  #  WHAT GOES HERE?
I'm exporting all my other config variables to a json file (to be consumed by a mobile app, so I need the secret key).
How do I get the secret key generated to appear in my output list?
The ideal way to retrieve the secret key would be to use "CognitoUserPoolClient.ClientSecret" in your CloudFormation template:
UserPoolClientIdSecret:
  Value:
    !GetAtt CognitoUserPoolClient.ClientSecret
But this is not supported, as explained here, and the deployment fails with an error message.
As a workaround, you can run the below CLI command to retrieve the secret key:
aws cognito-idp describe-user-pool-client --user-pool-id "us-west-XXXXXX" --region us-west-2 --client-id "XXXXXXXXXXXXX" --query 'UserPoolClient.ClientSecret' --output text
As Prabhakar Reddy points out, currently you can't get the Cognito client secret using !GetAtt in your CloudFormation template. However, there is a way to avoid the manual step of using the AWS command line to get the secret. The AWS Command Runner utility for CloudFormation allows you to run AWS CLI commands from your CloudFormation templates, so you can run the CLI command to get the secret in the CloudFormation template and then use the output of the command elsewhere in your template using !GetAtt. Basically CommandRunner spins up an EC2 instance and runs the command you specify and saves the output of the command to a file on the instance while the CloudFormation template is running so that it can be retrieved later using !GetAtt. Note that CommandRunner is a special custom CloudFormation type that needs to be installed for the AWS account as a separate step. Below is an example CloudFormation template that will get a Cognito client secret and save it to AWS Secrets manager.
Resources:
  CommandRunnerRole:
    Type: AWS::IAM::Role
    Properties:
      # the AssumeRolePolicyDocument specifies which services can assume this role; for CommandRunner this needs to be ec2
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: CommandRunnerPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:CancelUploadArchive'
                  - 'logs:GetBranch'
                  - 'logs:GetCommit'
                  - 'cognito-idp:*'
                Resource: '*'

  CommandRunnerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref CommandRunnerRole

  GetCognitoClientSecretCommand:
    Type: AWSUtility::CloudFormation::CommandRunner
    Properties:
      Command: aws cognito-idp describe-user-pool-client --user-pool-id <user_pool_id> --region us-east-2 --client-id <client_id> --query UserPoolClient.ClientSecret --output text > /command-output.txt
      Role: !Ref CommandRunnerInstanceProfile
      InstanceType: "t2.nano"
      LogGroup: command-runner-logs

  CognitoClientSecret:
    Type: AWS::SecretsManager::Secret
    DependsOn: GetCognitoClientSecretCommand
    Properties:
      Name: "command-runner-secret"
      SecretString: !GetAtt GetCognitoClientSecretCommand.Output
Note that you will need to replace the <user_pool_id> and <client_id> with your user pool and client pool id. A complete CloudFormation template would likely create the Cognito User Pool and User Pool Client and the user pool & client id values could be retrieved from those resources using !Ref as part of a !Join statement that creates the entire command, e.g.
Command: !Join [' ', ['aws cognito-idp describe-user-pool-client --user-pool-id', !Ref CognitoUserPool, '--region', !Ref AWS::Region, '--client-id', !Ref CognitoUserPoolClient, '--query UserPoolClient.ClientSecret --output text > /command-output.txt']]
One final note, depending on your operating system, the installation/registration of CommandRunner may fail trying to create the S3 bucket it needs. This is because it tries to generate a bucket name using uuidgen and will fail if uuidgen isn't installed. I have opened an issue on the CommandRunner GitHub repo for this. Until the issue is resolved, you can get around this by modifying the /scripts/register.sh script to use a static bucket name.
As it is still not possible to get the secret of a Cognito User Pool Client using !GetAtt in a CloudFormation Template I was looking for an alternative solution without manual steps so the infrastructure can get deployed automatically.
I like clav's solution but it requires the Command Runner to be installed first.
So, what I did in the end was using a Lambda-backed custom resource. I wrote it in JavaScript but you can also write it in Python.
Here is an overview of the four steps you need to follow:
Create IAM Policy and add it to the Lambda function execution role.
Add creation of In-Line Lambda function to CloudFormation Template.
Add creation of Lambda-backed custom resource to CloudFormation Template.
Get the output from the custom resource via !GetAtt
And here are the details:
Create IAM Policy and add it to the Lambda function execution role.
# IAM: Policy to describe user pool clients of Cognito user pools
CognitoDescribeUserPoolClientsPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    Description: 'Allows describing Cognito user pool clients.'
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - 'cognito-idp:DescribeUserPoolClient'
          Resource:
            - !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/*'
If necessary only allow it for certain resources.
Add creation of In-Line Lambda function to CloudFormation Template.
# Lambda: Function to get the secret of a Cognito User Pool Client
LambdaFunctionGetCognitoUserPoolClientSecret:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: 'GetCognitoUserPoolClientSecret'
    Description: 'Lambda function to get the secret of a Cognito User Pool Client.'
    Handler: index.lambda_handler
    Role: !Ref LambdaFunctionExecutionRoleArn
    Runtime: nodejs14.x
    Timeout: '30'
    Code:
      ZipFile: |
        // Import required modules
        const response = require('cfn-response');
        const { CognitoIdentityServiceProvider } = require('aws-sdk');

        // FUNCTION: Lambda Handler
        exports.lambda_handler = function(event, context) {
          console.log("Request received:\n" + JSON.stringify(event));

          // Read data from input parameters
          let userPoolId = event.ResourceProperties.UserPoolId;
          let userPoolClientId = event.ResourceProperties.UserPoolClientId;

          // Set physical ID
          let physicalId = `${userPoolId}-${userPoolClientId}-secret`;
          let errorMessage = `Error at getting secret from cognito user pool client:`;

          try {
            let requestType = event.RequestType;
            if (requestType === 'Create') {
              console.log(`Request is of type '${requestType}'. Get secret from cognito user pool client.`);
              // Get secret from cognito user pool client
              let cognitoIdp = new CognitoIdentityServiceProvider();
              cognitoIdp.describeUserPoolClient({
                UserPoolId: userPoolId,
                ClientId: userPoolClientId
              }).promise()
                .then(result => {
                  let secret = result.UserPoolClient.ClientSecret;
                  response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error', Secret: secret}, physicalId);
                }).catch(error => {
                  // Error
                  console.log(`${errorMessage}:${error}`);
                  response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
                });
            } else {
              console.log(`Request is of type '${requestType}'. Not doing anything.`);
              response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error'}, physicalId);
            }
          } catch (error) {
            // Error
            console.log(`${errorMessage}:${error}`);
            response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
          }
        };
Make sure you pass the right Lambda Execution Role to the parameter Role. It should contain the policy created in step 1.
Add creation of Lambda-backed custom resource to CloudFormation Template.
# Custom: Cognito user pool client secret
UserPoolClientSecret:
  Type: Custom::UserPoolClientSecret
  Properties:
    # ServiceToken must be the Lambda function's ARN, so use !GetAtt rather than !Ref
    ServiceToken: !GetAtt LambdaFunctionGetCognitoUserPoolClientSecret.Arn
    UserPoolId: !Ref UserPool
    UserPoolClientId: !Ref UserPoolClient
Make sure you pass the Lambda function created in step 2 as ServiceToken. Also make sure you pass in the right values for the parameters UserPoolId and UserPoolClientId. They should be taken from the Cognito User Pool and the Cognito User Pool Client.
Get the output from the custom resource via !GetAtt
!GetAtt UserPoolClientSecret.Secret
You can do this anywhere you want.
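For example, assuming the UserPoolClientSecret custom resource above, the secret could be surfaced as a stack output. (Note that stack outputs are visible to anyone allowed to describe the stack, so treat this value as sensitive.)

```yaml
Outputs:
  # Hypothetical output name; the Secret attribute comes from the custom resource
  UserPoolClientSecretValue:
    Value: !GetAtt UserPoolClientSecret.Secret
```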

How do I access the current user in a cloudformation template?

I want to create a KMS key using CloudFormation. I want to be able to provide the user executing the cloudformation YAML file (I'll call them "cloudformation-runner"), administrative access to the key they create.
I can setup the IAM policy to provide that user ("cloudformation-runner") access to the KMS Administrative APIs. However, for the user to be able to update/delete the key that was just created, I also need to specify a KeyPolicy that lets them do it. To do this, how can I get the current username ("cloudformation-runner") within the CloudFormation script?
Here is how my template for the KMS key looks, how do I get the current user as the principal?
MyKey:
  Type: AWS::KMS::Key
  Properties:
    Description: "..."
    KeyPolicy:
      Version: "2012-10-17"
      Id: "MyId"
      Statement:
        - Sid: "Allow administration of the key"
          Effect: "Allow"
          Principal:
            AWS:
              - # TODO: Get Current User
          Action:
            - "kms:Create*"
            - "kms:Describe*"
            - "kms:Enable*"
            - "kms:List*"
            - "kms:Put*"
            - "kms:Update*"
            - "kms:Revoke*"
            - "kms:Disable*"
            - "kms:Get*"
            - "kms:Delete*"
            - "kms:ScheduleKeyDeletion"
            - "kms:CancelKeyDeletion"
          Resource: "*"
I can manually hardcode the ARN for the IAM user. However, that makes the template less portable - as people need to manually update the username within this file.
You can't access the current user, because the CloudFormation template may be run by an IAM role rather than an IAM user. But you can pass the username in as a parameter.
I would like to give you an example that I think works well for your context:
The "cloudformation-runner" can be a Role instead of a user. This Role can have a policy giving privileges to create KMS keys.
The CloudFormation template can receive the IAM username and key name as parameters. From an IAM username, you can create the user ARN using CF functions.
Besides creating the KMS keys, the parameters can be used to create the IAM policy and attach to the user giving read/write privileges to the newly created key.
That way your key creation process can create the key and give user privileges at the same time.
Credit to this answer for the idea.
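A minimal sketch of that idea (AdminUserName is a hypothetical parameter; the user ARN is assembled with !Sub, and the Action list is abbreviated compared to the question's template):

```yaml
Parameters:
  # Hypothetical parameter: the IAM username of the key administrator
  AdminUserName:
    Type: String
    Description: IAM username of the key administrator

Resources:
  MyKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: "Allow administration of the key"
            Effect: "Allow"
            Principal:
              # Build the user ARN from the username parameter
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:user/${AdminUserName}'
            Action: "kms:*"
            Resource: "*"
```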
You can pass the current user's ARN as a CloudFormation parameter:
Parameters:
  ...
  CallingUserArn:
    Description: Calling user ARN
    Type: String

Resources:
  KmsKey:
    Type: AWS::KMS::Key
    Properties:
      ...
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Ref CallingUserArn
            ...

aws cloudformation deploy \
  --template-file cloudformation/stack.yml \
  --stack-name my-stack \
  --parameter-overrides CallingUserArn="$(aws sts get-caller-identity --query Arn --output text)"

How to enable IAM users to set the Name and other custom tags when limited by tag restricted resource-level permissions in EC2

I have been playing with configuring tag based resource permissions in EC2, using an approach similar to what is described in the answer to the following question: Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?
I have been using this in conjunction with a lambda function to auto tag EC2 instances, setting the Owner and PrincipalId based on the IAM user who called the associated ec2:RunInstances action. The approach I have been following for this is documented in the following AWS blog post: How to Automatically Tag Amazon EC2 Resources in Response to API Events
The combination of these two approaches has resulted in my restricted user permissions for EC2 looking like this, in my CloudFormation template:
LimitedEC2Policy:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: UserLimitedEC2
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action: ec2:RunInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetA}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetB}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetC}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${BasicSSHAccessSecurityGroup.GroupId}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:key-pair/${AuthorizedKeyPair}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:network-interface/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}::image/ami-*'
          Condition:
            # Both conditions merged under one key (duplicate mapping keys are invalid YAML)
            StringLikeIfExists:
              ec2:Vpc: !Ref Vpc
              ec2:InstanceType: !Ref EC2AllowedInstanceTypes
        - Effect: Allow
          Action:
            - ec2:TerminateInstances
            - ec2:StopInstances
            - ec2:StartInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
          Condition:
            StringEquals:
              ec2:ResourceTag/Owner: !Ref UserName
    Users:
      - !Ref IAMUser
These IAM permissions restrict users to running EC2 instances within a limited set of subnets, within a single VPC and security group. Users are then only able to start/stop/terminate instances which have been tagged with their IAM user in the Owner tag.
What I'd like to be able to do is allow users to also create and delete any additional tags on their EC2 resources, such as setting the Name tag. What I can't work out is how I can do this without also enabling them to change the Owner and PrincipalId tags on resources they don't "own".
Is there a way one can limit the ec2:createTags and ec2:deleteTags actions to prevent users from setting certain tags?
After much sifting through the AWS EC2 documentation I found the following: Resource-Level Permissions for Tagging
This gives the example:
Use with the ForAllValues modifier to enforce specific tag keys if they are provided in the request (if tags are specified in the request, only specific tag keys are allowed; no other tags are allowed). For example, the tag keys environment or cost-center are allowed:
"ForAllValues:StringEquals": { "aws:TagKeys": ["environment","cost-center"] }
Since what I want to achieve is essentially the opposite of this (allow users to specify all tags, with the exception of specific tag keys), I have been able to prevent users from creating/deleting the Owner and PrincipalId tags by adding the following PolicyDocument statement to my user policy in my CloudFormation template:
- Effect: Allow
  Action:
    - ec2:CreateTags
    - ec2:DeleteTags
  Resource:
    - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:*/*'
  Condition:
    "ForAllValues:StringNotEquals":
      aws:TagKeys:
        - "Owner"
        - "PrincipalId"
This permits users to create/delete any tags they wish, so long as they aren't the Owner or PrincipalId.
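The effect of that condition can be illustrated with a small Python model (a simplification of how ForAllValues:StringNotEquals evaluates aws:TagKeys, not the real IAM policy engine):

```python
# Tag keys the policy refuses to let users create or delete
DENIED_KEYS = {"Owner", "PrincipalId"}

def tags_allowed(requested_keys):
    """ForAllValues semantics: every tag key in the request must satisfy
    StringNotEquals against the listed values for the statement to allow it."""
    return all(key not in DENIED_KEYS for key in requested_keys)

print(tags_allowed(["Name", "Environment"]))  # True
print(tags_allowed(["Name", "Owner"]))        # False
```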