AWS IAM: Grant permission based on DynamoDB attribute values

I have a DynamoDB table with the following attribute values:

| UserID | Name | paid  |
|--------|------|-------|
| 0001   | Sam  | false |

I have an IAM policy document written in a serverless.yml file as follows:
- PolicyName: PaidPolicy
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: "Allow"
        Action:
          - "s3:PutObject"
        Resource:
          - Fn::Join:
              - ""
              - - arn:aws:s3:::uploads
                - "/protected/*"
Is it possible to change the effect of the policy to Allow or Deny based on the paid column value?
Note: I found that it is possible to add conditions using the Condition attribute, but I could not find a way to refer to the values of the DynamoDB table.

A quick search through this doc didn't turn up anything, so the solution I have in mind is this (a sketch of the Lambda is shown after the list):
1. Create user groups for paid and free users.
2. Attach policies that allow and deny the S3 actions to the respective groups.
3. Create a Lambda function that puts the user into the appropriate group based on the column value in DynamoDB.
4. Trigger the Lambda on DynamoDB table changes via a stream.
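A minimal sketch of such a Lambda, assuming the stream is configured to emit new images, that `UserID` maps directly to an IAM user name, and that `paid-users` / `free-users` groups already exist (all of those names are illustrative, not from the original setup):

```python
import boto3

iam = boto3.client("iam")

# Hypothetical group names; the real ones would carry the allow/deny S3 policies.
PAID_GROUP = "paid-users"
FREE_GROUP = "free-users"


def handler(event, context):
    """Triggered by the DynamoDB stream; moves users between IAM groups."""
    for record in event.get("Records", []):
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"]["NewImage"]
        user = new_image["UserID"]["S"]
        # The attribute may be stored as a boolean or as the string "true"/"false".
        paid_attr = new_image["paid"]
        paid = paid_attr.get("BOOL", paid_attr.get("S") == "true")

        add_to, remove_from = (PAID_GROUP, FREE_GROUP) if paid else (FREE_GROUP, PAID_GROUP)
        iam.add_user_to_group(GroupName=add_to, UserName=user)
        try:
            iam.remove_user_from_group(GroupName=remove_from, UserName=user)
        except iam.exceptions.NoSuchEntityException:
            pass  # the user was not in the other group yet
```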

Related

How to create S3 buckets dynamically in an Azure DevOps CI/CD pipeline

I want to automate the process of bucket creation through a CI/CD pipeline based on the data in one of the YAML files. I have a bucket.yaml file that contains the names of all the buckets. This file keeps changing, as more bucket names will be added in the future. Currently, this is how bucket.yaml looks:

BucketName:
  - test-bucket
  - test-bucket2
  - test-bucket3
I have a template.yaml file, which is a CloudFormation template for S3 bucket creation. Here is how it looks:

Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Properties:
      BucketName: # this will come from bucket.yaml
Now, template.yaml will fetch the bucket names from bucket.yaml and should create the 3 buckets listed there. If someone adds 2 more buckets to bucket.yaml, template.yaml should create those 2 new buckets as well. Also, if someone deletes a bucket name from bucket.yaml, that bucket should be deleted too. I couldn't find a process for this in my research, just information in bits and pieces. So here are my specific questions, if this is possible:
1. How do I fetch the bucket names from bucket.yaml so that template.yaml creates all the buckets?
2. If someone updates, adds, or deletes a bucket name in bucket.yaml, template.yaml should update the buckets accordingly.
3. Also, please explain how I would do this through a CI/CD pipeline in Azure DevOps.
About your first question:
How do I fetch the bucket names from bucket.yaml so that template.yaml creates all the buckets?
In bucket.yaml you can use parameters to set up the BucketName. For example:

parameters:
  - name: BucketName
    type: object
    default:
      - test-bucket
      - test-bucket2
      - test-bucket3

steps:
  - ${{ each value in parameters.BucketName }}:
      - script: echo ${{ value }}

The step here loops through the values of the BucketName parameter.
In template.yaml you can call bucket.yaml as below:

trigger:
  - main

extends:
  template: bucket.yaml
For your second question:
If someone updates, adds, or deletes a bucket name in bucket.yaml, template.yaml should update the buckets accordingly.
There is no easy way to do this. You can write a script that runs in the pipeline and does the following (a sketch is shown after this list):
- List all the buckets that have already been created; this is the list of existing buckets.
- Compare the list of existing buckets with the values of the BucketName parameter to determine which buckets need to be added and which need to be deleted.
- If a bucket is listed in the parameter but not among the existing buckets, it should be created as a new bucket.
- If a bucket is listed among the existing buckets but not in the parameter, it should be deleted.
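A rough sketch of such a script with boto3; how the desired names are passed in is an assumption here (they would ultimately come from bucket.yaml):

```python
import sys
import boto3

s3 = boto3.client("s3")

# Desired bucket names, e.g. passed on the command line by the pipeline:
#   python reconcile_buckets.py test-bucket test-bucket2 test-bucket3
desired = set(sys.argv[1:])

# In practice you would restrict this to buckets owned by the pipeline
# (for example via tags, as discussed in the next answer).
existing = {b["Name"] for b in s3.list_buckets()["Buckets"]}

for name in sorted(desired - existing):
    print(f"Creating bucket {name}")
    s3.create_bucket(Bucket=name)  # outside us-east-1, also pass CreateBucketConfiguration

for name in sorted(existing - desired):
    print(f"Deleting bucket {name}")
    s3.delete_bucket(Bucket=name)  # the bucket must already be empty
```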
BucketName:
  - test-bucket
  - test-bucket2
  - test-bucket3
The requirements imply that all S3 buckets will be created in the same way and that no deviation from the given CloudFormation template (AWS::S3::Bucket) is required.
The requirements also mean we need to track which S3 buckets should be deleted. CloudFormation will not delete the S3 buckets, since the template snippet contains a DeletionPolicy of Retain.
Solution:
The S3 buckets can be tagged in a specific way to identify them as being owned by the current CI/CD pipeline. The S3 buckets can then be listed, and any bucket that carries the correct tag yet does not exist in bucket.yaml can be deleted.
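For example, with boto3 the ownership check could look roughly like this; the tag key and value are made up for illustration:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def pipeline_owned_buckets(tag_key="managed-by", tag_value="bucket-pipeline"):
    """Return the names of buckets carrying the pipeline's ownership tag."""
    owned = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            tags = s3.get_bucket_tagging(Bucket=bucket["Name"])["TagSet"]
        except ClientError:
            continue  # bucket has no tags at all
        if any(t["Key"] == tag_key and t["Value"] == tag_value for t in tags):
            owned.append(bucket["Name"])
    return owned


# Buckets that are tagged as pipeline-owned but no longer appear in
# bucket.yaml are the candidates for deletion.
```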
I would personally just create the S3 buckets required by the CI/CD pipeline using the AWS SDK and manage the S3 bucket deletion manually. If an application requires an S3 bucket, it should create the bucket in its own CloudFormation stack so that it can !Ref it and customize it the way it wants (e.g. encryption at rest, versioning, lifecycle rules, etc.).
Technical note:
For an S3 bucket to be deleted, its contents must first be deleted. This requires listing all the objects in the S3 bucket and then deleting them. Some documentation for the Java SDK [here].
Only then will the API call to delete the S3 bucket succeed.
You can get CloudFormation to delete your S3 objects using a custom resource. That said, I don't find custom resources that fun to work with, so if you can use the AWS SDK inside your CI/CD pipeline I would probably just use that.
The custom resource to delete a bucket's contents might look something like this in CloudFormation (it's a custom resource that invokes a Lambda; the Lambda deletes the S3 bucket contents when the custom resource is deprovisioned):
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-custom-resources-lambda-lookup-amiids.html
ExampleBucketOperationCustomResource:
  Type: AWS::CloudFormation::CustomResource
  DependsOn: [Bucket, ExampleBucketOperationLambdaFunction]
  Properties:
    ServiceToken: !GetAtt ExampleBucketOperationLambdaFunction.Arn
    # Custom properties
    BucketToUse: !Ref S3BucketName

ExampleBucketOperationLambdaFunctionExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: "ExampleBucketOperationLambda-ExecutionRole"
    Path: "/"
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - lambda.amazonaws.com
    Policies:
      - PolicyName: "ExampleBucketOperationLambda-CanAccessCloudwatchLogs"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:PutLogEvents
              Resource: arn:aws:logs:*:*:*
      - PolicyName: "ExampleBucketOperationLambda-S3BucketLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:ListBucket
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}"
      - PolicyName: "ExampleBucketOperationLambda-S3ObjectLevelPermissions"
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - s3:DeleteObject
                - s3:PutObject
              Resource:
                - !Sub "arn:aws:s3:::${S3BucketName}/*"

# Test payload:
# {"RequestType":"Create","ResourceProperties":{"BucketToUse":"your-bucket-name"}}
ExampleBucketOperationLambdaFunction:
  Type: AWS::Lambda::Function
  DependsOn: ExampleBucketOperationLambdaFunctionExecutionRole
  # DeletionPolicy: Retain
  Properties:
    FunctionName: "ExampleBucketOperationLambda"
    Role: !GetAtt ExampleBucketOperationLambdaFunctionExecutionRole.Arn
    Runtime: python3.8
    Handler: index.handler
    Timeout: 30
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def handler(event, context):
            eventType = event["RequestType"]
            print("The event type is: " + str(eventType))
            bucketToUse = event["ResourceProperties"]["BucketToUse"]
            print("The bucket to use: " + str(bucketToUse))
            try:
                # Requires s3:ListBucket permission
                if eventType in ["Delete"]:
                    print("Deleting everything in bucket: " + str(bucketToUse))
                    s3Client = boto3.client("s3")
                    s3Bucket = boto3.resource("s3").Bucket(bucketToUse)
                    for currFile in s3Bucket.objects.all():
                        print("Deleting file: " + currFile.key)
                        s3Client.delete_object(Bucket=bucketToUse, Key=currFile.key)
                print("All done")
                responseData = {}
                cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData)
            except Exception as e:
                responseData = {}
                errorDetail = "Exception: " + str(e)
                errorDetail = errorDetail + "\n\t More detail can be found in CloudWatch Log Stream: " + context.log_stream_name
                print(errorDetail)
                cfnresponse.send(event=event, context=context, responseStatus=cfnresponse.FAILED, responseData=responseData, reason=errorDetail)
Thanks for the above answers. I took a different path to solve this issue: I used AWS CDK to implement exactly what I wanted. I personally used AWS CDK for Python and created the infrastructure with that.
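For what it's worth, a minimal sketch of that CDK (v2, Python) approach might look like this; the bucket names and stack name are only placeholders, not the actual project:

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

# In the real setup this list would be loaded from bucket.yaml.
BUCKET_NAMES = ["test-bucket", "test-bucket2", "test-bucket3"]


class BucketsStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        for name in BUCKET_NAMES:
            s3.Bucket(
                self,
                f"Bucket-{name}",
                bucket_name=name,
                # Buckets removed from the list get deleted on the next deploy.
                removal_policy=cdk.RemovalPolicy.DESTROY,
                auto_delete_objects=True,
            )


app = cdk.App()
BucketsStack(app, "BucketsStack")
app.synth()
```

Running `cdk deploy` in the pipeline then adds and removes buckets as the list changes.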

Create IAM account with CloudFormation

I want to create an AWS IAM user account that has various permissions with CloudFormation.
I understand there are policies that let a user change their password and set up MFA for their account, as described here.
How can I require the user to set up MFA at first login, when they have to change the default password?
This is what I have. The flow so far is:
1. User account is created.
2. When the user tries to log in for the first time, they are asked to change the default password.
3. User is logged in to the AWS console.
Expected behavior:
1. User account is created.
2. When the user tries to log in for the first time, they are asked to change the default password and set up MFA using an authenticator app.
3. User is logged in to the AWS console and has permissions.
A potential flow is shown here. Is there another way?
Update:
This blog explains the flow.
Again, is there a better way? Like an automatic pop-up that would prompt the user straight away?
Update 2:
I might not have been explicit enough. What we have so far is an OK customer experience. This flow would be fluid:
1. User tries to log in.
2. Console asks for a password change.
3. Console asks to scan the QR code and enter the codes.
4. User logs in with the new password and the code from the authenticator.
5. User is not able to deactivate MFA.
Allowing users to self-manage MFA is the way to go if you are using regular IAM. You could also try AWS SSO; it's easier to manage and free.
Allow users to log in, change their password, and set up MFA, and deny everything other than that if MFA is not set up, as listed here.
We can create an IAM group with an inline policy and assign users to that group.
This is the CloudFormation for the policy listed in the docs:
Resources:
  MyIamGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: My-Group
  MyGroupPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Statement:
          - Action:
              - iam:GetAccountPasswordPolicy
              - iam:GetAccountSummary
              - iam:ListVirtualMFADevices
              - iam:ListUsers
            Effect: Allow
            Resource: "*"
          - Action:
              - iam:ChangePassword
              - iam:GetUser
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:user/${aws:username}
          - Action:
              - iam:CreateVirtualMFADevice
              - iam:DeleteVirtualMFADevice
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:mfa/${aws:username}
          - Action:
              - iam:DeactivateMFADevice
              - iam:EnableMFADevice
              - iam:ListMFADevices
              - iam:ResyncMFADevice
            Effect: Allow
            Resource:
              Fn::Join:
                - ""
                - - "arn:"
                  - Ref: AWS::Partition
                  - :iam::1234567891111:user/${aws:username}
          - NotAction:
              - iam:CreateVirtualMFADevice
              - iam:EnableMFADevice
              - iam:GetUser
              - iam:ListMFADevices
              - iam:ListVirtualMFADevices
              - iam:ListUsers
              - iam:ResyncMFADevice
              - sts:GetSessionToken
            Condition:
              BoolIfExists:
                aws:MultiFactorAuthPresent: "false"
            Effect: Deny
            Resource: "*"
      PolicyName: My-Group-Policy
      Groups:
        - Ref: MyIamGroup
I think this is the way to go; one can build on this to create users with whatever permissions they need once the user has set up MFA.
The policy template in the instructions is useful.

How to create a KMS asymmetric signing key resource with CloudFormation?

I've tried the following resource in my template:
SigningKey:
  Type: AWS::KMS::Key
  Properties:
    Description: "Auth API signing key"
    Enabled: true
    # Grant all permissions for root account
    KeyPolicy:
      Version: "2012-10-17"
      Id: "key-default-1"
      Statement:
        - Sid: "Enable IAM User Permissions"
          Effect: "Allow"
          Principal:
            - AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
          Action: "kms:*"
          Resource: "*"
    EnableKeyRotation: true
    KeyUsage: SIGN_VERIFY
But this gives an error:
The operation failed because the KeyUsage value of the CMK is
SIGN_VERIFY. To perform this operation, the KeyUsage value must be
ENCRYPT_DECRYPT.
It's also unclear from the docs where to specify the key type (e.g. RSA_2048) in the template.
According to the AWS CloudFormation docs, you specify the key type in the KeySpec field. You can also see which key specs are currently supported in that document. Also, AWS KMS does not support automatic key rotation on asymmetric CMKs; for asymmetric CMKs, omit the EnableKeyRotation property or set it to false.
The doc above also has an example of creating asymmetric CMKs that you can refer to.
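For comparison, a small boto3 sketch shows how KeySpec and KeyUsage fit together, with no key rotation enabled for the asymmetric key (recent boto3 versions accept KeySpec; older ones use CustomerMasterKeySpec):

```python
import boto3

kms = boto3.client("kms")

# Asymmetric signing key: pick a signing KeySpec and KeyUsage=SIGN_VERIFY.
response = kms.create_key(
    Description="Auth API signing key",
    KeyUsage="SIGN_VERIFY",
    KeySpec="RSA_2048",  # other signing specs exist, e.g. ECC_NIST_P256
)
print(response["KeyMetadata"]["Arn"])
```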

How do I access the current user in a CloudFormation template?

I want to create a KMS key using CloudFormation. I want to be able to provide the user executing the CloudFormation YAML file (I'll call them "cloudformation-runner") administrative access to the key they create.
I can setup the IAM policy to provide that user ("cloudformation-runner") access to the KMS Administrative APIs. However, for the user to be able to update/delete the key that was just created, I also need to specify a KeyPolicy that lets them do it. To do this, how can I get the current username ("cloudformation-runner") within the CloudFormation script?
Here is how my template for the KMS key looks; how do I get the current user as the principal?

MyKey:
  Type: AWS::KMS::Key
  Properties:
    Description: "..."
    KeyPolicy:
      Version: "2012-10-17"
      Id: "MyId"
      Statement:
        - Sid: "Allow administration of the key"
          Effect: "Allow"
          Principal:
            AWS:
              - # TODO: Get Current User
          Action:
            - "kms:Create*"
            - "kms:Describe*"
            - "kms:Enable*"
            - "kms:List*"
            - "kms:Put*"
            - "kms:Update*"
            - "kms:Revoke*"
            - "kms:Disable*"
            - "kms:Get*"
            - "kms:Delete*"
            - "kms:ScheduleKeyDeletion"
            - "kms:CancelKeyDeletion"
          Resource: "*"
I can manually hardcode the ARN for the IAM user. However, that makes the template less portable - as people need to manually update the username within this file.
You can't access the current user, because the CloudFormation template may be run by an IAM role instead of an IAM user. But you can pass the username as a parameter.
I would like to give you an example that I think works well for your context:
The "cloudformation-runner" can be a role instead of a user. This role can have a policy granting privileges to create KMS keys.
The CloudFormation template can receive the IAM username and key name as parameters. From an IAM username, you can build the user ARN using CloudFormation functions.
Besides creating the KMS key, the parameters can be used to create an IAM policy and attach it to the user, giving read/write privileges to the newly created key.
That way your key creation process can create the key and give user privileges at the same time.
Credit to this answer for the idea.
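A small sketch of that idea with boto3; the stack and parameter names here are only examples (the next answer shows the equivalent with the AWS CLI):

```python
import boto3

sts = boto3.client("sts")
cfn = boto3.client("cloudformation")

# ARN of whoever is running this deployment script (user or assumed role).
caller_arn = sts.get_caller_identity()["Arn"]

with open("stack.yml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="my-kms-stack",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "CallingUserArn", "ParameterValue": caller_arn},
    ],
)
```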
You can pass the current user's ARN as a CloudFormation parameter:

Parameters:
  ...
  CallingUserArn:
    Description: Calling user ARN
    Type: String

Resources:
  KmsKey:
    Type: AWS::KMS::Key
    Properties:
      ...
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Ref CallingUserArn
            ...

aws cloudformation deploy \
  --template-file cloudformation/stack.yml \
  --stack-name my-stack \
  --parameter-overrides CallingUserArn="$(aws sts get-caller-identity --query Arn --output text)"

How to enable IAM users to set the Name and other custom tags when limited by tag-restricted resource-level permissions in EC2

I have been playing with configuring tag-based resource permissions in EC2, using an approach similar to what is described in the answer to the following question: Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?
I have been using this in conjunction with a Lambda function that auto-tags EC2 instances, setting the Owner and PrincipalId tags based on the IAM user who called the associated ec2:RunInstances action. The approach I have been following is documented in the following AWS blog post: How to Automatically Tag Amazon EC2 Resources in Response to API Events
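Roughly, the auto-tagging Lambda in that kind of setup looks something like the sketch below; it is not the blog post's exact code, and the event field access assumes a CloudTrail-based EventBridge rule on `RunInstances`:

```python
import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    """Tag newly launched instances with the identity of the caller."""
    detail = event["detail"]
    principal_id = detail["userIdentity"]["principalId"]
    # For an IAM user the ARN ends in ":user/<name>"; other identity types differ.
    owner = detail["userIdentity"]["arn"].split("/")[-1]

    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]
    ec2.create_tags(
        Resources=instance_ids,
        Tags=[
            {"Key": "Owner", "Value": owner},
            {"Key": "PrincipalId", "Value": principal_id},
        ],
    )
```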
The combination of these two approaches has resulted in my restricted user permissions for EC2 looking like this, in my CloudFormation template:
LimitedEC2Policy:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: UserLimitedEC2
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action: ec2:RunInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetA}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetB}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:subnet/${PrivateSubnetC}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:security-group/${BasicSSHAccessSecurityGroup.GroupId}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:key-pair/${AuthorizedKeyPair}'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:network-interface/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:volume/*'
            - !Sub 'arn:aws:ec2:${AWS::Region}::image/ami-*'
          Condition:
            StringLikeIfExists:
              ec2:Vpc: !Ref Vpc
              ec2:InstanceType: !Ref EC2AllowedInstanceTypes
        - Effect: Allow
          Action:
            - ec2:TerminateInstances
            - ec2:StopInstances
            - ec2:StartInstances
          Resource:
            - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
          Condition:
            StringEquals:
              ec2:ResourceTag/Owner: !Ref UserName
    Users:
      - !Ref IAMUser
These IAM permissions restrict users to running EC2 instances within a limited set of subnets, within a single VPC and security group. Users are then only able to start/stop/terminate instances that have been tagged with their IAM user in the Owner tag.
What I'd like to do is also allow users to create and delete any additional tags on their EC2 resources, such as setting the Name tag. What I can't work out is how to do this without also enabling them to change the Owner and PrincipalId tags on resources they don't "own".
Is there a way to limit the ec2:CreateTags and ec2:DeleteTags actions to prevent users from setting certain tags?
After much sifting through the AWS EC2 documentation I found the following: Resource-Level Permissions for Tagging
This gives the example:
Use with the ForAllValues modifier to enforce specific tag keys if they are provided in the request (if tags are specified in the request, only specific tag keys are allowed; no other tags are allowed). For example, the tag keys environment or cost-center are allowed:
"ForAllValues:StringEquals": { "aws:TagKeys": ["environment","cost-center"] }
Since what I want to achieve is essentially the opposite of this (allow users to specify all tags, with the exception of specific tag keys), I have been able to prevent users from creating/deleting the Owner and PrincipalId tags by adding the following PolicyDocument statement to my user policy in my CloudFormation template:
- Effect: Allow
  Action:
    - ec2:CreateTags
    - ec2:DeleteTags
  Resource:
    - !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:*/*'
  Condition:
    "ForAllValues:StringNotEquals":
      aws:TagKeys:
        - "Owner"
        - "PrincipalId"
This permits users to create/delete any tags they wish, so long as they aren't the Owner or PrincipalId.