Enable object logging on S3 bucket via CloudFormation

In AWS S3, you can visit the console and add 'Object-level logging' to a bucket. You create or select a pre-existing trail and select read and write log types.
Now I am creating buckets via YAML CloudFormation and want to add a pre-existing trail (or create a new one) to these too. How do I do that? I can't find any examples.

Object logging for S3 buckets with CloudTrail is done by defining so-called event selectors for data events in CloudTrail. That is available through CloudFormation as well. The following CloudFormation template shows how it's done. The important part is the lower half (the upper half just sets up an S3 bucket that CloudTrail can log to):
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  s3BucketForTrailData:
    Type: "AWS::S3::Bucket"
  trailBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref s3BucketForTrailData
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}"
          - Effect: Allow
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:PutObject"
            Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}/AWSLogs/${AWS::AccountId}/*"
            Condition:
              StringEquals:
                "s3:x-amz-acl": "bucket-owner-full-control"
  s3BucketToBeLogged:
    Type: "AWS::S3::Bucket"
  cloudTrailTrail:
    Type: "AWS::CloudTrail::Trail"
    DependsOn:
      - trailBucketPolicy
    Properties:
      IsLogging: true
      S3BucketName: !Ref s3BucketForTrailData
      EventSelectors:
        - DataResources:
            - Type: "AWS::S3::Object"
              Values:
                - "arn:aws:s3:::" # log data events for all S3 buckets
                - !Sub "${s3BucketToBeLogged.Arn}/" # log data events for the S3 bucket defined above
          IncludeManagementEvents: true
          ReadWriteType: All
For more details check out the CloudFormation documentation for CloudTrail event selectors.

It's the same approach, but this is how I have done it:
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: Yes
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: Yes
        ReadWriteType: All
    IncludeGlobalServiceEvents: Yes
    IsLogging: Yes
    IsMultiRegionTrail: Yes
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz

Related

Deploy AWS Lambda with function URL via CloudFormation

As of a few days ago, AWS Lambdas can be exposed as web services directly, without an API Gateway.
This works fine when setting up through the UI console, but I can't seem to get it done with CloudFormation, because the resource policy is not attached with AuthType: NONE. And without the policy, I get "message": "Forbidden" from AWS when trying to access the Lambda through the function URL.
My Lambda is the following:
exports.handler = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify("Hello World")
  }
}
and here’s the CFN template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  stackName:
    Type: String
  lambdaFile:
    Type: String
  lambdaBucket:
    Type: String
Resources:
  lambdaRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Action:
              - "sts:AssumeRole"
            Effect: "Allow"
            Principal:
              Service:
                - "lambda.amazonaws.com"
      Policies:
        - PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Action:
                  - "logs:CreateLogGroup"
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                Effect: "Allow"
                Resource:
                  - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${stackName}:*"
          PolicyName: "lambda"
  runtimeLambdaFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      Code:
        S3Bucket: !Ref lambdaBucket
        S3Key: !Ref lambdaFile
      Environment:
        Variables:
          NODE_ENV: production
      FunctionName: !Sub "${stackName}-runtime"
      Handler: runtime.handler
      MemorySize: 128
      Role: !GetAtt lambdaRole.Arn
      Runtime: "nodejs14.x"
      Timeout: 5
  lambdaLogGroup:
    Type: "AWS::Logs::LogGroup"
    Properties:
      LogGroupName: !Sub "/aws/${stackName}"
      RetentionInDays: 30
  runtimeLambdaUrl:
    Type: "AWS::Lambda::Url"
    Properties:
      AuthType: NONE
      TargetFunctionArn: !Ref runtimeLambdaFunction
Outputs:
  runtimeLambdaUrl:
    Value: !GetAtt runtimeLambdaUrl.FunctionUrl
The interesting thing is that I can add the policy through the UI console, and then it works.
(The question originally included console screenshots of the function URL configuration, omitted here.) Right after the CFN deployment, the function URL has no resource-based policy attached. Opening "Edit" and clicking "Save" without changing anything makes the console attach the missing policy (noted in a blue info box). After that, the function can be accessed via its URL.
I tried to add the policy to my CFN stack, either standalone as AWS::IAM::Policy (but then it is not a resource-based policy) or as an additional statement on the lambdaRole. In either case, I can't add a Principal, so the policy has no effect.
Does anybody know how I can make a pure CloudFormation deployment for a Lambda with a function URL? Or is this a bug in CloudFormation and/or Lambda?
Your template is missing an AWS::Lambda::Permission resource, which is why it does not work. You already know what the permissions should be from inspecting the AWS console, so you can recreate them using AWS::Lambda::Permission, which allows you to specify FunctionUrlAuthType.
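A minimal sketch of the missing resource (the logical name runtimeLambdaUrlPermission is just illustrative; Principal: "*" combined with FunctionUrlAuthType: NONE makes the URL publicly invocable, matching AuthType: NONE on the URL itself):
  runtimeLambdaUrlPermission:
    Type: "AWS::Lambda::Permission"
    Properties:
      Action: "lambda:InvokeFunctionUrl"    # invoke via the function URL, not the plain API
      FunctionName: !Ref runtimeLambdaFunction
      FunctionUrlAuthType: NONE             # must match the AuthType on AWS::Lambda::Url
      Principal: "*"                        # anyone may call the public URL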

Unable to validate the following destination configurations in CloudFormation for S3 to SQS notifications with a configured SQS policy

I get the status UPDATE_FAILED for an S3 bucket with logical ID MyBucket, explained by the following status reason in the CloudFormation console:
Unable to validate the following destination configurations (Service:
Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID:
ABCDEFGHIJK; S3 Extended Request ID:
Aqd2fih3ro981DED8wq48io9e51rSD5e3Fo3iw5ue31br;
Proxy: null)
I have the following CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-bucket-name
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:Put
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: jpg
            Queue: !GetAtt MyQueue.Arn
  MyQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: my-queue
      KmsMasterKeyId: alias/an-encryption-key
  MyQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref MyQueue
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - s3.amazonaws.com
            Action: SQS:SendMessage
            Resource: !GetAtt MyQueue.Arn
  EncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: '2012-10-17'
        Id: some-id
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: "kms:*"
            Resource: '*'
      KeyUsage: ENCRYPT_DECRYPT
  EncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: alias/an-encryption-key
      TargetKeyId: !Ref EncryptionKey
What changes should I make to the template in order to make the CloudFormation stack succeed?
The status reason is too vague for me to understand what is going wrong.
I know that it is related to the notification configuration, because CloudFormation succeeds if I remove it.
Other similar posts on Stack Overflow mention a missing or inaccurate queue policy, but since I have a queue policy I do not think that is the problem.
The problem is that, since server-side encryption is enabled on the queue, S3 must be able to:
- let KMS generate an appropriate data key
- decrypt using the EncryptionKey
Add a statement to the EncryptionKey's KeyPolicy with the S3 service as principal that allows the above-mentioned actions:
- Effect: Allow
  Principal:
    Service: s3.amazonaws.com
  Action:
    - kms:GenerateDataKey
    - kms:Decrypt
  Resource: "*"
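In context, the EncryptionKey resource from the template above would then look something like this (a sketch; the Sid on the new statement is just illustrative):
  EncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      KeyUsage: ENCRYPT_DECRYPT
      KeyPolicy:
        Version: '2012-10-17'
        Id: some-id
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: "kms:*"
            Resource: '*'
          - Sid: AllowS3ToUseTheQueueKey   # illustrative Sid
            Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action:
              - kms:GenerateDataKey
              - kms:Decrypt
            Resource: "*"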

How to migrate SNS and SQS using CloudFormation?

My AWS account has 250+ SNS topics and SQS queues that I need to migrate from one region to another using CloudFormation. Can anyone help me write a template for this in YAML?
Resources:
  T1:
    Type: 'AWS::SNS::Topic'
    Properties: {}
  Q1:
    Type: 'AWS::SQS::Queue'
    Properties: {}
  Q1P:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      Queues:
        - !Ref Q1
      PolicyDocument:
        Id: AllowIncomingAccess
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Ref AWS::AccountId
            Action:
              - sqs:SendMessage
              - sqs:ReceiveMessage
            Resource:
              - !GetAtt Q1.Arn
          - Effect: Allow
            Principal: '*'
            Action:
              - sqs:SendMessage
            Resource:
              - !GetAtt Q1.Arn
  T1SUB:
    Type: 'AWS::SNS::Subscription'
    Properties:
      Protocol: sqs
      Endpoint: !GetAtt Q1.Arn
      TopicArn: !Ref T1
You can try using Former2, an open-source tool:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account. By making the relevant calls using the AWS JavaScript SDK, Former2 will scan across your infrastructure and present you with the list of resources for you to choose which to generate outputs for.

CloudFormation: separate CloudFormation templates for S3 bucket and Lambda

I have created a CloudFormation template to configure an S3 bucket with an event notification that calls a Lambda function. The Lambda is triggered whenever a new object is created in the bucket.
The problem I have is that when I delete the stack, the bucket is also deleted. For debugging and testing purposes I had to delete the stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ImageProcessor.Arn
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref ImageProcessor
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
To resolve this, I separated the two resources into separate templates using Outputs. The problem with this is that I cannot delete the Lambda function stack, because it is being referenced by the bucket stack.
I want to know what the right approach is. Is it really required to separate these two resources? I believe the Lambda function will need to be changed frequently.
If yes, what is the correct way to do it?
If not, how should I handle the need to make changes?
The approach using Outputs and Imports will always create dependencies and will not allow deletion. This is generic behavior for any resource. How do we deal with deleting in this case? Is it a good idea to use this approach?
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 's3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
Outputs:
  ImageProcessingARN:
    Description: ARN of the function
    Value:
      Fn::Sub: ${ImageProcessor.Arn}
    Export:
      Name: ImageProcessingARN
  ImageProcessingName:
    Description: Name of the function
    Value: !Ref ImageProcessor
    Export:
      Name: ImageProcessingName
AWSTemplateFormatVersion: '2010-09-09'
Description: Test
Parameters:
  BucketName:
    Description: Name of the bucket
    Type: String
    Default: imageprocess-bucket
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function:
              Fn::ImportValue: ImageProcessingARN
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName:
        Fn::ImportValue: ImageProcessingName
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
There is no such thing as the right approach; it almost always depends on your unique situation. Strictly speaking, it is not required to separate the resources into different CloudFormation templates. A Lambda function that changes a lot is also not a sufficient reason for separating the resources.
You seem to be separating the resources correctly in two different stacks. You just do not like that you have to delete the S3 bucket first because it makes debugging more difficult.
If my assumption is correct that you want to delete or update the Lambda CloudFormation stack frequently while not wanting to delete S3 bucket, then there are at least 2 solutions to this problem:
Put a DeletionPolicy and an UpdateReplacePolicy on your S3 bucket (see the sketch after this list). By adding these attributes you can delete the CloudFormation stack while keeping the S3 bucket. This allows you to keep the S3 bucket and the Lambda function in one CloudFormation template. To create the new stack again, remove the S3 bucket resource from the template and manually import the resource back into the CloudFormation stack later.
Use Queue Configuration as Notification Configuration. This is a good approach if you plan on separating the CloudFormation Template in a S3 bucket template and a Lambda function template (a decision based on frequency of change and dependencies between the two templates). Put an SQS queue in the S3 bucket template. Create the CloudFormation stack based on the S3 bucket template. Use the SQS arn (as a CloudFormation template configuration parameter or use the ImportValue intrinsic function) in the Lambda function stack and let SQS trigger the Lambda function. I think this is the best approach since you can now remove the Lambda function stack without having to delete the S3 bucket stack. This way you effectively reduce the coupling between the two CloudFormation stacks since you make the SQS in the S3 bucket stack unaware of potential Lambda function listeners.
Finally, I think it is still possible to delete the S3 bucket CloudFormation stack first and the Image Processing Lambda CloudFormation stack second, although I assume this is not something you typically want to do.
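For the first option, a minimal sketch of the retention attributes on the bucket resource (DeletionPolicy and UpdateReplacePolicy are standard CloudFormation resource attributes; other properties are abbreviated):
  Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain        # keep the bucket when the stack is deleted
    UpdateReplacePolicy: Retain   # keep the old bucket if an update forces replacement
    Properties:
      BucketName: !Ref BucketName
For the second option, the Lambda stack would consume the queue exported by the S3 bucket stack, for example via an event source mapping (a sketch; the export name NotificationQueueArn is an assumption, and the Lambda role would also need sqs:ReceiveMessage, sqs:DeleteMessage and sqs:GetQueueAttributes on the queue):
  EventSourceMapping:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !ImportValue NotificationQueueArn  # exported by the S3 bucket stack
      FunctionName: !Ref ImageProcessor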

Kinesis Firehose Delivery Stream is unable to assume role while using CloudFormation

I am writing a CloudFormation template that creates a Kinesis Firehose delivery stream and sends the data to an S3 bucket. The source is a Kinesis stream. The template creates the S3 bucket, policies, and roles, but when it tries to create the Kinesis Firehose delivery stream, it fails, saying it is unable to assume the role.
After some research I found that the delivery stream should not be created using the root account. I tried creating a new user, but it still gave me the same error.
# creates the Kinesis Stream
KinesisStream:
  Type: AWS::Kinesis::Stream
  Properties:
    Name: HealthApp
    RetentionPeriodHours: 24
    ShardCount: 8
# creates the firehose delivery stream
KinesisFirehoseDeliveryStream:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamName: HealthAppFirehose
    DeliveryStreamType: KinesisStreamAsSource
    KinesisStreamSourceConfiguration:
      KinesisStreamARN:
        Fn::GetAtt:
          - KinesisStream
          - Arn
      RoleARN:
        Fn::GetAtt:
          - FirehoseDeliveryIAMRole
          - Arn
    S3DestinationConfiguration:
      BucketARN: !GetAtt MyS3Bucket.Arn
      Prefix: cloudformation-test/kinesis-fh
      BufferingHints:
        IntervalInSeconds: 60
        SizeInMBs: 100
      CloudWatchLoggingOptions:
        Enabled: 'false'
      CompressionFormat: UNCOMPRESSED
      RoleARN:
        Fn::GetAtt:
          - FirehoseDeliveryIAMRole
          - Arn
  DependsOn:
    - FirehoseDeliveryIAMPolicy
FirehoseDeliveryIAMRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        Effect: Allow
        Principal:
          Service: firehose.amazonaws.com
        Action: sts:AssumeRole
        Condition:
          StringEquals:
            sts:ExternalId: ACCOUNT_NUMBER
FirehoseDeliveryIAMPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: HealthAppPolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - s3:AbortMultipartUpload
            - s3:GetBucketLocation
            - s3:GetObject
            - s3:ListBucket
            - s3:ListBucketMultipartUploads
            - s3:PutObject
          Resource:
            - arn:aws:s3:::health-app-bucket/cloudformation-test/kinesis-fh*
        - Effect: Allow
          Action:
            - kinesis:DescribeStream
            - kinesis:GetShardIterator
            - kinesis:GetRecords
          Resource:
            Fn::GetAtt:
              - KinesisStream
              - Arn
    Roles:
      - Ref: FirehoseDeliveryIAMRole
  DependsOn:
    - KinesisStream

Outputs:
  kinesisStreamArn:
    Description: Kinesis Stream ARN
    Value:
      Fn::GetAtt:
        - KinesisStream
        - Arn
  firehoseDeliveryStreamArn:
    Description: Firehose Delivery Stream ARN
    Value:
      Fn::GetAtt:
        - KinesisFirehoseDeliveryStream
        - Arn
  firehoseDeliveryRoleArn:
    Description: Firehose Delivery Role ARN
    Value:
      Fn::GetAtt:
        - FirehoseDeliveryIAMRole
        - Arn
I want the delivery stream to be created successfully. Any help would be appreciated.
Thank you
Two things to check for:
First, I wonder if ACCOUNT_NUMBER is being set and interpreted properly. You can check this by removing the entire Condition statement. As a test (not for production), remove the following and see if it works:
Condition:
  StringEquals:
    sts:ExternalId: ACCOUNT_NUMBER
Second, does the user you created have access to create IAM policies and roles? You said:
I tried creating a new user but it still gave me the same error
Again, for testing/debugging only, you can give this user the following policy and see if it works:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "iam:*",
    "Resource": "*"
  }
}
If that is the problem, use this to determine the actual policies needed by the IAM user that is executing the CloudFormation.
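If the ACCOUNT_NUMBER placeholder turns out to be the issue, one possible fix (a sketch, assuming the external ID is meant to be your own account ID, which is what Firehose presents when assuming the role) is to let CloudFormation substitute it instead of hardcoding it:
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        Service: firehose.amazonaws.com
      Action: sts:AssumeRole
      Condition:
        StringEquals:
          sts:ExternalId: !Ref AWS::AccountId  # resolved to the account ID at deploy time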