In short, I want to enable CloudTrail data events for several objects in different S3 buckets. I can list all the objects directly when creating the trail from CloudFormation, but I want to add them at a later point in time.
I create an AWS CloudTrail trail in a CloudFormation stack and export the trail's ARN.
Then, when creating the objects in the S3 buckets for which I need CloudTrail data events, I want to add them to this existing trail.
Here is the spot in the console where I can add it manually:
CloudTrail AWS Console (screenshot)
So, I am looking to add data events to an existing CloudTrail trail via CloudFormation.
I have looked through the entire documentation several times, and I can only see a way to add them while creating the trail:
Create a CloudWatch Events Rule for an Amazon S3 Source (AWS CloudFormation Template) - CodePipeline
Please advise: what is the resource type that supports this?
You can probably get some hints from the CFT I have created: an S3 event (a PutObject operation) logs the event details into a separate bucket, from where a CloudWatch Events rule triggers the execution of a Step Functions state machine.
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: true
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: true
        ReadWriteType: All
    IncludeGlobalServiceEvents: true
    IsLogging: true
    IsMultiRegionTrail: true
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz
When you deploy this CFT, it updates the existing trail with CloudTrail data events as the trigger point.
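If you need to add more data event selectors later without redeploying the stack, one option outside CloudFormation is the CloudTrail PutEventSelectors API. Below is a minimal boto3 sketch; the trail name and bucket ARN are taken from the template above and are only illustrative:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Replaces the trail's event selectors, so include any selectors you want to keep
cloudtrail.put_event_selectors(
    TrailName='xyz',
    EventSelectors=[
        {
            'ReadWriteType': 'All',
            'IncludeManagementEvents': True,
            'DataResources': [
                {
                    'Type': 'AWS::S3::Object',
                    # the trailing slash selects every object under this bucket/prefix
                    'Values': ['arn:aws:s3:::s3-event-step-bucket/'],
                },
            ],
        },
    ],
)

Note that PutEventSelectors overwrites the trail's existing selectors, so any configuration managed this way will drift from what the CloudFormation template declares.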
Related
I am faced with the following situation:
There is an EC2 instance in, say, eu-west-1.
When selecting Snapshots in the EC2 service, I see that periodically, every 7 days at the exact same time, a snapshot is taken of a particular image.
The problem is I cannot find:
any related policy on Lifecycle Manager service
any relevant Lambda function that could carry out such a task.
By what other (managed) means could such a process be carried out periodically with such accurate timing?
edit: The corresponding CloudTrail log entry is:
(actual values regarding user, event and request id have been scrambled of course)
AWS access key:
AWS region: eu-west-1
Error code:
Event ID: 454g0236-x4e6-43c1-3565-4xb6d541c2h1
Event name: CreateSnapshot
Event source: ec2.amazonaws.com
Event time: 2019-11-23, 05:00:44 AM
Read only: false
Request ID: zedfbc42-2513-459e-3241-ffcb8442ba44
Source IP address: events.amazonaws.com
User name: g45tg34m3l53mmm53333421knbb43
There are multiple possible options:
Check CloudWatch Events to see whether a rule is triggering it. This is most probably the case here, since the source IP address events.amazonaws.com in the CloudTrail entry indicates the call was made by CloudWatch Events (see the sketch below).
A cron job on an EC2 instance.
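A minimal boto3 sketch for the first option, assuming default credentials for the account in question; it lists the CloudWatch Events rules in eu-west-1 together with their schedule and targets so you can spot the one firing CreateSnapshot every 7 days (pagination is omitted for brevity):

import boto3

events = boto3.client('events', region_name='eu-west-1')

# List every rule and its targets; a scheduled rule pointing at the built-in
# "EC2 CreateSnapshot API call" target (or a Lambda) is the usual culprit.
for rule in events.list_rules()['Rules']:
    targets = events.list_targets_by_rule(Rule=rule['Name'])['Targets']
    print(rule['Name'],
          rule.get('ScheduleExpression', '(event pattern)'),
          [t['Arn'] for t in targets])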
If I understood your question, you are looking for a way to know whether Lifecycle Manager is available for EC2 snapshots.
The links below should help:
For enabling a custom snapshot lifecycle policy manually, refer to Snapshot Lifecycle.
For automating a solution for the same, refer to automation of snapshot lifecycle.
I have a Lambda that copies data from Redshift to S3.
I am trying to find the logs in CloudWatch when I manually trigger the Lambda. I click Logs, search under "Log groups", and cannot see them.
I have enabled logs on Redshift and S3, and assume any Lambda I create generates logs.
The end goal is to set up "log groups" per service so that I can subscribe through Kinesis and send the data to Redshift.
If I try to 'Create log group' under Actions, I can create '/aws-s3/test' for example, but I don't know what a log stream is, or how to send all S3 logs from a particular folder to S3.
Where are the logs?
The logs from the AWS Lambda function will be automatically created in Amazon CloudWatch Logs, in a log group named /aws/lambda/<function-name>.
However, you must ensure that the Lambda function has permission to use CloudWatch Logs.
This is normally done by assigning the AWSLambdaBasicExecutionRole managed policy to the IAM Role used by the Lambda function. It contains the permissions:
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
These allow the Lambda function to create its log group and log stream, and to write the log entries.
See: AWS Lambda Execution Role - AWS Lambda
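If the policy is missing, it can be attached with a single IAM call. A minimal boto3 sketch, assuming the function's execution role is named my-lambda-role (a hypothetical name):

import boto3

iam = boto3.client('iam')

# Attach the AWS managed policy that grants CreateLogGroup/CreateLogStream/PutLogEvents
iam.attach_role_policy(
    RoleName='my-lambda-role',  # replace with your function's execution role name
    PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole',
)

After the next invocation, the /aws/lambda/<function-name> log group should appear automatically.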
I am trying to create a CloudFormation Template (CFT) for an S3 bucket that needs to be "PublicRead" and that also has "Requester Pays" turned on.
I have looked at the documentation for S3 Bucket CFTs: AWS::S3::Bucket - AWS CloudFormation
Also I have looked at the documentation for "Requester Pays", but it fails to mention anything about CFTs. It only references enabling it through the console and with the REST API:
Requester Pays Buckets - Amazon Simple Storage Service
Right now we are trying to get all our infrastructure into infrastructure as code, but this is a somewhat large blocker for that. I have heard that other people have had trouble with CFTs not supporting some features from AWS services, but usually those are for unpopular/newer services. I would think that CFT would support all the options that S3 has for buckets.
You are correct. The CloudFormation AWS::S3::Bucket resource does not support Requester Pays.
To enable it, you would need to make an API call such as put_bucket_request_payment():
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download.
response = client.put_bucket_request_payment(
    Bucket='string',
    RequestPaymentConfiguration={
        'Payer': 'Requester'|'BucketOwner'
    }
)
This could be done by adding an AWS Lambda custom resource to the CloudFormation template, or by using the AWS CLI from an Amazon EC2 instance that is created as part of the stack.
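For the custom resource route, here is a minimal sketch of the Lambda handler, assuming the function is defined inline (ZipFile) in the same template so that CloudFormation's cfnresponse helper is available, and that the bucket name is passed in through the custom resource's properties:

import boto3
import cfnresponse  # bundled by CloudFormation for inline (ZipFile) Python functions

s3 = boto3.client('s3')

def handler(event, context):
    try:
        bucket = event['ResourceProperties']['BucketName']
        if event['RequestType'] in ('Create', 'Update'):
            # Turn on Requester Pays for the bucket
            s3.put_bucket_request_payment(
                Bucket=bucket,
                RequestPaymentConfiguration={'Payer': 'Requester'}
            )
        # On Delete you could switch back to 'BucketOwner'; skipped here
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(err)})

The template would then declare a Custom:: resource whose ServiceToken points at this function, pass the bucket name as a property, and add a DependsOn on the AWS::S3::Bucket resource so the bucket exists before the call is made.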
I see that, in the TaskDefinition properties, one can define two kinds of roles: ExecutionRoleArn and TaskRoleArn.
I tried to understand both from the documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html
But I still don't understand the reasoning behind having two. Can someone explain why it is done this way?
TaskRoleArn refers to the ARN of the role that your application code running in the task assumes in order to call other AWS services or access AWS resources, such as the permissions needed to write data into a DynamoDB table or read from an S3 bucket.
ExecutionRoleArn refers to the ARN of the role that the ECS agent uses on your behalf, covering the permissions required for publishing logs to CloudWatch and for pulling container images from Amazon ECR. The sketch below shows where each one is used.
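A minimal sketch of the same split using the RegisterTaskDefinition API (the role ARNs, image, and sizes below are illustrative); CloudFormation's TaskRoleArn and ExecutionRoleArn map directly to the taskRoleArn and executionRoleArn parameters:

import boto3

ecs = boto3.client('ecs')

ecs.register_task_definition(
    family='my-task',
    # Used by the ECS agent: pull the image from ECR, write logs to CloudWatch
    executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',
    # Assumed by the application inside the container: e.g. DynamoDB or S3 access
    taskRoleArn='arn:aws:iam::123456789012:role/myAppTaskRole',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    containerDefinitions=[
        {
            'name': 'app',
            'image': '123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:latest',
            'essential': True,
        },
    ],
)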
I want to know if we can create a CloudWatch event that is triggered on an S3 bucket every time a change occurs on S3. For example, if a file is uploaded to S3, we receive an email.
I am using the Serverless Framework. I found in the Serverless documentation only things related to EC2, but not much on S3. So please, if anyone knows how to use CloudWatch with S3, I am all ears.
https://serverless.com/framework/docs/providers/aws/events/s3/
e.g.
functions:
  emailOnUpload:
    handler: email.handler
    events:
      - s3:
          bucket: photos
          event: s3:ObjectCreated:*
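For the email itself, a common pattern is to have the handler publish to an SNS topic that your address is subscribed to. A minimal sketch of the handler, assuming a TOPIC_ARN environment variable set in serverless.yml (both the topic and the variable are hypothetical); the module is named notify.py here to avoid shadowing Python's built-in email package, so the handler line above would then read notify.handler:

import os
import boto3

sns = boto3.client('sns')

def handler(event, context):
    # One notification per uploaded object in the S3 event batch
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        sns.publish(
            TopicArn=os.environ['TOPIC_ARN'],  # hypothetical, set in serverless.yml
            Subject='New object uploaded',
            Message=f'File s3://{bucket}/{key} was uploaded.',
        )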