I want to know if we can create a CloudWatch event that triggers every time a change occurs on an S3 bucket. For example, if a file is uploaded to S3, we receive an email.
I am using the Serverless Framework. The Serverless documentation only covers CloudWatch events for EC2, with not much on S3. So if anyone knows how to use CloudWatch with S3, I am all ears.
https://serverless.com/framework/docs/providers/aws/events/s3/
e.g.
functions:
  emailOnUpload:
    handler: email.handler
    events:
      - s3:
          bucket: photos
          event: s3:ObjectCreated:*
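A minimal sketch of what `email.handler` could look like, assuming SES is used for the email and the sender/recipient addresses are placeholders (the boto3 import is kept inside the handler so the event-parsing helper can be exercised without AWS credentials):

```python
def describe_upload(event):
    """Build a human-readable summary of the S3 records in the event."""
    lines = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        lines.append(f"New object: s3://{bucket}/{key}")
    return "\n".join(lines)

def handler(event, context):
    import boto3  # available in the Lambda runtime
    body = describe_upload(event)
    # Hypothetical addresses -- replace with verified SES identities.
    boto3.client("ses").send_email(
        Source="alerts@example.com",
        Destination={"ToAddresses": ["me@example.com"]},
        Message={
            "Subject": {"Data": "S3 upload notification"},
            "Body": {"Text": {"Data": body}},
        },
    )
```

The Lambda's role would also need `ses:SendEmail` permission in addition to the usual logging permissions.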
I'm using AWS and I tried to generate a policy using Access Analyzer. The generated policy never contains the actions I expect and am most interested in, and I cannot figure out why. Moreover, the events I can see in the CloudTrail event logs do not include data events, even though I've configured data events.
I have executed the following actions:
DynamoDB CreateTable: aws dynamodb create-table --table-name ....
DynamoDB PutItem: aws dynamodb put-item --table-name xxx --item file://contents.json
S3 list: aws s3 ls s3://mygreatbucket
S3 download: aws s3 cp s3://mygreatbucket/theevengreater/file .
The only relevant event being logged in CloudTrail is the create-table event. The data events are missing, and I can't figure out what I'm doing wrong. The CloudTrail configuration says "Log All Events" in the "Data events" section for both S3 and DynamoDB.
I followed the instructions in https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html. I know it says "Data events not available – IAM Access Analyzer does not identify action-level activity for data events, such as Amazon S3 data events, in generated policies." But if so, what is the purpose of adding logging of data events to the CloudTrail configuration?
I have a Lambda that copies data from Redshift to S3.
I am trying to find the logs in CloudWatch when I manually trigger the Lambda. I click Logs, search under "Log groups", and cannot see them.
I have enabled logs on Redshift and S3, and assume any Lambda run generates logs.
The end goal is to set up log groups per service so that I can subscribe through Kinesis and send the data to Redshift.
If I try to "Create log group" under Actions, I can create "/aws-s3/test" for example, but I don't know what a log stream is, or how to send all S3 logs for a particular folder to that group.
Where are the logs?
The logs from the AWS Lambda function will be automatically created in Amazon CloudWatch Logs.
However, you must ensure that the Lambda function has permission to use CloudWatch Logs.
This is normally done by attaching the AWSLambdaBasicExecutionRole managed policy to the IAM role used by the Lambda function. It contains the permissions:
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
These allow the Lambda function to create its log group and log stream and to write the log entries.
See: AWS Lambda Execution Role - AWS Lambda
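For reference, the policy document behind AWSLambdaBasicExecutionRole looks roughly like this (sketched here as a Python dict for illustration):

```python
import json

# Approximation of the AWSLambdaBasicExecutionRole managed policy.
basic_execution_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(basic_execution_policy, indent=2))
```

Once the role has these permissions, the log group appears under /aws/lambda/<function-name> after the first invocation.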
Here's a snippet of my YAML file:
functions:
  users:
    handler: incoming.handler
    events:
      - s3:
          bucket: mybucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: incoming/
            - suffix: .zip
This will match objects created with keys like incoming/*.zip.
I want to add another key with date time for easy partitioning using Athena. So the full "path" in S3 will be something like this:
incoming/date=<timestamp>/<filename>.zip
The rules defined above do not seem to match such keys, with the result that the function does not fire.
How do I match using this technique?
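As far as I understand, S3's native filter rules are plain string prefix/suffix tests on the object key, which can be sketched as:

```python
def s3_filter_matches(key: str, prefix: str = "", suffix: str = "") -> bool:
    """Mimic S3 event notification filtering: all rules must match."""
    return key.startswith(prefix) and key.endswith(suffix)

# The partitioned key should still satisfy a prefix of "incoming/"
# and a suffix of ".zip":
print(s3_filter_matches("incoming/date=2020-01-01/data.zip", "incoming/", ".zip"))
```

So a key like incoming/date=<timestamp>/<filename>.zip ought to pass both rules, which makes the non-firing behavior surprising.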
If you cannot get this to work using pure Serverless framework functionality, you could try an alternative approach:
Set up the bucket so that an SNS notification is triggered on object creation. You can then hook up your Serverless Lambda to trigger from SNS.
See the docs here: https://docs.aws.amazon.com/en_pv/AmazonS3/latest/dev/ways-to-add-notification-config-to-bucket.html
Then use SNS filtering to look for .zip (or whatever else you want to use) to trigger only on the files you are interested in: https://serverless.com/framework/docs/providers/aws/events/sns/
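If you go the SNS route outside of Serverless, the bucket notification can be wired up with a call like the one sketched below (bucket name and topic ARN are placeholders; the configuration-building helper is separated out so it can be checked without AWS):

```python
def build_notification_config(topic_arn: str) -> dict:
    """Notification configuration sending ObjectCreated events to SNS,
    filtered to .zip keys under incoming/."""
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "incoming/"},
                            {"Name": "suffix", "Value": ".zip"},
                        ]
                    }
                },
            }
        ]
    }

# In an AWS environment:
# import boto3
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="mybucket",
#     NotificationConfiguration=build_notification_config(
#         "arn:aws:sns:us-east-1:123456789012:uploads"),
# )
```

The SNS topic also needs a policy allowing the S3 bucket to publish to it.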
good luck!
In short, I want to enable CloudTrail data events for several objects in different S3 buckets. I am able to mention all the objects directly when creating the trail from CloudFormation, but I want to add them at a later point in time.
Create an AWS CloudTrail trail in a CloudFormation stack and export the trail's ARN.
Then, when creating objects in an S3 bucket for which I need CloudTrail data events, I want to add them to this existing trail.
Here is the spot in the console where I can manually add it.
CloudTrail AWS Console
So, I am looking to add data events to an existing CloudTrail trail via CloudFormation.
I have looked through the entire documentation several times; I can only see a way to add them while creating the trail:
Create a CloudWatch Events Rule for an Amazon S3 Source (AWS CloudFormation Template) - CodePipeline
Please advise: what is the resource type that supports this?
You can probably get some hints from the CFT I created: an S3 event (e.g. a PutObject operation) logs the event details into a separate bucket, from where a CloudWatch Events rule triggers the execution of a Step Functions state machine.
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: true
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: true
        ReadWriteType: All
    IncludeGlobalServiceEvents: true
    IsLogging: true
    IsMultiRegionTrail: true
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz
When you deploy this CFT, it will update the existing trail with CloudTrail data events as the trigger point.
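Alternatively, if you want to attach data events to an already-deployed trail outside of CloudFormation, the CloudTrail API exposes PutEventSelectors. A sketch, with the trail name and object ARNs as placeholders:

```python
def build_event_selectors(object_arns: list) -> list:
    """Event selectors logging all data events for the given S3 object ARNs."""
    return [
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": object_arns}
            ],
        }
    ]

# In an AWS environment:
# import boto3
# boto3.client("cloudtrail").put_event_selectors(
#     TrailName="xyz",
#     EventSelectors=build_event_selectors(
#         ["arn:aws:s3:::s3-event-step-bucket/"]),
# )
```

Note that PutEventSelectors replaces the trail's existing selectors, so include any selectors you want to keep in the list.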
I am trying to create a CloudFormation Template (CFT) for a S3 Bucket that needs to be "PublicRead" and that also has "Requester Pays" turned on.
I have looked at the documentation for S3 Bucket CFTs: AWS::S3::Bucket - AWS CloudFormation
Also I have looked at the documentation for "Requester Pays", but it fails to mention anything about CFTs. It only references enabling it through the console and with the REST API:
Requester Pays Buckets - Amazon Simple Storage Service
Right now we are trying to get all our infrastructure into infrastructure as code, but this is a somewhat large blocker for that. I have heard that other people have had trouble with CFTs not supporting some features from AWS services, but usually those are for unpopular/newer services. I would think that CFT would support all the options that S3 has for buckets.
You are correct. The CloudFormation AWS::S3::Bucket resource does not support Requester Pays.
To enable it, you would need to make an API call such as put_bucket_request_payment():
Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download.
response = client.put_bucket_request_payment(
    Bucket='string',
    RequestPaymentConfiguration={
        'Payer': 'Requester'|'BucketOwner'
    }
)
This could be done by adding an AWS Lambda custom resource to the CloudFormation template, or by using the AWS CLI from an Amazon EC2 instance that is created as part of the stack.
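A sketch of the custom-resource approach: the Lambda reads the bucket name from the resource properties and calls put_bucket_request_payment (the cfnresponse module is provided by the Lambda runtime when the function code is defined inline in the template; the configuration helper is split out so it can be checked locally):

```python
def build_payment_config(requester_pays: bool) -> dict:
    """RequestPaymentConfiguration for put_bucket_request_payment."""
    return {"Payer": "Requester" if requester_pays else "BucketOwner"}

def handler(event, context):
    # Runs as a CloudFormation custom resource.
    import boto3
    import cfnresponse  # available for inline (ZipFile) custom resources
    try:
        if event["RequestType"] in ("Create", "Update"):
            bucket = event["ResourceProperties"]["BucketName"]
            boto3.client("s3").put_bucket_request_payment(
                Bucket=bucket,
                RequestPaymentConfiguration=build_payment_config(True),
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```

The custom resource in the template would pass BucketName in its properties and DependsOn the bucket, so Requester Pays is enabled right after the bucket is created.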