How to set metadata of an S3 object in serverless.yml - amazon-web-services

AWS S3 allows setting the metadata of an S3 object in the console, as described on this page: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-object-metadata.html.
How can I configure that in serverless.yml?

I figured it out. It can be done with the serverless-s3-sync plugin: https://yingzuo.io/set-metadata-of-s3-objects-on-serverless-yml
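For reference, a minimal sketch of what that can look like in serverless.yml, assuming the serverless-s3-sync plugin is installed from npm; the service, bucket name, local directory and file patterns below are placeholders, and the option names (s3Sync, params, CacheControl) should be double-checked against the plugin's README for the version you use:

service: my-service

provider:
  name: aws
  runtime: nodejs18.x

plugins:
  - serverless-s3-sync

custom:
  s3Sync:
    - bucketName: my-static-assets-bucket   # placeholder; an existing or CloudFormation-declared bucket
      localDir: static                      # local directory whose files are synced to the bucket
      params:                               # per-glob upload parameters applied to matching objects
        - "*.html":
            CacheControl: "no-cache"
        - "*.js":
            CacheControl: "public, max-age=31536000"

On serverless deploy the plugin syncs the files and should apply the listed parameters as system-defined metadata (the same fields the console page above describes) on each uploaded object.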

Related

Update S3 bucket policy for an Amplify-generated S3 bucket via CloudFormation

Is it possible to update or insert a new S3 bucket policy on an Amplify-generated S3 bucket using amplify override storage via CloudFormation? The documentation doesn't provide enough information on this: https://docs.amplify.aws/cli/storage/override/
You can't override the S3 bucket policy using
amplify override storage
A list of the properties you can override can be found under class CfnBucket (construct) in the CDK documentation.
I think the closest you could get is to supply a canned ACL as part of the override...
From the command line type:
amplify override storage
This will show:
✅ Successfully generated "override.ts" folder at C:\myProject\amplify\backend\storage\staticData
√ Do you want to edit override.ts file now? (Y/n) · yes
Press Return to choose yes, then update the override.ts file with the following:
import { AmplifyS3ResourceTemplate } from '@aws-amplify/cli-extensibility-helper'

export function override(resources: AmplifyS3ResourceTemplate) {
  // Apply a canned ACL to the Amplify-managed bucket
  resources.s3Bucket.accessControl = 'PublicRead'
}
You could then change PublicRead to any one of the other canned ACL values CloudFormation accepts (Private, PublicReadWrite, AuthenticatedRead, and so on).
You then need to update the backend using:
amplify push
For anyone coming to this looking to override properties on an Amplify-created S3 bucket, a possibly more useful answer, overriding the lifecycle policy, can be found here.
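For illustration only, a hedged sketch of that style of override using the lifecycle configuration; the property names follow the CDK CfnBucket L1 construct and the rule values are made up, so verify them against the CfnBucket documentation before running amplify push:
import { AmplifyS3ResourceTemplate } from '@aws-amplify/cli-extensibility-helper'

export function override(resources: AmplifyS3ResourceTemplate) {
  // Illustrative values: expire objects after 90 days via a lifecycle rule
  resources.s3Bucket.lifecycleConfiguration = {
    rules: [
      {
        id: 'ExpireOldObjects',
        status: 'Enabled',
        expirationInDays: 90,
      },
    ],
  }
}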

AWS S3 file recently uploaded not updated

I upload a file to S3 through the AWS Console, and I can see it there, but what gets served is not updated unless I execute this command on the CLI:
aws cloudfront create-invalidation --distribution-id E1XXXXXXX --paths "/*"
where E1XXXXXXX is the ID of the CloudFront distribution.
I have a user who will not use the CLI and only has Console access to S3, so he can only do two things:
upload files to some bucket
delete files from that bucket
But how can I get the file he is uploading/replacing to be refreshed/updated without that CLI command?
Or how can I change the TTL on CloudFront for a specific bucket? By default I see a policy with this:
Assuming you have a behavior set up that maps your distribution to the S3 origin, you should be able to set your default TTL there; that is the TTL that will apply to the S3 content.
If that doesn't work, you can attach a Lambda function to the S3 object create event and create an invalidation for changed objects.
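If you go the Lambda route, here is a minimal sketch of such a function (not taken from the question), written for the Node.js runtime with the AWS SDK v3; it assumes the function is subscribed to the bucket's ObjectCreated events and that a DISTRIBUTION_ID environment variable holds your CloudFront distribution ID:
// Invalidate the CloudFront cache for every object that was just created or replaced in S3.
import { CloudFrontClient, CreateInvalidationCommand } from "@aws-sdk/client-cloudfront";
import type { S3Event } from "aws-lambda";

const cloudfront = new CloudFrontClient({});

export const handler = async (event: S3Event): Promise<void> => {
  // S3 event keys are URL-encoded; decode each one and turn it into a CloudFront path.
  const paths = event.Records.map(
    (record) => "/" + decodeURIComponent(record.s3.object.key.replace(/\+/g, " "))
  );

  await cloudfront.send(
    new CreateInvalidationCommand({
      DistributionId: process.env.DISTRIBUTION_ID, // assumed environment variable
      InvalidationBatch: {
        // CallerReference must be unique for each invalidation request.
        CallerReference: `s3-upload-${Date.now()}`,
        Paths: { Quantity: paths.length, Items: paths },
      },
    })
  );
};
The Lambda's execution role also needs permission for cloudfront:CreateInvalidation.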

How do you enable S3 Object Logging to CloudTrail using the AWS CLI?

It's possible to do object logging on an S3 bucket to CloudTrail using the following guide, but this is through the console.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
I've been trying to figure out a way to do this via the CLI, since I want to do this for many buckets, but haven't had much luck. I've set up a new CloudTrail trail on my account and would like to map it to S3 buckets to do object logging. Is there a CLI command for this?
# This only sets up S3 server access logging to a log bucket (no link to CloudTrail here)
aws s3api put-bucket-logging
It looks like you'll need to use the CloudTrail put-event-selectors command (put_event_selectors() in boto3):
DataResources
CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions.
(dict): The Amazon S3 buckets or AWS Lambda functions that you specify in your event selectors for your trail to log data events.
Do a search for object-level in the documentation page.
Disclaimer: The comment by puji in the accepted answer works. This is an expansion of that answer with the resources.
Here is the AWS documentation on how to do this through the AWS CLI
https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-event-selectors.html
The specific CLI command you are interested in is the following, from the above documentation. The original documentation lists two objects in the same bucket; I have modified it to cover all the objects in two buckets.
aws cloudtrail put-event-selectors --trail-name TrailName --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::mybucket1/","arn:aws:s3:::mybucket2/"]}]}]'
If you want all the S3 buckets in your AWS account covered, you can use arn:aws:s3::: instead of a list of bucket ARNs, like the following.
aws cloudtrail put-event-selectors --trail-name TrailName2 --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::"]}]}]'
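You can then confirm the selectors were applied to the trail with the corresponding get command:
aws cloudtrail get-event-selectors --trail-name TrailName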

Can I disable auto-creating an S3 bucket in zappa init?

I want to get started with the Zappa framework, but when I run zappa init I get this notification:
How can I disable the auto-created S3 bucket?
You can't. Zappa first uploads your .zip file to a bucket and from there does the deployment.
From the official repo:
Zappa will automatically package up your application, [...] upload the archive to S3, create and manage the necessary Amazon IAM policies and roles, register it as a new Lambda function, create a new API Gateway resource, create WSGI-compatible routes for it, link it to the new Lambda function, and finally delete the archive from your S3 bucket. Handy!
So your options are to dig into Zappa and circumvent this on your own, or perhaps try Chalice, which does the upload directly.
@mislav is correct that Zappa does need an S3 bucket, but one only gets auto-created if you don't specify one. Simply provide a valid bucket name at the prompt, and Zappa will use that bucket instead of creating one for you.
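For example, a zappa_settings.json along these lines (the stage name, app_function and bucket name are placeholders) makes Zappa use an existing bucket instead of creating one:
{
    "production": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "s3_bucket": "my-existing-deployment-bucket"
    }
}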

Send download link to an Amazon S3 bucket folder

I need to send someone a link to download a folder stored in an Amazon S3 bucket. Is this possible?
You can do that using the AWS CLI:
aws s3 sync s3://<bucket>/path/to/folder/ .
There are many options if you need to filter specific files, etc.; check the doc page.
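If the recipient shouldn't need the CLI at all, another option (a sketch; pre-signing works per object, so you would first zip the folder into a single archive and upload that) is to generate a pre-signed URL that expires after a set time:
aws s3 presign s3://<bucket>/path/to/folder.zip --expires-in 604800
604800 seconds (7 days) is the maximum lifetime a pre-signed URL supports.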
You can also use the Minio Client, aka mc, for this. It is open source and S3-compatible. The mc policy command should do this for you.
Set bucket to "download" on Amazon S3 cloud storage.
$ mc policy download s3/your_bucket
This will add a download policy on all the objects inside the bucket named your_bucket, and an object named yourobject can then be accessed with the URL below:
https://your_bucket.s3.amazonaws.com/yourobject
Hope it helps.
Disclaimer: I work for Minio