In the AWS S3 Console, you can grant permission to files for "Any Authenticated AWS User" and give "Open/Download" permissions.
What is the CLI command to do the same?
I already know the cp command (for uploading):
aws s3 cp filename s3://bucket/folder/filename
But I can't figure out the --grants configuration, and the documentation is not specific about the accepted values.
Bonus points if you can provide the rest of the accepted values for the --grants flag (e.g. View Permissions, Edit Permissions).
Can this be done recursively?
EDIT 1
I've found the following; however, it makes the file available to EVERYONE (public). So, where is the URI for my groups? I'm assuming it's not the same as the group's ARN.
aws s3 cp file.txt s3://my-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers
We can alternatively do:
aws s3 cp file.txt s3://my-bucket/ --grants full=emailaddress=user@example.com
That will provide full rights to the account associated with that email address.
Still no way to mirror "Any Authenticated AWS User" (I am assuming this means authenticated within my account).
You should use the --acl parameter to get the canned permissions:
aws s3 cp local.txt s3://some-bucket/remote.txt --acl authenticated-read
The documentation for aws s3 cp describes what the shorthand syntax is for the CLI:
--acl (string) Sets the ACL for the object when the command is performed. If you use this parameter you must have the "s3:PutObjectAcl" permission included in the list of actions for your IAM policy. Only accepts values of private, public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write. See Canned ACL for details.
--grants appears to allow fine-tuned custom ACLs, but the syntax is more complicated, as you discovered.
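If you prefer the --grants route, the predefined group URI that matches "Any Authenticated AWS User" is the AuthenticatedUsers group; for example (placeholder bucket name, not from the original answer):
aws s3 cp file.txt s3://my-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers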
Check this link: Using High-Level s3 Commands with the AWS Command Line Interface
You may want to do:
aws s3 cp filename s3://bucket/folder/filename --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers
For recursive uploads, use the --recursive option:
When the --recursive option is used on a directory/folder with cp, mv, or rm, the command walks the directory tree, including all subdirectories. These commands also accept the --exclude, --include, and --acl options as the sync command does.
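Putting the two together, a sketch of a recursive upload that grants read access to authenticated users (the local folder and bucket names are placeholders):
aws s3 cp localfolder s3://bucket/folder --recursive --acl authenticated-read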
Related
I have an S3 bucket with millions of files copied there by a Java process I do not control. The Java process runs on an EC2 instance in "AWS Account A" but writes to a bucket owned by "AWS Account B". Account B was able to see the files but not open them.
I figured out what the problem was and requested a change to the Java process so that it writes new files with "acl = bucket-owner-full-control"... and it works! New files can be read from "AWS Account B".
But my problem is that I still have millions of files with the incorrect ACL. I can fix one of the old files easily with:
aws s3api put-object-acl --bucket bucketFromAWSAccountA --key datastore/file0000001.txt --acl bucket-owner-full-control
What is the best way to do that?
I was thinking of something like:
# Copy to TEMP folder
aws s3 sync s3://bucketFromAWSAccountA/datastore/ s3://bucketFromAWSAccountA/datastoreTEMP/ --acl bucket-owner-full-control
# Delete original store
aws s3 rm s3://bucketFromAWSAccountA/datastore/ --recursive
# Sync it back to original folder
aws s3 sync s3://bucketFromAWSAccountA/datastoreTEMP/ s3://bucketFromAWSAccountA/datastore/ --acl bucket-owner-full-control
But that is going to be very time consuming. Is there a better way to achieve this?
Could this be done at the bucket level? I mean, is there some put-bucket-acl change that allows the owner to read everything?
One option seems to be to recursively copy all objects in the bucket over themselves, specifying the ACL change to make.
Something like:
aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE
That code snippet was taken from this answer: https://stackoverflow.com/a/63804619
It is worth reviewing the other options presented in answers to that question, as it looks like there is a possibility for losing content-type tags or metadata information if you don't form the command properly.
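Another option, since put-object-acl already works for one object, is to loop it over every key without copying any data. A rough sketch, assuming Account A's credentials and the bucket/prefix names from the question (the CLI paginates the listing automatically, but this still makes one ACL call per object, so it will take a while for millions of files):
# List every key under the prefix, then reset its ACL in place
aws s3api list-objects-v2 --bucket bucketFromAWSAccountA --prefix datastore/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
    aws s3api put-object-acl --bucket bucketFromAWSAccountA --key "$key" --acl bucket-owner-full-control
done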
I am new to the AWS CLI, and I've spent a fair amount of time in the documentation, but I can't figure out how to set permissions on files after I've uploaded them. So if I uploaded a file with:
aws s3 cp assets/js/d3-4.3.0.js s3://example.example.com/assets/js/
and didn't set access permissions, I need a way to set them. Is there an equivalent to chmod 644 in the aws cli?
And for that matter is there a way to view access permission?
I know I could use the --acl public-read flag with aws s3 cp but if I didn't, can I set access without repeating the full copy command?
The awscli supports two groups of S3 actions: s3 and s3api.
You can use aws s3api put-object-acl to set the ACL permissions on an existing object.
The logic behind there being two sets of actions is as follows:
s3: high-level abstractions with file system-like features such as ls, cp, sync
s3api: one-to-one with the low-level S3 APIs such as put-object, head-bucket
In your case, the command to execute is:
aws s3api put-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js --acl public-read
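To address the other part of the question (viewing access permissions), the corresponding read operation is get-object-acl, which prints the object's owner and grants:
aws s3api get-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js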
I need to get the contents of one S3 bucket into another S3 bucket.
The buckets are in two different accounts.
I was told not to create a policy to allow access to the destination bucket from the origin bucket.
Using the AWS CLI how can I download all the contents of the origin bucket, and then upload the contents to the destination bucket?
To copy locally:
aws s3 sync s3://origin /local/path
To copy to destination bucket:
aws s3 sync /local/path s3://destination
The AWS CLI allows you to configure named profiles, which let you use a different set of credentials for each individual CLI command. This will be helpful because your buckets are in different accounts.
To create your named profiles you'll need to make sure you already have IAM users in each of your accounts and each user will need a set of access keys. Create your two named profiles like this.
aws configure --profile profile1
aws configure --profile profile2
Each of those commands will ask you for your access keys and a default region to use. Once you have your two profiles, use the aws cli like this.
aws s3 cp s3://origin /local/path --recursive --profile profile1
aws s3 cp /local/path s3://destination --recursive --profile profile2
Notice that you can use the --profile parameter to tell the cli which set of credentials to use for each command.
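The --profile parameter also works with the sync commands shown above, which has the advantage of skipping files that are already up to date if you need to rerun the transfer:
aws s3 sync s3://origin /local/path --profile profile1
aws s3 sync /local/path s3://destination --profile profile2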
I need to download files recursively from an S3 bucket. The S3 bucket allows anonymous access.
How can I list files and download them without providing an AWS access key, i.e. as an anonymous user?
My command is:
aws s3 cp s3://anonymous#big-data-benchmark/pavlo/text/tiny/rankings/uservisits uservisit --region us-east --recursive
The AWS CLI complains that:
Unable to locate credentials. You can configure credentials by running "aws configure"
You can use the --no-sign-request option:
aws s3 cp s3://big-data-benchmark/pavlo/text/tiny/rankings/uservisits uservisit --region us-east-1 --recursive --no-sign-request
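The same flag covers the listing half of the question, for example:
aws s3 ls s3://big-data-benchmark/pavlo/text/tiny/rankings/uservisits/ --region us-east-1 --no-sign-request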
You probably have to provide access keys and a secret key, even if you're doing anonymous access; I don't see an option for anonymous access in the AWS CLI.
Another way to do this is to hit the HTTP endpoint and grab the files that way.
In your case: http://big-data-benchmark.s3.amazonaws.com
You will get an XML listing of all the keys in the bucket. You can extract the keys and issue a request for each. Not the fastest thing out there, but it will get the job done.
For example: http://big-data-benchmark.s3.amazonaws.com/pavlo/sequence-snappy/5nodes/crawl/000741_0
For getting the files, curl should be enough. For parsing the XML, you can go as low-level as sed or as high-level as a proper language.
Hope this helps.
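A minimal sketch of that HTTP approach, assuming the bucket allows anonymous listing (note that a single listing request returns at most 1,000 keys; larger prefixes would need the marker parameter for paging):
# List keys under a prefix via the public endpoint, then fetch each one with curl
curl -s "http://big-data-benchmark.s3.amazonaws.com/?prefix=pavlo/text/tiny/rankings/uservisits" \
    | grep -o '<Key>[^<]*</Key>' | sed 's/<[^>]*>//g' \
    | while read -r key; do
          curl -s -O "http://big-data-benchmark.s3.amazonaws.com/${key}"
      done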
On my Amazon EC2 instance, I have a folder named uploads. In this folder I have 1000 images. Now I want to copy all the images to my new S3 bucket. How can I do this?
First option: s3cmd
Use s3cmd
s3cmd get s3://AWS_S3_Bucket/dir/file
Take a look at this s3cmd documentation
If you are on Debian or Ubuntu Linux, run this on the command line:
sudo apt-get install s3cmd
On CentOS or Fedora:
yum install s3cmd
Example of usage:
s3cmd put my.file s3://pactsRamun/folderExample/fileExample
Second option
Using the AWS CLI
Update
Like @tedder42 said in the comments, instead of using cp, use sync.
Take a look at the following syntax:
aws s3 sync <source> <target> [--options]
Example:
aws s3 sync . s3://my-bucket/MyFolder
More information and examples available at Managing Objects Using High-Level s3 Commands with the AWS Command Line Interface
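Since the question is about images specifically, a hedged variation (directory and bucket names are placeholders) that syncs only image files by combining --exclude and --include:
aws s3 sync uploads s3://my-bucket/uploads --exclude "*" --include "*.jpg" --include "*.png"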
aws s3 sync your-dir-name s3://your-s3-bucket-name/folder-name
Important: This will copy each item in your named directory into the s3 bucket folder you selected. This will not copy your directory as a whole.
Or, for one selected file, use cp (sync operates on directories):
aws s3 cp your-dir-name/file-name s3://your-s3-bucket-name/folder-name/file-name
Or you can sync the current directory in its entirety. Note that this copies everything under the current directory, preserving the folder structure inside your S3 bucket folder.
aws s3 sync . s3://your-s3-bucket-name/folder-name
To copy from EC2 to S3, use the command below on the EC2 instance's command line.
First, you have to attach an IAM role with full S3 access to your EC2 instance.
aws s3 cp Your_Ec2_Folder s3://Your_S3_bucket/Your_folder --recursive
Also note that when the AWS CLI syncs with S3 it is multithreaded and uploads multiple parts of a file at one time. The number of concurrent requests can be tuned through the CLI's S3 configuration.
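For example, the default number of concurrent requests can be raised (or lowered) with:
aws configure set default.s3.max_concurrent_requests 20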
aws s3 mv /home/inbound/ s3://test/ --recursive --region us-west-2
This can be done very simply. Follow these steps:
Open the EC2 console in AWS.
Select the instance and navigate to Actions.
Select Instance Settings and choose Attach/Replace IAM Role.
Once this is done, connect to the instance and the rest is done with the following CLI command:
aws s3 cp filelocation/filename s3://bucketname
Hence you don't need to install anything or do any extra work.
Please note that the file location refers to the local path on the instance, and bucketname is the name of your bucket.
Also note: this is possible if your instance and S3 bucket are in the same account.
Cheers.
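As a quick check (not part of the original steps), you can confirm the instance is actually picking up the role's credentials before copying:
aws sts get-caller-identity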
There is a --dryrun option available for testing.
To begin with, I would assign the EC2 instance a role that is able to read and write to S3.
SSH into the instance and perform the following:
vi tmp1.txt
aws s3 mv ./ s3://bucketname-bucketurl.com/ --dryrun
If this works, then all you have to do is create a script to upload all files with a specific extension from this folder to the S3 bucket.
I have written the following command in my script to move files older than 2 minutes from the current directory to the bucket/folder:
cd dir; ls . -rt | xargs -I FILES find FILES -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;
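A simpler variant of the same idea, assuming the goal is just "move every *.txt older than 2 minutes", is to let find do the selection directly instead of piping ls through xargs:
find dir -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;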