We are asked to upload a file to a client's S3 bucket; however, we do not have an AWS account (nor do we plan on getting one). What is the easiest way for the client to grant us access to their S3 bucket?
My recommendation would be for your client to create an IAM user for you that is used for the upload. Then, you will need to install the AWS CLI. On your client's side there will be a user whose only permission is to write to their bucket. This can be done pretty simply and will look something like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::the-bucket-name/*",
                "arn:aws:s3:::the-bucket-name"
            ]
        }
    ]
}
I have not thoroughly tested the above permissions!
Then, on your side, after you install the AWS CLI, you need two files. They both live in the home directory of the user that runs your script. The first is $HOME/.aws/config. This has something like:
[default]
output=json
region=us-west-2
You will need to ask them what AWS region the bucket is in. Next is $HOME/.aws/credentials. This will contain something like:
[default]
aws_access_key_id=the-access-key
aws_secret_access_key=the-secret-key-they-give-you
They must give you the region, the access key, the secret key, and the bucket name. With all of this you can now run something like:
aws s3 cp local-file-name.ext s3://the-client-bucket/destination-file-name.ext
This will transfer the local file local-file-name.ext to the bucket the-client-bucket, storing it there as destination-file-name.ext. They may have a different path in the bucket.
To recap:
Client creates an IAM user that has very limited permissions. Only programmatic (API) access is needed, not console access.
You install the AWS CLI
Client gives you the access key and secret key.
You configure the machine that does the transfers with the credentials
You can now push files to the bucket.
You do not need an AWS account to do this.
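If you want to sanity-check the setup before the first transfer, the identity call below (a standard AWS CLI command, nothing the client needs to set up) should print the ARN of the IAM user the client created; if it shows anything else, the wrong credentials are being picked up:
aws sts get-caller-identity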
Related
I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the awscli on it and configured my user's access key. My user has admin access and a full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to add permissions to the role or my user properly, so that I can list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS credentials set in your bash_profile or bashrc, the awscli will default to using those instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
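For anyone else hitting this: the standard AWS environment variables take precedence over ~/.aws/credentials, so a quick way to check whether your shell profile is shadowing the credentials file is something like the sketch below (the grep targets are just where mine happened to live):
env | grep '^AWS_'
grep -n 'AWS_' ~/.bash_profile ~/.bashrc
# either fix the stale values there, or clear them for the current session:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN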
It turns out I forgot I had to do MFA to get an access token to be able to operate on S3. Thank you everyone for the responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
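A quick way to confirm the role is actually being picked up (purely a check, not a required step) is to run the identity call on the instance; with an attached role it should report an assumed-role ARN rather than an IAM user, something like:
aws sts get-caller-identity
# "Arn": "arn:aws:sts::123456789012:assumed-role/MyInstanceRole/i-0abc123..."  (role name is whatever you attached)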
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
Create an IAM user with the following permission.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only from the CLI, but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you grant full privileges to perform actions on the bucket's internal objects, BUT you do not allow any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this you should allow the application to list the bucket (1st statement item) and to write objects inside the bucket (2nd statement item):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the security permissions fine-grained.
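With the two statements above in place, both of the following should now work (bucket name is a placeholder): listing needs s3:ListBucket on the bucket ARN, and uploading needs s3:PutObject on the object ARN:
aws s3 ls s3://<name-of-bucket>/
aws s3 cp local-file.txt s3://<name-of-bucket>/local-file.txt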
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had done something before that was able to bypass this. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some searching in the documentation I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
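An alternative to appending --profile to every command is exporting it once for the session; this is standard AWS CLI behavior, not specific to my setup:
export AWS_PROFILE=my.profile.name
aws sts get-caller-identity   # should now report the expected identity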
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command "aws s3 ls" from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.
I have an S3 bucket and inside it I have some folders, but the objects inside the folders are going to be created dynamically, so in simple terms the layout looks like this:
main_users/someIDNumber/uploaded
someIDNumber is dynamic and it is different every time a user is created.
Now I want to give GetObject permission on all objects inside the "uploaded" folder to all users, but only for a specific referer, which is my website.
I have tried this in my bucket policies but it doesn't work:
arn:aws:s3:::mybucketname/main_users/*/uploaded/*
also this:
arn:aws:s3:::mybucketname/main_users/*/uploaded
But I get access denied on my website side.
How can I do it?
It worked for me. I did the following:
Uploaded a file to: s3://my-bucket/main_users/42/uploaded/foo.txt
Created a stack IAM user with the policy shown below
Ran aws s3 cp s3://my-bucket/main_users/42/uploaded/foo.txt . --profile stack
The file copied successfully
The policy was:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/main_users/*/uploaded/*"
        }
    ]
}
It failed when I tried to copy a file with:
aws s3 cp s3://my-bucket/main_users/24/something/foo.txt . --profile stack
Please note that if you are trying to list (ls) a folder, you will need a different policy. The above was a test of GetObject, not of listing a bucket.
While I put these policies on a specific IAM user, it should work the same in a Bucket Policy. Just make sure that you have edited S3 Block Public Access to enable the content of the bucket to be publicly accessible.
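For the referer part of the question, the usual pattern is a bucket policy statement with an aws:Referer condition. A sketch, reusing the bucket/path from the question and a placeholder site URL (note that the Referer header is easy to spoof, so treat it as a soft restriction rather than real security):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucketname/main_users/*/uploaded/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "https://www.example.com/*"
                }
            }
        }
    ]
}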
I'm using S3 to store my backups. I'm doing that via aws s3 and cron.
At the moment my bucket is public.
I want to set up the ACL in such a way that only I may do CRUD + ListAll operations on it, and that's it. I've read their documentation but it's too complicated, whereas I need a simple thing. How can I do this?
my bash script on my VPS server should have access to the S3 bucket via the API; probably there must also be a restriction by IP
I should have access to my bucket via the web console/S3 website from any place and any IP
the bucket shouldn't be accessible to anyone else
You can create an S3 bucket policy so that only your PUBLIC IP address can access your bucket.
Here is an example policy. Change the bucket name to your bucket name. Change the IP address to your public IP address:
{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:*"],
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "203.0.113.10/32"
                }
            }
        }
    ]
}
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Select the bucket that you want to attach the above policy.
Choose Permissions.
Choose Edit Bucket Policy.
Copy the above policy into the Bucket Policy Editor window.
Substitute the values (bucket name, IP address) in the bucket policy.
Choose Save and then Close.
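If you prefer to do this from the CLI instead of the console, the same policy can be applied with put-bucket-policy (assuming you saved it locally as policy.json):
aws s3api put-bucket-policy --bucket examplebucket --policy file://policy.json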
You're using the AWS command line tool (awscli) to sync files to S3. You are presumably supplying credentials to awscli (very few people use the awscli unauthenticated).
So, assuming that you are authenticated, why have you made your S3 bucket public? The correct thing to do here is to ensure that the credentials you are using are associated with an IAM policy that allows you to access the S3 bucket. Then remove the S3 bucket policy.
If, for some reason, you really do want to be able to access the S3 bucket without authentication (not a good idea, generally) then you can make things somewhat safer by applying a bucket policy allowing that level of access from only your IP address.
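As a rough sketch of what that IAM policy could look like for the "CRUD + ListAll" requirement in the question (bucket name reused from the example above; trim the actions to whatever your backup script actually needs):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::examplebucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::examplebucket/*"
        }
    ]
}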
Hope this helps.
I've been reading multiple posts like this one about how to transfer data with the aws cli from one S3 bucket to another using different accounts, but I am still unable to do so. I'm sure it's because I haven't fully grasped the concepts of account + permission settings in AWS yet (e.g. IAM account vs. access key).
I have a vendor that gave me a user called "Foo" and account number "123456789012" with 2 access keys to access their S3 bucket "SourceBucket" in eu-central-1. I created a profile on my machine with the access key provided by the vendor called "sourceProfile". I have my S3 bucket called "DestinationBucket" in us-east-1 and I set the bucket policy to the following.
{
    "Version": "2012-10-17",
    "Id": "Policy12345678901234",
    "Statement": [
        {
            "Sid": "Stmt1487222222222",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Foo"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::DestinationBucket/",
                "arn:aws:s3:::DestinationBucket/*"
            ]
        }
    ]
}
Here comes the weird part. I am able to list the files and even download files from the "DestinationBucket" using the following command lines.
aws s3 ls s3://DestinationBucket --profile sourceProfile
aws s3 cp s3://DestinationBucket/test ./ --profile sourceProfile
But when I try to copy anything to the "DestinationBucket" using the profile, I get an Access Denied error.
aws s3 cp test s3://DestinationBucket --profile sourceProfile --region us-east-1
upload failed: ./test to s3://DestinationBucket/test An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Did I set up the bucket policy, especially the list of actions, right? How can ls and cp from the destination to local work, but cp from local to the destination bucket not work?
Because AWS makes it so that the parent account holder (the account that owns user Foo) must also do the delegation: your bucket policy alone is not enough, the vendor's account also has to allow that user to write to your bucket.
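Concretely, a sketch of the identity policy the vendor (account 123456789012) would need to attach to user Foo on their side, on top of your bucket policy (bucket name as in the question):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::DestinationBucket/*"
        }
    ]
}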
Actually, besides delegating access to that particular access-key user, you can also choose to set up replication on the bucket, as stated here.
I have a bucket on S3, and a user who has been given full access to that bucket.
I can perform an ls command and see the files in the bucket, but downloading them fails with:
A client error (403) occurred when calling the HeadObject operation: Forbidden
I also attempted this with a user granted full S3 permissions through the IAM console. Same problem.
For reference, here is the IAM policy I have:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}
I also tried adding a bucket policy, even making the bucket public, and still no go... Also, from the console, I tried to set individual permissions on the files in the bucket, and got an error saying I cannot view the bucket, which is strange, since I was viewing it from the console when the message appeared, and I can ls anything in the bucket.
EDIT: the files in my bucket were copied there from another bucket belonging to a different account, using credentials from my account. May or may not be relevant...
2nd EDIT: I just tried to upload, download and copy my own files to and from this bucket from other buckets, and it works fine. The issue is specifically with the files placed there from another account's bucket.
Thanks!
I think you need to make sure that the permissions are applied to the objects when moving/copying them between buckets, by using the "bucket-owner-full-control" ACL.
Here are the details about how to do this when moving or copying files as well as retroactively:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
Also, you can read about the various predefined grants here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
The problem here stems from how you get the files into the bucket. Specifically, the credentials you have and/or privileges you grant at the time of upload. I ran into a similar permissions issue when I had multiple AWS accounts, even though my bucket policy was quite open (as yours is here). I had accidentally used credentials from one account (call it A1) when uploading to a bucket owned by a different account (A2). Because of this, A1 kept the permissions on the object and the bucket owner did not get them. There are at least 3 possible ways to fix this in this scenario at the time of upload:
Switch accounts. Run export AWS_DEFAULT_PROFILE=A2 or, for a more permanent change, modify ~/.aws/credentials and ~/.aws/config to move the correct credentials and configuration under [default]. Then re-upload.
Specify the other profile at time of upload: aws s3 cp foo s3://mybucket --profile A2
Open up the permissions to bucket owner (doesn't require changing profiles): aws s3 cp foo s3://mybucket --acl bucket-owner-full-control
Note that the first two ways involve having a separate AWS profile. If you want to keep two sets of account credentials available to you, this is the way to go. You can set up a profile with your keys, region, etc. by running aws configure --profile Foo. See here for more info on Named Profiles.
There are also slightly more involved ways to do this retroactively (post upload) which you can read about here.
To correctly set the appropriate permissions for newly added files, add this bucket policy:
[...]
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/their-user"
    },
    "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::my-bucket/*"
}
Your bucket policy is even more open, so that's not what's blocking you.
However, the uploader needs to set the ACL for newly created files. Python example:
import boto3

client = boto3.client('s3')

local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'

# Grant the bucket owner full control over the newly uploaded object
client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)
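For comparison, the equivalent upload with the AWS CLI (same assumed local path, bucket, and key as the Python sketch above) would be:
aws s3 cp /home/me/data.csv s3://my-bucket/exports/data.csv --acl bucket-owner-full-control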