AWS S3 Transfer Acceleration status not alterable

I need to activate Transfer Acceleration on one of my S3 buckets, but I can't, apparently because of an account limitation.
Here's what I've tried so far:
I created an IAM user, gave it AdministratorAccess, created a bucket, and tried to enable Transfer Acceleration, but got this (via the CLI):
An error occurred (AccessDenied) when calling the PutBucketAccelerateConfiguration operation: Access Denied
The same thing via the console gives the same error.
The same thing with the root account (I assume the root account has all the permissions).

Is this still relevant?
If so, from the official docs:
Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands.
[adminuser]
aws_access_key_id = adminuser access key ID
aws_secret_access_key = adminuser secret access key
region = aws-region
Does this work for you?
aws s3 ls --profile adminuser
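
If the profile checks out, you can retry enabling Transfer Acceleration with the same credentials. A minimal boto3 sketch, assuming the adminuser profile above and a placeholder bucket name:

import boto3

# use the named profile from the config snippet above
session = boto3.Session(profile_name='adminuser')
s3 = session.client('s3')

# boto3's equivalent of PutBucketAccelerateConfiguration
s3.put_bucket_accelerate_configuration(
    Bucket='my-bucket',  # placeholder bucket name
    AccelerateConfiguration={'Status': 'Enabled'},
)

# confirm the new status
print(s3.get_bucket_accelerate_configuration(Bucket='my-bucket'))

Note that Transfer Acceleration also requires a DNS-compliant bucket name without dots.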

Related

s3:ListAllMyBuckets works on the AWS console but not on the CLI

Context: I have configured the right AWS access key and secret key.
I can see bucket contents on the AWS console, but not via the AWS CLI.
Here's my boto3 code:
import boto3
# Enter the name of your S3 bucket here
bucket_name = 'xxxx'
# Enter the name of the region where your S3 bucket is located
region_name = 'ap-southeast-1'
# Create an S3 client
s3 = boto3.client('s3', region_name=region_name)
# List all the objects in the bucket
objects = s3.list_objects(Bucket=bucket_name)
# Print the names of all the objects in the bucket
for obj in objects['Contents']:
    print(obj['Key'])
I have "s3:List*" under my AWS-policy. What am I missing?
I am trying list all buckets using aws-cli it works using aws-console but not cli. I have rechecked my aws-secret/acess key, everything is right.
EDIT: aws-cli throws error
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Usually this kind of error happens when you have multiple AWS profiles on your PC and you are using the wrong profile to make the call to AWS.
Make sure the default profile is the profile that has access to the AWS account.
You can also use aws s3 ls --profile PROFILE_NAME to get the list of buckets if you have multiple profiles.
Run aws sts get-caller-identity to see the current caller identity.
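
The same profile selection works in boto3; a short sketch, where the profile name is a placeholder:

import boto3

# pick a named profile explicitly instead of relying on the default
session = boto3.Session(profile_name='PROFILE_NAME')  # placeholder name
print(session.client('sts').get_caller_identity()['Arn'])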
From ListBuckets - Amazon Simple Storage Service:
Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission.
The wording is a bit confusing, but:
ListBuckets returns a list of the names of S3 buckets in your AWS Account
ListObjects returns a list of objects in a particular S3 bucket
Your Python code is calling list_objects().
The AWS CLI error is saying ListBuckets operation: Access Denied, which suggests that you are trying to obtain a list of buckets, rather than a list of objects.
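
To make the distinction concrete, here is a small boto3 sketch of the two calls and the permission each one needs (bucket name taken from the question):

import boto3

s3 = boto3.client('s3')

# ListBuckets: needs s3:ListAllMyBuckets; returns bucket names in the account
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])

# ListObjects: needs s3:ListBucket on the specific bucket
response = s3.list_objects(Bucket='xxxx')
for obj in response.get('Contents', []):
    print(obj['Key'])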

I deleted my AWS IAM user by mistake: how do I recover?

Stupidly enough, I deleted my default AWS IAM user by mistake!
I used it, for example, to do aws s3 sync...
Now the error I get is:
$ aws s3 sync build/ s3://mybucket.mydomain.com
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Is there a way to recover?
I think I need instructions on how to create a new user with sufficient permissions so that my local AWS CLI can do aws s3 sync ...
UPDATE: I just created a new user in the AWS console and added a policy (to start with) to list my bucket. The problem is I don't know how to attach my AWS CLI to that new user... :-(
If you are the only person using this AWS Account, then add the AdministratorAccess Policy to your IAM User. That will grant complete access.
Then, in the Security credentials tab of the IAM User click Create access key. Copy the Access Key and Secret Access Key.
On the command line, run aws configure and provide those keys to configure the user.
Test with: aws s3 ls
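
As a quick sanity check that the CLI (and boto3) now resolves the new user, a one-liner sketch:

import boto3

# prints the ARN of whatever identity the default credentials map to;
# after aws configure, it should be the new IAM user
print(boto3.client('sts').get_caller_identity()['Arn'])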

How do I use the IAM role assigned to an ec2 instance with boto3?

I'm trying to run a Python script on an EC2 instance that is meant to upload files to S3. I have created an IAM role that has full permissions to access and alter that S3 bucket. If I run an AWS CLI command on the EC2 instance, it works fine, for example:
$ aws s3 ls my-bucket
2020-03-03 11:00:46 4100 myfile.json
But, when I try to use boto3 to write to the bucket, it fails:
import boto3

path = 'myfile.json'  # local file to upload

session = boto3.session.Session()
s3_client = session.client('s3')
s3_client.upload_file(path, 'my-bucket', 'myfile.json')
I get this error:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
How do I tell boto3 to use the EC2 instance's IAM profile, which should give it permission?
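
For what it's worth, boto3 falls back to the EC2 instance profile automatically when no environment variables or ~/.aws/credentials entries take precedence, so stale local credentials are a common culprit. A diagnostic sketch, assuming a recent botocore:

import boto3

# shows which provider actually supplied the credentials;
# 'iam-role' means the EC2 instance profile is in use
session = boto3.session.Session()
creds = session.get_credentials()
print(creds.method)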

`An error occurred (InvalidToken) when calling the ListBuckets operation: The provided token is malformed or otherwise invalid.` w/`aws s3 ls`

I successfully authenticate with two-factor, but when using aws s3 ls I keep getting:
An error occurred (InvalidToken) when calling the ListBuckets operation: The provided token is malformed or otherwise invalid.
And I do have admin rights.
The issue was that I wasn't passing --region in, e.g. aws s3 --region us-gov-west-1 ls. I suppose this could be set with an environment variable too (AWS_DEFAULT_REGION). That error message is a candidate for improvement.
This error also occurs when the AWS CLI reads the aws_session_token and aws_security_token declared in the ~/.aws/credentials file, which might be associated with a previously used account. Removing both and leaving just the access key and secret key associated with the account that owns the bucket will let the AWS CLI establish the connection.
Delete the ~/.aws/credentials file from your user account and reconfigure the AWS CLI.
If it is still associated with another account, there is a high chance of this type of error.
Run aws configure.
You may leave the access key and access key ID blank if you have an IAM role attached.
Set a value for 'region'.
Now you should be able to run 'aws s3 ls' successfully.
Otherwise, run 'aws s3 ls --region <region>'.
If you are using AWS Single Sign-On, you can pass --profile <profile_name> and it should solve the issue.
In the ~/.aws/credentials file, remove the session token and it will work.
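
Concretely, the stale entries to remove look like this in ~/.aws/credentials (key values elided):

[default]
aws_access_key_id = ...
aws_secret_access_key = ...
# remove these two lines if they are left over from another account:
aws_session_token = ...
aws_security_token = ...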

S3 cp AccessDenied from AWS cli with root keys

I have the AWS CLI installed on an EC2 instance, and I configured it by running aws configure and giving it my AWSAccessKeyId and AWSSecretKey keys, so if I run the command aws s3 ls it returns the name of my S3 bucket (call it "mybucket").
But, if I then try aws s3 cp localfolder/ s3://mybucket/ --recursive I get an error that looks like
A client error (AccessDenied) occurred when calling the CreateMultipartUpload operation: Anonymous users cannot initiate multipart uploads. Please authenticate.
I thought that by running aws configure and giving it my root keys I was effectively giving the AWS CLI everything it needs to authenticate? Is there something I am missing regarding copying to an S3 bucket, as opposed to listing buckets?
Thought I would add a very similar issue I had, where I could list buckets but could not write to a given bucket, getting the error:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
If the bucket uses server-side encryption you'll need to add the --sse flag to be able to write to this bucket.
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
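
In boto3, the equivalent is passing ServerSideEncryption through ExtraArgs. A sketch, where the bucket and file names are placeholders and 'aws:kms' assumes the bucket enforces SSE-KMS:

import boto3

s3 = boto3.client('s3')

# upload_file switches to multipart uploads for large files, which is
# where the CreateMultipartUpload call happens under the hood
s3.upload_file(
    'localfolder/myfile.json', 'mybucket', 'myfile.json',
    ExtraArgs={'ServerSideEncryption': 'aws:kms'},  # or 'AES256'
)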
Root access keys and secret keys have full control and full privileges to interact with AWS. Try running aws configure again to recheck the settings, then try again.
PS: using root access keys is strongly discouraged; consider creating an IAM user (which can take admin privileges, like root) and using its keys instead.
If you have the environment variables AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_REGION set, the AWS CLI gives them higher precedence than the credentials you specify with aws configure.
So, in my case, the bash command unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY solved the problem.
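
A quick way to see which source won is to check what boto3 resolved (aws configure list shows the same for the CLI). A small diagnostic sketch:

import os
import boto3

# environment variables win over ~/.aws/credentials for the CLI and boto3 alike
for name in ('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION'):
    print(name, 'set' if os.environ.get(name) else 'not set')

# e.g. 'env' vs. 'shared-credentials-file'
print(boto3.session.Session().get_credentials().method)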