Access Denied s3cmd from an EC2 machine - amazon-web-services

I'm trying to set up log rotation for an nginx server that I'm using as a reverse proxy on an EC2 Ubuntu instance.
I want to store the rotated logs in an S3 bucket, but I only get "Access Denied... Are you sure your keys have ListAllMyBuckets permissions?" errors when I try to configure the s3cmd tools.
I'm pretty sure that my credentials are configured correctly in IAM; I've tried at least five different sets of credentials (even the root credentials) with the same result. Listing all of my buckets from my local computer with the AWS CLI tools and the same credentials works fine, so it puzzles me that I don't have any access only on my EC2 instance.
This is what I run:
which s3cmd
/usr/local/bin/s3cmd
s3cmd --configure --debug
Access Key: **************
Secret Key: *******************************
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
and this is the result:
...
DEBUG: ConnMan.put(): connection put back to pool (http://s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: x-amz-id-2: nMI8DF+............
DEBUG: HttpHeader: server: AmazonS3
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: x-amz-request-id: 5912737605BB776C
DEBUG: HttpHeader: date: Wed, 23 Apr 2014 13:16:53 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: ErrorXML: Code: 'AccessDenied'
DEBUG: ErrorXML: Message: 'Access Denied'
DEBUG: ErrorXML: RequestId: '5912737605BB776C'
DEBUG: ErrorXML: HostId: 'nMI8DF+............
ERROR: Test failed: 403 (AccessDenied): Access Denied
ERROR: Are you sure your keys have ListAllMyBuckets permissions?
The only thing in front of my nginx server is a load balancer, but I can't see why that would interfere with my request.
Could it be something else that I've missed?

Please check the permissions of the IAM user whose keys you are using.
The steps would be:
In the AWS console, go to the IAM panel.
IAM users > select that user > in the bottom menu, the 2nd tab is Permissions.
Attach a user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::YOUR-Bucket-Name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-Bucket-Name/*"
    }
  ]
}
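If you'd rather attach the policy from the command line, the same thing can be done with the AWS CLI; the user name, policy name, and file name below are placeholders:
aws iam put-user-policy --user-name MyS3cmdUser --policy-name s3cmd-s3-access --policy-document file://s3cmd-policy.json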
Let me know how it goes

Please don't trust the --configure switch:
I was facing the same problem.
It was showing 403 during --configure, but at the end I saved the settings anyway and then tried an upload:
ERROR: Test failed: 403 (AccessDenied): Access Denied
Retry configuration? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
# s3cmd put MyFile s3://MyBucket/
and it worked.

s3cmd creates a file called .s3cfg in your home directory when you set it up. I would make sure you put this file somewhere your logrotate script can read it, and use the -c flag.
For example, to upload the file logfile.txt to the logbucket bucket:
/usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put logfile.txt s3://logbucket
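Tying this back to the original question, a minimal logrotate sketch that uploads the freshly rotated nginx log might look like the following; the log path, bucket name, and .s3cfg location are assumptions, not details from the question:
/var/log/nginx/access.log {
    daily
    rotate 7
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        # with delaycompress, the newest rotated log is still uncompressed at .1
        /usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put /var/log/nginx/access.log.1 s3://logbucket/
        # ask nginx to reopen its log files
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}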

What version of s3cmd are you using?
I tried s3cmd 1.1, and it seems s3cmd 1.1 does not work with IAM roles.
But someone says s3cmd 1.5 alpha2 has support for IAM roles (http://t1983.file-systems-s3-s3tools.file-systemstalk.info/s3cmd-1-5-0-alpha2-iam-roles-supportincluded-t1983.html).
I have tried s3cmd 1.5 beta1 (https://github.com/s3tools/s3cmd/archive/v1.5.0-beta1.tar.gz), and it works fine with IAM roles.
So there are two ways for s3cmd to access an S3 bucket:
Using an access key and secret key
You need to set up a config file at /root/.s3cfg (the default path) as below:
access_key=xxxxxxxx
secret_key=xxxxxxxxxxxxxxxxxxxx
Note that you only need to set these two key-value pairs in .s3cfg; no other keys are needed.
Using an IAM role with an S3 policy (s3cmd > 1.5 alpha2)
You need to attach an IAM role to the EC2 instance; this role might have a policy like the one below:
{
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": "*"
}
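As a quick sanity check that the instance actually has role credentials available, you can query the EC2 instance metadata service from the instance itself; the first call lists the attached role name and the second dumps its temporary keys:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>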

I found a solution to my problem by deleting every installation of s3cmd, making sure apt-get was up to date, and then installing s3cmd from apt-get again. After configuring it (the same way as before) it worked out just fine!
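For reference, the clean reinstall on Ubuntu would look something like this; the rm line only matters if an earlier copy was installed outside of apt (as in the question, where which s3cmd pointed at /usr/local/bin):
sudo apt-get remove --purge s3cmd
sudo rm -f /usr/local/bin/s3cmd   # remove any manually installed copy
sudo apt-get update
sudo apt-get install s3cmd
s3cmd --configure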

I also had a similar problem. Even after associating my EC2 instance with an IAM role that had a full S3 access policy, my s3cmd was failing because there wasn't any .s3cfg file on the instance. I fixed it by updating the version of my s3cmd.
sudo pip install s3cmd==1.6.1
Did the trick!

Related

AWS Lightsail can't update Route53 for Let's Encrypt certs

I'm trying to get automatic Let's Encrypt cert renewal working on an AWS Lightsail WordPress instance with Route53.
I used these official instructions for adding an SSL cert to an AWS Lightsail WordPress website.
Site SSL is working fine, but I was looking for a way to automate the re-issue and found the certbot plugin certbot-dns-route53.
I created a separate non-admin AWS user just for the updates, and added the policy suggested by the official certbot docs:
{
  "Version": "2012-10-17",
  "Id": "certbot-dns-route53 sample policy",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/MYZONEID"
      ]
    }
  ]
}
I placed the API access information in both environment variables and the ~/.aws/config file.
I executed the command:
sudo certbot certonly --dns-route53 --dns-route53-propagation-seconds 30 --dry-run -d 'domain.com,*.domain.com'
And I get the following error:
An error occurred (AccessDenied) when calling the ListHostedZones operation: User: arn:aws:sts::548507530525:assumed-role/AmazonLightsailInstanceRole/i-00ff79ff762ac0576 is not authorized to perform: route53:ListHostedZones To use certbot-dns-route53, configure credentials as described at https://boto3.readthedocs.io/en/latest/guide/configuration.html#best-practices-for-configuring-credentials and add the necessary permissions for Route53 access
I attempted ~/.aws/config & credentials files as well.
config:
[profile cross-account]
role_arn=arn:aws:iam::XXXXXXXXXXX:user/domain_cert_update
source_profile=default
credentials:
[default]
aws_access_key_id=ACCESSKEY
aws_secret_access_key=SECRETKEYHERE
I'm not sure how to get the Lightsail instance i-00ff79ff762ac0576 assigned to the policy correctly. I've read through the config guide it links to and it doesn't help.
Better to use https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-enabling-https-on-wordpress
If you use Certbot, you have to renew the certificate manually. But use the above tutorial and you can set up HTTPS using the bncert tool.
I have set up HTTPS and redirected HTTP to HTTPS on 2 of my websites using the above documentation.
Thanks for the question; it pointed me in the right direction to get this working myself.
I created the AWS CLI config files in the home directory at /home/bitnami/.aws. For me the issue was that certbot wasn't actually finding these config files, because it was looking in root's home directory instead of the bitnami home directory.
I added a link using the root user:
sudo -s
ln -s /home/bitnami/.aws/ ~/.aws
After that certbot could find the config files and the command was successful.
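With the symlink in place, a dry-run renewal is a safe way to confirm the whole chain works (run via sudo, since certbot resolves ~/.aws from the invoking user):
sudo certbot renew --dry-run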

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed awscli on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance, it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to give the role or my user the proper permissions to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS credentials set in your .bash_profile or .bashrc, the awscli will default to using these instead.
I changed the environment variables in .bash_profile and .bashrc to the proper keys and everything started working.
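A quick way to check for this is to grep your shell startup files for the AWS variables the CLI honors; credentials supplied through the environment take precedence over both ~/.aws/credentials and an instance role:
grep -E 'AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY|SESSION_TOKEN|PROFILE|DEFAULT_PROFILE)' ~/.bashrc ~/.bash_profile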
Turns out I forgot that I had to do MFA to get an access token before I could operate on S3. Thank you everyone for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
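If you prefer the CLI to the console for attaching the role to a running instance, the call looks like this; the instance ID and instance profile name are placeholders:
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyS3InstanceProfile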
Create an IAM user with this permission:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucketName/*"
    }
  ]
}
Save the Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a misconfiguration of the access permissions on the bucket.
For example, with the setup below you're granting full privileges to act on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::<name-of-bucket>/*"
      ]
    }
  ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this, you should allow the application to access the bucket (1st statement) and to edit all objects inside the bucket (2nd statement):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<name-of-bucket>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<name-of-bucket>/*"
      ]
    }
  ]
}
There are shorter configurations that might solve the problem, but the one above also tries to keep the permissions fine-grained.
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had done something before that was able to bypass this. Back in September I set the profile with:
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some searching of the documentation I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
I have a Windows machine with CyberDuck from which I was able to access the destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.

fatal error: An error occurred (403) when calling the ListObjects operation: Forbidden | Resuming an aws s3 sync

So I have a StorReduce appliance and was running a sync job which died halfway through:
aws s3 sync /jobs/ads/ --endpoint-url=http://data:5080 s3://chicago-stagin
The problem now is that I can't resume the job or resubmit the sync; I get the following when running the above command:
fatal error: An error occurred (403) when calling the ListObjects operation: Forbidden
I can cp a file up to the bucket with no problem:
aws s3 cp /var/log/messages --endpoint-url=http://data:5080 s3://chicago-staging
upload: ./messages to s3://chicago-staging/messages
There are no security policies on the bucket, and I can read files too:
aws s3 ls --endpoint-url=http://data.chi.themill.com:5080 s3://chicago-staging
PRE .ARCHIVE/
PRE .common/
Any ideas?
UPDATE:
Fixed by adding this into ~/.aws/config:
[default]
s3 =
    signature_version = s3
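Equivalently, the CLI can write that nested setting for you instead of editing the file by hand:
aws configure set default.s3.signature_version s3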
If you have already added the permissions and it is still not working, run the following command in your terminal:
aws configure
It will ask you for the key ID and secret; you can skip the other questions. Once done, it should work fine.
Make sure you have given ListBucket permission to the chicago-staging bucket itself.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "<id>",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::chicago-staging",
        "arn:aws:s3:::chicago-staging/*"
      ]
    }
  ]
}

AWS CodeCommit cross-account respository access not working

I have an AWS IAM user in my account (let's call it originAccount) that has access to another account (targetAccount) and I'm trying to clone a CodeCommit repository that exists in targetAccount using my originAccount credentials on my Windows machine.
I can log in and switch roles to targetAccount just fine, that's how I created the repository in the first place. I have full access to targetAccount except for billing. I do have MFA enabled on my IAM user. I have tried turning this off temporarily but it didn't help. However, with MFA turned off, I can do aws s3 ls successfully without error for targetAccount.
Neither SSH nor HTTPS works. I can clone it with static credentials as a test, but that isn't acceptable long term. I'm amazed at how difficult this stuff is on AWS...
My user in originAccount has this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::000000000000:role/Administrator"
      ]
    }
  ]
}
The Administrator role has access to everything. targetAccount has this trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/MyUser"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
I tried removing the MFA bit; didn't help. I even disabled MFA on my user account and that didn't help either.
My .aws\credentials file contains these lines:
[default]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
[targetAccount]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
I am using the environment variable to set the profile to use, e.g.:
set AWS_DEFAULT_PROFILE=targetAccount
My .gitconfig contains:
[credential "https://git-codecommit.us-east-1.amazonaws.com"]
    helper = !'C:\\path\\to\\git-credential-AWSSV4.exe' --profile='targetAccount'
    UseHttpPath = true
Originally that was using the default profile but that didn't work either.
Questions:
Is this possible? What am I doing wrong?
Can both SSH and HTTPS work for cross-account access? The docs are confusing, but they seem to suggest that only HTTPS with the credential helper will work (allegedly).
Will this work with multi-factor authentication? If not, can that be turned off only for CodeCommit?
Unfortunately none of the other questions I found had answers that worked for me...
I uninstalled and reinstalled Git for Windows just to be sure its credential manager wasn't installed (I couldn't remember), but it still doesn't work; it says repository '...' not found. I can clone repositories in originAccount over HTTPS, though.
The credential helper is the only way to authenticate with CodeCommit using temporary session credentials like the ones acquired from AssumeRole. SSH will not work, as SSH authentication is done by verifying an uploaded public key, which is not temporary.
I find that the following pattern is an easy one to follow for authentication using AssumeRole. In your .aws\config file (the [profile ...] header syntax belongs in the config file rather than the credentials file):
[profile target_account]
role_arn = arn:aws:iam::000000000000:role/Administrator
mfa_serial = arn:aws:iam::000000000000:mfa/MFA_DEVICE
source_profile = origin_account
[profile origin_account]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
This will allow AWS client tools to get temporary session credentials for target_account by using AssumeRole from origin_account, effectively assuming the role of Administrator on target_account.
Your .gitconfig should specify the target_account profile.
If you are using msysgit, you should try upgrading to Git for Windows 2.x. The credential helper will send : as the username and an AWS V4 signature as the password. Session keys are usually pretty long, and in msysgit curl will truncate the username to 256 characters, which will not include the complete session key.
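Before touching git at all, you can confirm that the role assumption works (this will prompt for your MFA token):
aws sts get-caller-identity --profile target_account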
I hope this helps!

Can't upload files to s3 with awscli - access denied

Here are the steps I've taken:
Create an S3 bucket and add the permission policy for public reads (see below).
Enable static web hosting and set the root to index.html (which hasn't been uploaded yet).
Try to use the web interface to upload a folder, but it's not supported on Linux.
Run aws configure and enter my access token, secret token, and region.
Edit ~/.aws/config and add signature_version = s3v4 (this is to avoid an error if I leave it out).
Try aws s3 sync . s3://my-music
See this error:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
With no other info.
The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-music/*"
    }
  ]
}
I have to say I don't know much about policies / IAM on AWS. I really just want to upload a static website and visit it; shouldn't be too hard, right? Except the website has a lot of files, and I need to bulk-upload them somehow. If it makes any difference, I did create an IAM user, and those are the credentials I am using for the access/secret tokens.
I had failed to attach a policy to my new IAM user.
I visited https://console.aws.amazon.com/iam/home and clicked the new user, which brought up a button to attach a policy.
I added the first policy there ("administrator access"), and that was all I needed to do.
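That said, "administrator access" is far broader than a sync job needs; a narrower user policy in the spirit of the earlier answers would be a sketch like this, using the my-music bucket name from the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-music"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-music/*"
    }
  ]
}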