Grant S3 access to EC2 instance (simplest case) - amazon-web-services

I tried the simplest case, following the AWS documentation. I created a role, assigned it to the instance, and rebooted the instance. To test access interactively, I logged on to the Windows instance and ran aws s3api list-objects --bucket testbucket. I got the error An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied.
The next test was to create an .aws/credentials file and add a profile that assumes the role. I modified the role (the one assigned to the instance) and added permission for any user in the account to assume it. When I run the same command as aws s3api list-objects --bucket testbucket --profile assume_role, the objects in the bucket are listed.
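One way to define such a profile is in .aws/config (a minimal sketch; the profile name matches the command above, and test-role is a placeholder for the role assigned to the instance):

[profile assume_role]
role_arn = arn:aws:iam::111111111111:role/test-role
source_profile = default

Here source_profile = default assumes the default profile holds the user credentials that are allowed to assume the role.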
Here is my test role's trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Sid": "UserCanAssumeRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "111111111111"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
The role has only one permission policy attached, AmazonS3FullAccess.
When I switch to the role in the AWS console, I can see the content of the S3 bucket (and no other action is allowed in the console).
My assumption is that the EC2 instance does not assume the role.
How can I pinpoint where the problem is?

The problem was with the Windows proxy.
I checked the proxy environment variables; none were set. When I checked Control Panel -> Internet Options, I saw that the Proxy text box showed the proxy value, but the "Use proxy" checkbox was not checked. Next to it was the text "Some of your settings are managed by your organization." The proxy bypass list had 169.254.169.254 in it.
I ran the command in debug mode and saw that the CLI connects to the proxy, which cannot access 169.254.169.254, so no credentials get set. When I explicitly set the environment variable NO_PROXY=169.254.169.254, everything started to work.
Why the AWS CLI uses the proxy from the Windows system settings I do not understand. Worst of all, it uses the proxy but does not honor the bypass list. Lesson learned: run the command in debug mode and verify the output.
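For anyone hitting the same thing, the diagnosis and fix can be reproduced roughly like this (a sketch; the metadata curl check is an extra sanity step I would add, and testbucket is the bucket from above):

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
aws s3api list-objects --bucket testbucket --debug
set NO_PROXY=169.254.169.254
aws s3api list-objects --bucket testbucket

The metadata query confirms a role is attached to the instance, --debug logs every connection the CLI makes (which is where the proxy hop showed up), and NO_PROXY makes the CLI bypass the proxy for the metadata endpoint.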

Related

"An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied" when using batch jobs

I have a compute environment with 'ecsInstanceRole'. It contains the policies 'AmazonS3FullAccess' and 'AmazonEC2ContainerServiceforEC2Role'.
Since I am using the AmazonS3FullAccess policy, I assume the batch job has permission to list, copy, put, etc.
The image I am using is a custom Docker image with a startup script that runs "aws s3 ls <S3_bucket_URL>".
When I start this image on an EC2 instance, it runs fine and lists the contents of the bucket.
When I do the same as a batch job, I get the access denied error seen above.
I don't understand how this is happening.
Things I have tried so far:
Setting the bucket policy as follows:
{
    "Version": "2012-10-17",
    "Id": "Policy1546414123454",
    "Statement": [
        {
            "Sid": "Stmt1546414471931",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Account Id>:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketVersions"
            ],
            "Resource": [
                "arn:aws:s3:::"bucketname",
                "arn:aws:s3:::bucketname/*"
            ]
        }
    ]
}
Granting public access to the bucket
Quoting the reply from @JohnRotenstein because I cannot mark it as an answer:
"If you are using IAM Roles, there is no need for a Bucket Policy. (Also, there is a small typo in that policy, before bucketname, but I presume that was due to a copy & paste error.) It would appear that a role has not been assigned to your ECS task: IAM Roles for Tasks - Amazon Elastic Container Service"
Solution: I had to attach an S3 access policy to my current Job Role.
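For reference, attaching a managed S3 policy to the job role is a single CLI call (a sketch; MyBatchJobRole is a placeholder for the actual job role name):

aws iam attach-role-policy --role-name MyBatchJobRole --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

Batch jobs run as ECS tasks and use the task's job role rather than the instance's ecsInstanceRole, which is why the same image worked on a plain EC2 instance but failed as a batch job.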

How to access AWS ECR from another account's EC2 instance?

I have two accounts, a1 and a2.
I have an EC2 instance in a1, a1.ec2. It assumes some role in that account, a1.r. This role has full access to all ECR actions.
Now, I have an image registry (ECR) in a2 and would like to be able to access it from a1.ec2.
So I SSH into that instance, and to test the access I run
aws ecr describe-repositories --region <my-region> --registry-id <id of a2>
But I get the error
An error occurred (AccessDeniedException) when calling the DescribeRepositories operation: User: arn:aws:sts::<id of a1>:assumed-role/a1.r/i-075fad654b998275c is not authorized to perform: ecr:DescribeRepositories on resource: arn:aws:ecr:*:*:repository/*
However, this permission is indeed granted to the role a1.r. I verified this by accessing an ECR repository in a1 just fine.
Also, the ECR repository I would like to access has the following permission statements, so I have made sure the trouble is not caused by the ECR side in a2:
{
    "Sid": "new statement",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<id of a1>:root"
    },
    "Action": "*"
},
{
    "Sid": "new statement",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<id of a1>:role/a1.r"
    },
    "Action": "*"
}
I had a look at https://serverfault.com/questions/897392/ecr-cross-account-pull-permissions, where the solution appears to be to create cross-account roles. Although I could create such a role a2.cross-acc-r, I cannot figure out how I can assume that role for the aws ecr CLI commands. I do not want the EC2 instance to assume that role, as it resides in a different account (I am not even sure that is possible at all).
Am I lacking something basic regarding how AWS IAM works?
If you want to pull and push images from one account's EC2 instance into another account's ECR, and do not need the full aws ecr CLI functionality, you can do so through Docker.
For example, you may want your Jenkins to push built images into ECRs residing in different AWS accounts, based on the targeted environment (production, staging).
Doing so via Docker is documented at https://aws.amazon.com/premiumsupport/knowledge-center/secondary-account-access-ecr/
Put simply, in the ECR repository, you grant the other account the needed permissions.
Then you get a temporary authentication token to authorize Docker towards ECR via:
$(aws ecr get-login --registry-ids <account ID> --region <your region> --no-include-email)
After this, you can use docker pull and docker push to access it.
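For instance, a pull would then look something like this (a sketch; the repository name and tag are placeholders):

docker pull <account ID>.dkr.ecr.<your region>.amazonaws.com/my-repository:latest

Note that aws ecr get-login is the AWS CLI v1 command; on CLI v2 the equivalent is aws ecr get-login-password piped into docker login.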
I had a look at https://serverfault.com/questions/897392/ecr-cross-account-pull-permissions where the solution appears to be to create cross-account roles. Although I could create such a role a2.cross-acc-r, I cannot figure out how I can assume that role for the aws ecr CLI commands. I do not want the EC2 instance to assume that role, as it resides in a different account (not even sure if that is possible at all).
You can do that by following the steps below:
1. In account A, create a role (e.g. RoleForB) that trusts account B, and attach to it an IAM policy allowing it to perform some read operations in account A, e.g. ReadOnlyAccess.
2. In account B, create a role (e.g. AssumeRoleInA) and attach a policy allowing it to assume the role created in account A.
3. In account B, associate the IAM role (AssumeRoleInA) created in step 2 with your EC2 instance's instance profile.
4. In account B, log in to this EC2 instance and assume the role in account A using the command aws sts assume-role --role-arn "arn:aws:iam::Account_A_ID:role/RoleForB" --role-session-name "EC2FromB".
5. In the account B EC2 terminal, when the command in step 4 finishes, read the access key ID, secret access key, and session token from wherever you've routed the output (in our case stdout), either manually or with a script, and assign these values to the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
So let's check the configurations mentioned above step by step, but in some more detail:
As presented before, account A builds the trust to account B by creating the role named RoleForB and attaching the ReadOnlyAccess permission to it. RoleForB's trust policy looks like this:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::Account_B_ID:root"},
        "Action": "sts:AssumeRole"
    }
}
In account B, create a role named AssumeRoleInA, then attach the corresponding policy allowing it to assume the role named RoleForB in account A:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                "arn:aws:iam::Account_A_ID:role/RoleForB"
            ]
        }
    ]
}
In account B, create a new EC2 instance (if it does not exist yet) and associate its instance profile with the IAM role named AssumeRoleInA, whose trust policy allows EC2 to assume it:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }
}
In account B, log in to this EC2 instance and assume the role in account A using the command:
aws sts assume-role --role-arn "arn:aws:iam::Account_A_ID:role/RoleForB" --role-session-name "EC2FromB"
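The command returns a JSON document containing the temporary credentials described in step 5; a trimmed response looks roughly like this (a sketch with placeholder values):

{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2021-01-01T00:00:00Z"
    }
}

You can then export the three values manually:

export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...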
You need to set up a trust relationship between your accounts a1 and a2.
From your a2 console, go to the IAM service and create a new role:
1) Trusted Entity: Another AWS Account (input account a1's ID)
2) Policy: AmazonEC2ContainerRegistryPowerUser (or others that meet your requirements)
From your a2 console, go to the ECR service; you need to edit your repository permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "new statement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<id of a1>:root"
            },
            "Action": "*"
        },
        {
            "Sid": "new statement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<id of a2>:role/a2.r"
            },
            "Action": "*"
        }
    ]
}

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to add permission to the role or my user properly, to be able to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS configuration set in your bash_profile or bashrc, the AWS CLI will default to using it instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
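For reference, a quick way to see whether credentials are leaking in from the environment is (a sketch):

env | grep AWS_

Any AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, or AWS_PROFILE set there takes precedence over the credentials file and the instance role.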
Turns out I forgot I had to do MFA to get an access token to be able to operate in S3. Thank you everyone for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role), or from the CLI as sketched below.
Any application running on the EC2 instance (including the AWS CLI) will then automatically receive credentials. Do not run aws configure.
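Attaching the role from the CLI rather than the console looks roughly like this (a sketch; the instance ID and profile name are placeholders, and an instance profile wrapping the role must already exist):

aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyS3AccessProfile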
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use the credentials from the local credentials file.
Create an IAM user with this permission policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save the Access key ID and Secret access key, then install and configure the CLI:
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only from the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're granting full privileges to perform actions on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this, you should allow the application to access the bucket itself (first statement) and to edit all objects inside the bucket (second statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the security permissions fine-grained.
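With both statements in place, the two typical operations map onto the two statements like this (a usage sketch; the bucket name is a placeholder):

aws s3 ls s3://<name-of-bucket>            # covered by s3:ListBucket on the bucket ARN
aws s3 cp myfile s3://<name-of-bucket>/    # covered by s3:PutObject on the object ARN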
I ran into this yesterday when running a script I had run successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation searching I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
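An alternative to appending --profile to every command is exporting it once per shell session (a sketch; my.profile.name is the profile from above):

export AWS_PROFILE=my.profile.name
aws sts get-caller-identity   # should now resolve to the expected identity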
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.

Access s3 bucket from different aws account

I am trying to restore a database as part of our testing. The backups exist in the prod account's S3. My database is running as an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 from the dev account?
Steps:
- I created a role in the prod account with a trust relationship with the dev account
- I added a policy to the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::prod"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::prod/*"
        }
    ]
}
In the dev account I created a role with this assume policy:
> { "Version": "2012-10-17", "Statement": [
> {
> "Effect": "Allow",
> "Action": "sts:AssumeRole",
> "Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
> } ] }
But I am unable to access the S3 bucket; can someone point out where I am wrong?
Also, I added the above policy to an existing role, so does that mean it is not working because of my instance profile (the error is inconsistent)?
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role, not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this. Method 1 is your approach; however, I would suggest Method 2:
Method 1: Use roles. This is what you described above, and it works; however, you cannot sync bucket to bucket if they're in different accounts, as different access keys need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local filesystem, then from the local filesystem to the dev bucket.
How to do this:
Using roles, create a role in the production account that has access to the bucket. The trust relationship of this role must trust the role in the dev account that's assigned to the EC2 instance. Attach the policy granting access to the prod bucket to that role. Once that's all configured, the EC2 instance role in dev must be updated to allow sts:AssumeRole on the role you've defined in production. On the EC2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This gives you back three values: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. On your EC2 instance, run set -a; AWS_ACCESS_KEY_ID=${access_key_id}; AWS_SECRET_ACCESS_KEY=${secret_access_key}; AWS_SESSION_TOKEN=${session_token} (see the scripted sketch after this paragraph). Once those variables have been exported, run aws sts get-caller-identity and it should show that you're on the role you provisioned in production. You should now be able to sync the files to the local system; once that's done, unset the AWS keys we set as environment variables, then copy the files from the EC2 instance to the bucket in dev. Notice how there are two steps here to copy them? That can get quite annoying - look at Method 2 for how to avoid this.
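A scripted version of that export-sync-unset sequence might look like this (a sketch; the role ARN, bucket names, and local path are placeholders):

set -a
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(aws sts assume-role \
    --role-arn arn:aws:iam::PROD_ACC_ID:role/prod-role \
    --role-session-name restore-session \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)"
set +a
aws sts get-caller-identity                 # should show the prod role
aws s3 sync s3://prodbucket /tmp/restore    # step 1: prod bucket -> local fs
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws s3 sync /tmp/restore s3://devbucket     # step 2: local fs -> dev bucket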
Method 2: Update the prod bucket policy to trust the dev account - this means you can access the prod bucket from dev and do a bucket-to-bucket sync/cp.
I would highly recommend this approach, as it means you can copy directly between buckets without having to sync to the local filesystem.
To do this, update the bucket policy on the bucket in production to have a principal block that trusts the AWS account ID of dev. For example, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad, and granting full access to the account probably isn't advisable, as anyone in the account with the right S3 permissions can now access this bucket, but for simplicity I'm going to leave it here:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DEV_ACC_ID:root"
            },
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::PROD_BUCKET_NAME",
                "arn:aws:s3:::PROD_BUCKET_NAME/*"
            ]
        }
    ]
}
Once you've done this, in the dev account attach the policy from your main post to the dev EC2 instance role (the one that grants S3 access). Now when you connect to the dev instance, you do not have to export any environment variables; you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control, which should copy all the files from prod to dev and, on top of that, update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
You need to assume the role in the production account from the dev account: call sts:AssumeRole and then use the credentials it returns to access the bucket.
Alternatively, you can add a bucket policy that allows the dev account to read from the prod bucket. You wouldn't need the cross-account role in the prod account in this case.

Why AWS Bucket Policy NotPrincipal with specific user doesn't work with aws client when no profile is specified?

I have this AWS S3 Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "OnlyS3AdminCanPerformOperationPolicy",
"Effect": "Deny",
"NotPrincipal": {
"AWS": "arn:aws:iam::<account-id>:user/s3-admin"
},
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::my-bucket-name",
"arn:aws:s3:::my-bucket-name/*"
]
}
]
}
Side note: the IAM user s3-admin has the AdministratorAccess policy attached.
At first I thought the bucket policy didn't work. That was probably because of the way I tested the operation.
aws s3 rm s3://my-bucket-name/file.csv
caused:
delete failed: s3://test-cb-delete/buckets.csv An error occurred (AccessDenied)
but if I used --profile default, as in
aws s3 --profile default rm s3://my-bucket-name/file.csv
it worked.
I verified that I have only one set of credentials configured for the aws client. Also, I am able to list the contents of the bucket even when I don't use the --profile default argument.
Why is the aws client behaving that way?
Take a look at the credential provider precedence chain and use it to determine what is different about the two sets of credentials you're authenticating with.
STS has a handy API that tells you who you are. It's similar to the UNIX-like command whoami, except for AWS principals. To see which credential is which, do this:
aws sts get-caller-identity
aws sts --profile default get-caller-identity
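Both commands return a small JSON document; comparing the Arn fields shows which principal each invocation resolves to (a sketch of the response shape, with placeholder values):

{
    "UserId": "AIDA...",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/s3-admin"
}

If the two ARNs differ, the call without --profile is picking up credentials from earlier in the provider chain (for example, environment variables), which would explain the different behavior.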