I have two accounts, a1 and a2.
I have an EC2 instance in a1, a1.ec2. It assumes some role in that account, a1.r. This role has full access to all ECR actions.
Now, I have an image registry (ECR) in a2 and would like to be able to access it from a1.ec2.
So I SSH into that instance, and to test the access I run
aws ecr describe-repositories --region <my-region> --registry-id <id of a2>
But I get the error
An error occurred (AccessDeniedException) when calling the DescribeRepositories operation: User: arn:aws:sts::<id of a1>:assumed-role/a1.r/i-075fad654b998275c is not authorized to perform: ecr:DescribeRepositories on resource: arn:aws:ecr:*:*:repository/*
However, this permission is indeed granted to the role a1.r; I verified this by successfully accessing an ECR repository in a1.
Also, the ECR repository I'd like to access has the following permission policy statements, so I made sure the trouble is not caused by the repository in a2:
{
  "Sid": "new statement",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<id of a1>:root"
  },
  "Action": "*"
},
{
  "Sid": "new statement",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<id of a1>:role/a1.r"
  },
  "Action": "*"
}
I had a look at https://serverfault.com/questions/897392/ecr-cross-account-pull-permissions where the solution appears to be to create cross-account roles. Although I could create such a role a2.cross-acc-r, I cannot figure out how I can assume that role for the aws ecr CLI commands. I do not want the EC2 instance to assume that role, as it resides in a different account (not even sure if that is possible at all).
Am I lacking something basic regarding how AWS IAM works?
If you want to pull and push images from one account's EC2 instance into another account's ECR, and do not need the full aws ecr CLI functionality, you can do so through docker.
This is useful, for example, if you want your Jenkins to push built images into ECR registries residing in different AWS accounts, based on the targeted environment (production, staging).
Doing so via docker is documented at https://aws.amazon.com/premiumsupport/knowledge-center/secondary-account-access-ecr/
Put simply, in the ECR repository, you grant the other account the needed permissions.
Then you get a temporary authentication token to authorize docker towards ECR via:
$(aws ecr get-login --registry-ids <account ID> --region <your region> --no-include-email)
After this, you can use docker pull and docker push to access it.
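Note that aws ecr get-login was removed in AWS CLI v2. If you are on v2, the equivalent flow is to fetch a temporary password and pipe it to docker login; a sketch, assuming the default registry URL format:

# Fetch a temporary password and pipe it straight into docker login
aws ecr get-login-password --region <your region> | docker login --username AWS --password-stdin <account ID>.dkr.ecr.<your region>.amazonaws.com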
I had a look at https://serverfault.com/questions/897392/ecr-cross-account-pull-permissions where the solution appears to be to create cross-account roles. Although I could create such a role a2.cross-acc-r, I cannot figure out how I can assume that role for the aws ecr CLI commands. I do not want the EC2 instance to assume that role, as it resides in a different account (not even sure if that is possible at all).
You can do that by following the steps below:
1. In account A, create a role (e.g. RoleForB) that trusts account B, and attach an IAM policy to it that allows the needed read operations in account A, e.g. ReadOnlyAccess.
2. In account B, create a role (e.g. AssumeRoleInA) and attach a policy that allows it to assume the role created in account A.
3. In account B, associate the IAM role (AssumeRoleInA) created in step 2 with your EC2 instance's instance profile.
4. In account B, log in to this EC2 instance and assume the role in account A with the command aws sts assume-role --role-arn "arn:aws:iam::Account_A_ID:role/RoleForB" --role-session-name "EC2FromB".
5. In the EC2 terminal in account B, once the command in step 4 has finished, you can read the access key ID, secret access key, and session token from its output (stdout), either manually or with a script, and assign these values to the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
So let's check the configurations mentioned above step by step, in some more detail:
As presented before, in account A you build the trust to account B by creating the role named RoleForB with the trust policy below and attaching the ReadOnlyAccess permission to it.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::Account_B_ID:root"},
    "Action": "sts:AssumeRole"
  }
}
In account B, create a role named AssumeRoleInA, then attach the corresponding policy to allow it to assume the role named RoleForB in account A.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::Account_A_ID:role/RoleForB"
      ]
    }
  ]
}
In account B, create a new EC2 instance (if it does not exist yet) and associate its instance profile with the IAM role named AssumeRoleInA, whose trust policy must allow the EC2 service to assume it:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
In account B login into this EC2 instance to assume the role in Account A using the command:
aws sts assume-role --role-arn "arn:aws:iam::Account_A_ID:role/RoleForB" --role-session-name "EC2FromB"
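To avoid copying the three credential values by hand, here is a minimal sketch of the scripted variant; it assumes jq is installed on the instance:

creds=$(aws sts assume-role \
    --role-arn "arn:aws:iam::Account_A_ID:role/RoleForB" \
    --role-session-name "EC2FromB")

# Pull the temporary credentials out of the JSON response
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')

# Subsequent CLI calls now run as RoleForB in account A
aws sts get-caller-identity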
You need to set up a trust relationship between your accounts a1 and a2.
From your a2 Console, go to IAM service, create a new role:
1) Trusted Entity: Another AWS Account (input account a1's ID)
2) Policy: AmazonEC2ContainerRegistryPowerUser (or others that meet your requirement)
From your a2 Console, go to ECR service, you need to edit your permission:
{
  "Sid": "new statement",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<id of a1>:root"
  },
  "Action": "*"
},
{
  "Sid": "new statement",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<id of a2>:role/a2.r"
  },
  "Action": "*"
}
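If you prefer to apply the repository permissions from the CLI instead of the console, there is aws ecr set-repository-policy; a sketch, where the repository name and the policy file name are placeholders:

# Apply the permission policy to the repository in account a2
aws ecr set-repository-policy \
    --repository-name <repository name> \
    --policy-text file://ecr-policy.json \
    --region <my-region>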
Related
I am trying to write a batch file on Windows to do the steps below via CLI commands (an actual example), but I don't know how to create a role of the "Another AWS account" type from the CLI. Do you mind helping me?
In the navigation pane on the left, choose Roles and then choose
Create role.
Choose the Another AWS account role type.
For Account ID, type the Development account ID.
This tutorial uses the example account ID 111111111111 for the
Development account. You should use a valid account ID. If you use an
invalid account ID, such as 111111111111, IAM does not let you create
the new role.
For now you do not need to require an external ID, or require users to
have multi-factor authentication (MFA) in order to assume the role. So
leave these options unselected. For more information, see Using
Multi-Factor Authentication (MFA) in AWS
Choose Next: Permissions to set the permissions that will be
associated with the role.
My code for creating a role:
call aws iam create-role --role-name xxx-S3-Role --assume-role-policy-document file://trustpolicy.json
My trustpolicy.json:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::222222075333:role/xxx-S3-Role"
  }]
}
I am receiving below error:
An error occurred (MalformedPolicyDocument) when calling the CreateRole operation: Has prohibited field Resource
I solved my problem by changing two things.
1- Fixing the path of the policy file:
aws iam create-role --role-name xxx-S3-Role --assume-role-policy-document file://c:\foldername\trustpolicy.json
2- Changing the format of the policy by reverse-engineering a policy that I had created from the console; the correct format is below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222075333:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
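For completeness, the resulting batch file boils down to two calls; a sketch, where the AmazonS3ReadOnlyAccess managed policy is only an assumption about the permissions you need:

rem Create the role with the corrected trust policy
call aws iam create-role --role-name xxx-S3-Role --assume-role-policy-document file://c:\foldername\trustpolicy.json

rem Attach a permissions policy to the new role (AmazonS3ReadOnlyAccess is just an example)
call aws iam attach-role-policy --role-name xxx-S3-Role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess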
I am trying to restore a database as part of our testing. The backups exist in an S3 bucket in the prod account. My database runs on an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 bucket from the dev account?
Steps:
- I created a role on the prod account with a trust relationship with the dev account.
- I added a policy to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::prod"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::prod/*"
    }
  ]
}
On the dev account, I created a role with the following assume-role policy:
> { "Version": "2012-10-17", "Statement": [
> {
> "Effect": "Allow",
> "Action": "sts:AssumeRole",
> "Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
> } ] }
But I am unable to access the S3 bucket; can someone point out where I am wrong?
Also, I added the above policy to an existing role, so does that mean it's not working because of my instance profile? (The error is inconsistent.)
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role and not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this, Method 1 is your approach however I would suggest Method 2:
Method 1: Use roles. This is what you described above, and it works; however, you cannot sync bucket to bucket when they're in different accounts, as different access keys will need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local filesystem, then from the local filesystem to the dev bucket.
How to do this:
Using roles, create a role on the production account that has access to the bucket. The trust relationship of this role must trust the role in the dev account that's assigned to the EC2 instance. Attach the policy granting access to the prod bucket to that role. Once that's all configured, the EC2 instance role in dev must be updated to allow sts:AssumeRole on the role you've defined in production.
On the EC2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This will give you back three values: AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_SESSION_TOKEN. On your EC2 instance, run set -a; AWS_SECRET_ACCESS_KEY=${secret_access_key}; AWS_ACCESS_KEY_ID=${access_key_id}; AWS_SESSION_TOKEN=${session_token}. Once those variables have been exported, you can run aws sts get-caller-identity, which should confirm that you're on the role you've provisioned in production. You should now be able to sync the files to the local system, and once that's done, unset the AWS keys we set as environment variables, then copy the files from the EC2 instance to the bucket in dev. Notice how there are two steps here to copy them? That can get quite annoying - see the sketch below, then look into Method 2 on how to avoid the double copy.
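A minimal end-to-end sketch of Method 1 (the role ARN, bucket names, and the /tmp/restore staging directory are assumptions, and jq must be installed):

# Assume the prod role and export its temporary credentials
creds=$(aws sts assume-role \
    --role-arn "arn:aws:iam::PROD_ACC_ID:role/prod-role" \
    --role-session-name "dev-restore")
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')

# Pull from prod to the local filesystem
aws s3 sync s3://prodbucket /tmp/restore

# Drop the prod credentials, falling back to the instance role
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Push from the local filesystem to the dev bucket
aws s3 sync /tmp/restore s3://devbucket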
Method 2: Update the prod bucket policy to trust the dev account - this will mean you can access the prod bucket from dev and do a bucket to bucket sync/cp.
I would highly recommend you take this approach as it will mean you can copy directly between buckets without having to sync to the local fs.
To do this, you will need to update the bucket policy on the bucket in production to have a principal block that trusts the AWS account ID of dev. For example, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad, and granting full access to the account probably isn't advisable either, as anyone on the account with the right S3 permissions can now access this bucket, but for simplicity I'm going to leave it like this here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DEV_ACC_ID:root"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::PROD_BUCKET_NAME",
        "arn:aws:s3:::PROD_BUCKET_NAME/*"
      ]
    }
  ]
}
Once you've done this, on the dev account, attach the policy in your main post to the dev ec2 instance role (the one that grants s3 access). Now when you connect to the dev instance, you do not have to export any environment variables, you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control and that should copy all the files from prod to dev, and on top of that should update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
You need to assume the role in the production account from the dev account. Call sts:AssumeRole and then use the credentials returned to access the bucket.
You can alternatively add a bucket policy that allows the dev account to read from the prod account. You wouldn't need the cross account role in the prod account in this case.
I am trying to create a CloudFormation Stack using the AWS CLI by running the following command:
aws cloudformation create-stack --debug --stack-name ${stackName} --template-url ${s3TemplatePath} --parameters '${parameters}' --region eu-west-1
The template resides in an S3 bucket in another account; let's call this account 456. The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::cloudformation.template.eberry.digital/*"
    }
  ]
}
("Action: * " is for debugging).
Now for a twist. I am logged into account 456 and I run
aws sts assume-role --role-arn arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role --role-session-name jenkins
and then set the correct environment variables to access 123. The policy attached to the role that I assume allows Administrator access while I debug - which still doesn't work.
aws s3api list-buckets
then displays the buckets in account 123.
To summarize:
Specifying a template in an S3 bucket owned by account 456, in CloudFormation in the console, while logged into account 123, works.
Specifying a template in an S3 bucket owned by account 123, using the CLI, works.
Specifying a template in an S3 bucket owned by account 456, using the CLI, doesn't work.
The error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
I don't understand what I am doing wrong and would be thankful for any ideas. In the meantime, I will upload the template to all accounts that will use it.
Amazon S3 provides cross-account access through the use of bucket policies. These are IAM resource policies (which are applied to resources—in this case an S3 bucket—rather than IAM principals: users, groups, or roles). You can read more about how Amazon S3 authorises access in the Amazon S3 Developer Guide.
I was a little confused about which account is which, so instead I'll just say that you need this bucket policy when you want to deploy a template in a bucket owned by one AWS account as a stack in a different AWS account. For example, the template is in a bucket owned by AWS account 111111111111 and you want to use that template to deploy a stack in AWS account 222222222222. In this case, you'll need to be logged in to account 222222222222 and specify that account as the principal in the bucket policy.
The following is an example bucket policy that provides access to another AWS account; I use this on my own CloudFormation templates bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS_ACCOUNT_ID_WITHOUT_HYPHENS:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME",
        "arn:aws:s3:::S3_BUCKET_NAME/*"
      ]
    }
  ]
}
You'll need to use the 12-digit account identifier for the AWS account you want to provide access to, and the name of the S3 bucket (you can probably use "Resource": "*", but I haven't tested this).
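If you'd rather apply the bucket policy from the CLI than the console, a sketch (assuming you saved the JSON above as bucket-policy.json):

# Run this as the account that owns the templates bucket
aws s3api put-bucket-policy \
    --bucket S3_BUCKET_NAME \
    --policy file://bucket-policy.json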
There are some CSV data files I need to get from S3 buckets belonging to a series of AWS accounts owned by a third party. The owner of the other accounts has created a role in each account that grants me access to those files; I can use the AWS web console (logged in to my own account) to switch to each role and get the files. One at a time, I switch to the role for each account, get the files for that account, then move on to the next account, and so on.
I'd like to automate this process.
It looks like AWS Glue can do this, but I'm having trouble with the permissions.
What I need is to set up permissions so that an AWS Glue crawler can switch to the right role (belonging to each of the other AWS accounts) and get the data files from those accounts' S3 buckets.
Is this possible and if so how can I set it up? (e.g. what IAM roles/permissions are needed?) I'd prefer to limit changes to my own account if possible rather than having to ask the other account owner to make changes on their side.
If it's not possible with Glue, is there some other easy way to do it with a different AWS service?
Thanks!
(I've had a series of tries but I keep getting it wrong - my attempts are so far from being right that there's no point in me posting the details here).
Yes, you can automate your scenario with Glue by following these steps:
Create an IAM role in your AWS account. This role's name must start with AWSGlueServiceRole, but you can append whatever you want. Add a trust relationship for Glue, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "glue.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Attach two IAM policies to your IAM role: the AWS managed policy named AWSGlueServiceRole, and a custom policy that provides the access needed to all the target cross-account S3 buckets, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket1",
        "arn:aws:s3:::examplebucket2",
        "arn:aws:s3:::examplebucket3"
      ]
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::examplebucket1/*",
        "arn:aws:s3:::examplebucket2/*",
        "arn:aws:s3:::examplebucket3/*"
      ]
    }
  ]
}
Add an S3 bucket policy to each target bucket that allows your IAM role the same S3 access that you granted it in your account, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::examplebucket1"
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your_account_number:role/AWSGlueServiceRoleDefault"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket1/*"
    }
  ]
}
Finally, create Glue crawlers and jobs in your account (in the same regions as the target cross account S3 buckets) that will ETL the data from the cross account S3 buckets to your account.
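As a sketch of that last step, a crawler can also be created from the CLI; the crawler name, role, database, and bucket path below are all assumptions:

# Create a crawler in your account that reads one of the cross-account buckets
aws glue create-crawler \
    --name cross-account-csv-crawler \
    --role AWSGlueServiceRoleDefault \
    --database-name csv_data \
    --targets '{"S3Targets": [{"Path": "s3://examplebucket1/"}]}'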
Using the AWS CLI, you can create named profiles for each of the roles you want to switch to, then refer to them from the CLI. You can then chain these calls, referencing the named profile for each role, and include them in a script to automate the process; see the example configuration after the quote below.
From Switching to an IAM Role (AWS Command Line Interface)
A role specifies a set of permissions that you can use to access AWS
resources that you need. In that sense, it is similar to a user in AWS
Identity and Access Management (IAM). When you sign in as a user, you
get a specific set of permissions. However, you don't sign in to a
role, but once signed in as a user you can switch to a role. This
temporarily sets aside your original user permissions and instead
gives you the permissions assigned to the role. The role can be in
your own account or any other AWS account. For more information about
roles, their benefits, and how to create and configure them, see IAM
Roles, and Creating IAM Roles.
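A sketch of what the named profiles can look like, assuming a default profile whose credentials are trusted by each third-party role (the profile names and ARNs are placeholders). In ~/.aws/config:

[profile thirdparty-acct1]
role_arn = arn:aws:iam::111111111111:role/RoleTheyCreated
source_profile = default

[profile thirdparty-acct2]
role_arn = arn:aws:iam::222222222222:role/RoleTheyCreated
source_profile = default

The CLI then assumes the right role automatically whenever you name the profile, which makes the whole run easy to script:

aws s3 cp s3://their-bucket-1/data.csv . --profile thirdparty-acct1
aws s3 cp s3://their-bucket-2/data.csv . --profile thirdparty-acct2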
You can achieve this with AWS Lambda and CloudWatch Rules.
You can create a Lambda function with a role attached to it; let's call this role Role A. Depending on the number of accounts, you can either create one function per account, with a single CloudWatch rule triggering all of the functions, or create one function for all the accounts (be mindful of the limitations of AWS Lambda).
Creating Role A
Create an IAM Role (Role A) with the following policy, allowing it to assume the roles given to you by the other accounts containing the data.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1509358389000",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": [
        "",
        "",
        ...
        ""
      ]
    }
  ]
}
(List here all the IAM role ARNs from the accounts containing the data; if you have one function per account, you can opt to have separate roles instead.)
Also, you will need to make sure that a trust relationship with all the accounts is present in Role A's trust relationship policy document.
Attach Role A to the Lambda functions you will be running. You can use Serverless for development.
Now your Lambda function has Role A attached to it, and Role A has sts:AssumeRole permissions over the roles created in the other accounts.
Assuming that you have created one function per account, in your Lambda's code you will first have to use STS to switch to the role of the other account, obtain temporary credentials, and pass these to the S3 client options before fetching the required data.
If you have created one function for all the accounts, you can keep the role ARNs in an array and iterate over it, as in the sketch below; again, when doing this, be aware of the limits of AWS Lambda.
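The iteration logic looks roughly like the following shell sketch (the ARNs and bucket path are placeholders; inside Lambda you would do the same with your SDK's STS and S3 clients):

# Role ARNs from the accounts containing the data (placeholders)
ROLE_ARNS=(
    "arn:aws:iam::111111111111:role/DataAccessRole"
    "arn:aws:iam::222222222222:role/DataAccessRole"
)

for role_arn in "${ROLE_ARNS[@]}"; do
    # Obtain temporary credentials for this account's role
    read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN < <(
        aws sts assume-role --role-arn "$role_arn" --role-session-name "csv-fetch" \
            --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

    # Fetch the CSV files from this account's bucket (path is a placeholder)
    aws s3 cp "s3://their-bucket/data/" ./data/ --recursive --exclude "*" --include "*.csv"

    unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
done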
Scenario: I have an EC2 instance and an S3 bucket under the same account, and my web app on that EC2 instance wants access to resources in that bucket.
Following the official docs, I created an IAM role with s3access and assigned it to the EC2 instance. To my understanding, my web app should now be able to access the bucket. However, after some trials, it seems I have to add an AllowPublicRead bucket policy like this:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
Otherwise I got access forbidden.
But why should I need this AllowPublicRead bucket policy, since I already granted the s3access IAM role to the EC2 instance?
The IAM role only allows access to the objects from your EC2 instance, but what you want is to access these objects from your web app, which means from the browser. In that case the images/objects are rendered in the user's browser, and if it's a public-facing application, you need to assign the AllowPublicRead permission as well.