I have been working with AWS for a number of years, but I am not very strong with some of the advanced networking concepts.
So, we have multiple AWS accounts. None of them have public internet access, but we use Direct Connect for on-prem to AWS connection.
I have an S3 bucket in Account A.
I created an IAM user in Account A along with an access/secret key pair and granted this IAM user s3:PutObject permission to the S3 bucket.
I wrote a simple Python script to list the objects in this bucket from on-prem, and it works as expected.
I then execute the same Python script on an EC2 instance running in Account B, and I get "botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied".
Do I need to create VPC endpoint for S3 in Account B? Does cross account IAM role come into play here?
Your situation is:
You have an Amazon S3 Bucket-A in Account-A
You have an IAM User (User-A) in Account-A
You have an Amazon EC2 instance running in Account-B
From that EC2 instance, you wish to access Bucket-A
It also appears that you have a means for the EC2 instance to access Amazon S3 endpoints to make API calls
Assuming that the instance is able to reach Amazon S3 (which appears to be true because the error message refers to permissions, which would have come from S3), there are two ways to authenticate for access to Bucket-A:
Option 1: Using the IAM User from Account-A
When making the call from the EC2 instance to Bucket-A, use the IAM credentials created in Account-A. It doesn't matter that the request is coming from an Amazon EC2 instance in Account-B. In fact, Amazon S3 doesn't even know that. An API call can come from anywhere on the Internet (including your home computer or mobile phone). What matters is the set of credentials provided when making the call.
If you are using the AWS Command-Line Interface (CLI) to make the call, then you can save the User-A credentials as a profile by using aws configure --profile user_a (or any name), then entering the credentials from the IAM User in Account-A. Then, access Amazon S3 with aws s3 ls --profile user_a. Using a profile like this allows you to switch between credentials.
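The same idea as a Python (boto3) sketch; the profile and bucket names are the placeholders from above, not real resources:

```python
# Sketch: listing Bucket-A with User-A's credentials, runnable from anywhere,
# including an EC2 instance in Account-B. "user_a" and "bucket-a" are placeholders.
def list_bucket_as_user_a(bucket="bucket-a", profile="user_a"):
    """Return the object keys in the bucket using the saved User-A profile."""
    import boto3  # imported lazily so the sketch can be read without boto3 installed

    s3 = boto3.Session(profile_name=profile).client("s3")
    response = s3.list_objects_v2(Bucket=bucket)
    return [obj["Key"] for obj in response.get("Contents", [])]
```

Only the credentials attached to the session matter, not where the code runs.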
Option 2: Using a Bucket Policy
Amazon S3 also has the ability to specify a Bucket Policy on a bucket, which can grant access to the bucket. So, if the EC2 instance is using credentials from Account-B, you can add a Bucket Policy that grants access from those Account-B credentials.
Let's say that the Amazon EC2 instance was launched with an IAM Role called role-b, then you could use a Bucket Policy like this:
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<Account-B>:role/role-b"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-a/*"
}
]
}
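Applied programmatically, that policy could be built and attached with a sketch like this (the account ID 111122223333 is a placeholder for Account-B's real ID):

```python
import json

# Bucket policy from the example above, built as a Python dict.
# 111122223333 stands in for Account-B's real account ID.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/role-b"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-a/*",
        }
    ],
}

def apply_bucket_policy(bucket="bucket-a"):
    """Attach the policy; run with Account-A credentials that allow s3:PutBucketPolicy."""
    import boto3  # lazy import so the policy document can be inspected offline

    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(bucket_policy))
```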
Disclaimer: All of the above assumes that you don't have any restrictive policies on your VPC endpoints / Amazon S3 access points, or on however the VPCs connect to the Amazon S3 endpoints.
Related
I have two AWS accounts (e.g. Account A & Account B). I have created a user and attached a customer-managed policy which has the following permission in Account A.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "cloudfront:CreateInvalidation",
"Resource": "arn:aws:cloudfront::{ACCOUNT-B_ACCOUNT-ID-WITHOUT-HYPHENS}:distribution/{ACCOUNT_B-CF-DISTRIBUTION-ID}"
}
]
}
From the AWS CLI (which is configured with Account A's user) I'm trying to create an invalidation for the above-mentioned CF distribution ID in Account B. I'm getting access denied.
Do we need any other permission to create invalidation for CF distribution in different AWS account?
I have been able to successfully perform a cross-account CloudFront invalidation from my CodePipeline account (TOOLS) to my application (APP) accounts. I achieve this with a Lambda Action that is executed as follows:
CodePipeline starts a Deploy stage I call Invalidate
The Stage runs a Lambda function with the following UserParameters:
APP account roleArn to assume when creating the Invalidation.
The ID of the CloudFront distribution in the APP account.
The paths to be invalidated.
The Lambda function is configured to run with a role in the TOOLS account that can sts:AssumeRole of a role from the APP account.
The APP account role permits being assumed by the TOOLS account and permits the creation of Invalidations ("cloudfront:GetDistribution","cloudfront:CreateInvalidation").
The Lambda function executes and assumes the APP account role. Using the credentials provided by the APP account role, the invalidation is started.
When the invalidation has started, the Lambda function puts a successful Job result.
It's difficult and unfortunate that cross-account invalidations are not directly supported. But it does work!
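A minimal sketch of such a Lambda function, assuming UserParameters is a JSON object with roleArn, distributionId and paths keys (those key names are my assumption, not something CodePipeline prescribes):

```python
import json
import time

def parse_user_params(event):
    """Pull roleArn, distributionId and paths out of the CodePipeline job event."""
    config = event["CodePipeline.job"]["data"]["actionConfiguration"]["configuration"]
    params = json.loads(config["UserParameters"])
    return params["roleArn"], params["distributionId"], params["paths"]

def handler(event, context):
    import boto3  # lazy import so parse_user_params can be tested without boto3

    role_arn, distribution_id, paths = parse_user_params(event)

    # Assume the APP-account role from the TOOLS-account Lambda role.
    creds = boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName="invalidate")["Credentials"]

    # Create the invalidation using the APP-account credentials.
    cloudfront = boto3.client(
        "cloudfront",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"])
    cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            "CallerReference": str(time.time()),
        })

    # Report success back to CodePipeline.
    boto3.client("codepipeline").put_job_success_result(
        jobId=event["CodePipeline.job"]["id"])
```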
Cross-account access is only available for a few AWS services, such as Amazon Simple Storage Service (S3) buckets, S3 Glacier vaults, Amazon Simple Notification Service (SNS) topics, and Amazon Simple Queue Service (SQS) queues.
Refer: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html (Role for cross-account access section)
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as is described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal as I would like to provide ready-to-use AWS Parallelcluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a two-step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "111",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
}
]
}
You can find more examples of bucket policies in the AWS documentation here.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope down the permissions to actions such as s3:GetObject etc. There is a website to help you generate these policies here. s3:* will contain delete permissions which if used incorrectly could result in nasty surprises.
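As a sketch, here is a scoped-down, read-only version of the policy above (the account ID is a placeholder):

```python
import json

# Read-only variant of the bucket policy above: object reads plus bucket listing,
# and nothing destructive. 111122223333 is a placeholder account ID.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowObjectRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*",
        },
        {
            "Sid": "AllowBucketList",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE",
        },
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects under it.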
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands on your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises the call on their side.
The policy that needs to be attached to the instance role should look something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
}
]
}
Keep in mind that although this policy grants s3:*, it doesn't mean they can do anything on your bucket unless you also have s3:* in your bucket policy. Actions of this policy will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket. As long as you define a bucket policy that trusts their account, the rest is on them. They can create an EC2 instance role and grant it permissions to your bucket, or an IAM user and grant it access to your bucket. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an EC2 instance, it's bad practice to use access keys; it should use an EC2 instance role instead.
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, then you can add User Data script that will execute when the instance starts. This can download the files from S3 to the instance.
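For example, the User Data step could run a short Python script like this sketch; the bucket, prefix, and destination are placeholders, and credentials come from the instance role, so no keys are stored on the instance:

```python
# Sketch of a boot-time download using the instance role (no access keys needed).
# Bucket name, prefix, and destination directory are placeholders.
import os

def download_prefix(bucket="client-data-bucket", prefix="data/", dest="/opt/data"):
    """Download every object under the prefix to the local destination."""
    import boto3  # lazy import; credentials are picked up from the instance role

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            target = os.path.join(dest, obj["Key"])
            os.makedirs(os.path.dirname(target), exist_ok=True)
            s3.download_file(bucket, obj["Key"], target)
```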
For security reasons, we have a dev, QA, and a prod AWS account. We are using IAM roles for instances. This is working correctly per account basis.
Now the requirement here is that we want to access multiple AWS services (such as S3, SQS, SNS, EC2, etc.) on one of the EC2 instances of the QA account from the Dev AWS account.
We have created an STS policy and role allowing another AWS account as a trusted entity, but somehow we are not able to attach this role to the EC2 instance.
Example STS policy:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::546161XXXXXX:role/AdminAccessToAnotherAccount"
}
}
AdminAccessToAnotherAccount: this is a policy in the other account with admin access.
This role is not listed while attaching to the ec2 instance.
It appears that your situation is:
You have an EC2 instance in Account-1
An IAM Role ("Role-1") is assigned to the EC2 instance
You want to access resources in Account-2 from the EC2 instance
The following steps can enable this:
Create an IAM Role in Account-2 ("Role-2") with the permissions you want the instance to receive
Add a Trust policy to Role-2, trusting Role-1
Confirm that Role-1 has permission to call AssumeRole on Role-2
From the EC2 instance using Role-1, call AssumeRole on Role-2
It will return a set of credentials (Access Key, Secret Key, Token)
Use those credentials to access services in Account-2 (via aws configure --profile foo or an API call).
If you use aws configure, you will also need to manually edit the ~/.aws/credentials file to add the aws_session_token to the profile, since it is not requested by the CLI command.
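A Python sketch of steps 4-6, plus a helper that renders the credentials-file section (including the aws_session_token line the CLI prompt omits); the role ARN and profile name are placeholders:

```python
# Sketch: assume Role-2 from the instance running Role-1, then render an
# ~/.aws/credentials profile section. Role ARN and profile name are placeholders.
def assume_role_2(role_arn="arn:aws:iam::222222222222:role/Role-2"):
    """Return temporary credentials (Access Key, Secret Key, Token) for Role-2."""
    import boto3  # lazy import; Role-1's instance credentials are used automatically

    return boto3.client("sts").assume_role(
        RoleArn=role_arn, RoleSessionName="cross-account")["Credentials"]

def render_profile(name, creds):
    """Render a ~/.aws/credentials section, including the session token."""
    return (
        f"[{name}]\n"
        f"aws_access_key_id = {creds['AccessKeyId']}\n"
        f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
        f"aws_session_token = {creds['SessionToken']}\n"
    )
```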
Examples:
Using Temporary Security Credentials to Request Access to AWS Resources
Cross-Account Access Control with Amazon STS for DynamoDB
I have an IAM role associated with my EC2 instances with the following policy regarding Redshift:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"redshift:*"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
However, from my EC2 instance with either the AWS SDK or CLI, I am unable to create a cluster.
Both
InstanceProfileCredentialsProvider creds = new InstanceProfileCredentialsProvider(false);
AmazonRedshift redshift = AmazonRedshiftClientBuilder.standard().withCredentials(creds).build();
CreateClusterRequest request = new CreateClusterRequest();
request.setAllowVersionUpgrade(true);
request.setClusterType("multi-node");
request.setClusterIdentifier("identifier");
request.setDBName("dbname");
request.setIamRoles(Collections.singleton("iam_role_arn"));
request.setPubliclyAccessible(true);
request.setNodeType("dc1.8xlarge");
request.setNumberOfNodes(2);
request.setPort(8192);
request.setMasterUsername("username");
request.setMasterUserPassword("Password1");
Cluster cluster = redshift.createCluster(request);
and
aws redshift create-cluster --cluster-identifier identifier --master-username username --master-user-password Password1 --node-type dc1.8xlarge --region us-west-2 --number-of-nodes 2
result in:
An error occurred (UnauthorizedOperation) when calling the CreateCluster operation: Access Denied. Please ensure that your IAM Permissions allow this operation.
Using the IAM policy simulation tool I was able to confirm that my instance role has the permissions to create a Redshift cluster.
Any help understanding this would be appreciated.
If you are able to call other Redshift operations and it is only failing on createCluster(), then the error is probably being caused by the IAM Role that is being passed to Redshift.
You need to Grant a User Permissions to Pass a Role to an AWS Service.
The logic behind it is this:
Let's say you do not have access to S3 bucket X
Let's say there is a role in your account that does have access to bucket X
You could launch a Redshift cluster with the role and then use it to access bucket X
To prevent people escalating permissions like this, they need the iam:PassRole permission, which states whether they are allowed to pass a role to Redshift and even which roles they are allowed to pass.
The same applies to passing roles to an Amazon EC2 instance.
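A sketch of the extra statement the calling identity needs; the role ARN is a placeholder, and the iam:PassedToService condition restricts the grant so the role can only be passed to Redshift:

```python
import json

# PassRole statement for the identity that calls CreateCluster.
# The role ARN is a placeholder; substitute the role you pass to Redshift.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/my-redshift-role",
            "Condition": {
                "StringEquals": {"iam:PassedToService": "redshift.amazonaws.com"}
            },
        }
    ],
}

print(json.dumps(pass_role_policy, indent=2))
```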
Using cloudformation I have launched an EC2 instance with a role that has an S3 policy which looks like the following
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
In S3 the bucket policy is like so
{
"Version": "2012-10-17",
"Id": "MyPolicy",
"Statement": [
{
"Sid": "ReadAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456678:role/Production-WebRole-1G48DN4VC8840"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::web-deploy/*"
}
]
}
When I log in to the instance and attempt to curl any object I upload into the bucket (without ACL modifications), I receive an Unauthorized 403 error.
Is this the correct way to restrict access to a bucket to only instances launched with a specific role?
The EC2 instance role is more than sufficient to put/read to any of your S3 buckets, but you need to use the instance role, which is not done automatically by curl.
You should use, for example, aws s3 cp <local source> s3://<bucket>/<key>, which will automatically use the instance role.
There are three ways to grant access to an object in Amazon S3:
Object ACL: Specific objects can be marked as "Public", so anyone can access them.
Bucket Policy: A policy placed on a bucket to determine what access to Allow/Deny, either publicly or to specific Users.
IAM Policy: A policy placed on a User, Group or Role, granting them access to an AWS resource such as an Amazon S3 bucket.
If any of these policies grant access, the user can access the object(s) in Amazon S3. One exception is if there is a Deny policy, which overrides an Allow policy.
Role on the Amazon EC2 instance
You have attached this policy to the role on the Amazon EC2 instance:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
This will provide credentials to the instance that can be accessed by the AWS Command-Line Interface (CLI) or any application using the AWS SDK. They will have unlimited access to Amazon S3 unless there is also a Deny policy that otherwise restricts access.
If anything, that policy is granting too much permission. It is allowing an application on that instance to do anything it wants to your Amazon S3 storage, including deleting it all! It is better to assign least privilege, only giving permission for what the applications need to do.
Amazon S3 Bucket Policy
You have also created a Bucket Policy, which allows anything that has assumed the Production-WebRole-1G48DN4VC8840 role to retrieve the contents of the web-deploy bucket.
It doesn't matter what specific permissions the role itself has -- this policy means that merely using the role to access the web-deploy bucket will allow it to read all files. Therefore, this policy alone would be sufficient to your requirement of granting bucket access to instances using the Role -- you do not also require the policy within the role itself.
So, why can't you access the content? It is because a plain curl request does not identify your role/user. Amazon S3 receives an unsigned request and treats it as anonymous, thereby not granting access.
Try accessing the data via the CLI or programmatically via an SDK call. For example, this CLI command would download an object:
aws s3 cp s3://web-deploy/foo.txt foo.txt
The CLI will automatically grab credentials related to your role, allowing access to the objects.
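If you do need plain curl to work, one option (a sketch, assuming the instance role has s3:GetObject on the bucket) is to generate a pre-signed URL with the role's credentials and fetch that; the bucket and key below are placeholders:

```python
# Sketch: generate a time-limited URL that curl can fetch without signing anything.
# Bucket and key are placeholders; signing uses the instance role's credentials.
def presigned_get_url(bucket="web-deploy", key="foo.txt", expires=3600):
    """Return an HTTPS URL for the object, valid for `expires` seconds."""
    import boto3  # lazy import; credentials come from the instance role

    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires)
```

curl can then download the returned URL directly, because the signature is embedded in the URL's query string.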