Create CloudFront invalidations in a different AWS account - amazon-web-services

I have two AWS accounts (e.g. Account A and Account B). In Account A, I have created a user and attached a customer-managed policy with the following permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::{ACCOUNT-B_ACCOUNT-ID-WITHOUT-HYPHENS}:distribution/{ACCOUNT_B-CF-DISTRIBUTION-ID}"
        }
    ]
}
From the AWS CLI (which is configured with Account A's user) I'm trying to create an invalidation for the above-mentioned CloudFront distribution in Account B, but I'm getting access denied.
Do we need any other permission to create an invalidation for a CloudFront distribution in a different AWS account?

I have been able to successfully perform a cross-account CloudFront invalidation from my CodePipeline account (TOOLS) to my application (APP) accounts. I achieve this with a Lambda Action that is executed as follows:
CodePipeline starts a Deploy stage I call Invalidate
The Stage runs a Lambda function with the following UserParameters:
APP account roleArn to assume when creating the Invalidation.
The ID of the CloudFront distribution in the APP account.
The paths to be invalidated.
The Lambda function is configured to run with a role in the TOOLS account that can sts:AssumeRole of a role from the APP account.
The APP account role permits being assumed by the TOOLS account and permits the creation of Invalidations ("cloudfront:GetDistribution","cloudfront:CreateInvalidation").
The Lambda function executes and assumes the APP account role. Using the credentials provided by the APP account role, the invalidation is started.
When the invalidation has started, the Lambda function puts a successful Job result.
It's unfortunate that cross-account invalidations are not directly supported, but this approach does work!
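A minimal sketch of such a Lambda function in Python with boto3, assuming the UserParameters are configured on the action as a JSON string with hypothetical keys roleArn, distributionId, and paths:

import json
import time
import boto3

codepipeline = boto3.client('codepipeline')

def handler(event, context):
    job_id = event['CodePipeline.job']['id']
    try:
        # UserParameters is the JSON string configured on the pipeline action;
        # the key names (roleArn, distributionId, paths) are illustrative.
        params = json.loads(
            event['CodePipeline.job']['data']['actionConfiguration']
                 ['configuration']['UserParameters'])

        # Assume the APP account role that permits cloudfront:CreateInvalidation.
        creds = boto3.client('sts').assume_role(
            RoleArn=params['roleArn'],
            RoleSessionName='cross-account-invalidation')['Credentials']

        cloudfront = boto3.client(
            'cloudfront',
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'])

        # Start the invalidation in the APP account's distribution.
        cloudfront.create_invalidation(
            DistributionId=params['distributionId'],
            InvalidationBatch={
                'Paths': {'Quantity': len(params['paths']), 'Items': params['paths']},
                'CallerReference': str(time.time())})

        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(exc)})

The Lambda execution role in the TOOLS account still needs sts:AssumeRole on the APP account role, and the APP account role's trust policy must allow the TOOLS account, as described above.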

Cross-account access is only available for a few AWS services, such as Amazon Simple Storage Service (S3) buckets, S3 Glacier vaults, Amazon Simple Notification Service (SNS) topics, and Amazon Simple Queue Service (SQS) queues.
Refer: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html (the "Role for cross-account access" section).

Related

Allowing permission to Generate a policy based on CloudTrail events where the selected Trail logs events in an S3 bucket in another account

I have an AWS account (Account A) with CloudTrail enabled and logging management events to an S3 'logs' bucket in another, dedicated logs account (Account B, which I also own).
The logging part works fine, but I'm now trying (and failing) to use the 'Generate policy based on CloudTrail events' tool in the IAM console (under the Users > Permissions tab) in Account A.
This is supposed to read the CloudTrail logs for a given user/region/no. of days, identify all of the actions the user performed, then generate a sample IAM security policy to allow only those actions, which is great for setting up least privilege policies etc.
When I first ran the generator, it created a new service role to assume in the same account (Account A): AccessAnalyzerMonitorServiceRole_ABCDEFGHI
When I selected the CloudTrail trail to analyse, it (correctly) identified that the trail logs are stored in an S3 bucket in another account, and displayed this warning message:
Important: Verify cross-account access is configured for the selected trail. The selected trail logs events in an S3 bucket in another account. The role you choose or create must have read access to the bucket in that account to generate a policy. Learn more.
Attempting to run the generator at this stage fails after a short amount of time, and if you hover over the 'Failed' status in the console you see the message:
Incorrect permissions assigned to access CloudTrail S3 bucket. Please fix before trying again.
Makes sense, but actually giving read access to the S3 bucket to the automatically generated AccessAnalyzerMonitorServiceRole_ABCDEFGHI is where I'm now stuck!
I'm relatively new to AWS, so I might have done something dumb or be missing something obvious, but I'm trying to give the automatically generated role in Account A permission to the S3 bucket by adding to the Bucket Policy attached to the S3 logs bucket in our Account B. I've added the extract below to the existing bucket policy (which is just the standard policy for a CloudTrail logs bucket, extended to allow CloudTrail in Account A to write logs to it as well):
"Sid": "IAMPolicyGeneratorRead",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::1234567890:role/service-role/AccessAnalyzerMonitorServiceRole_ABCDEFGHI"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI",
"arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI/*"
]
}
Any suggestions how I can get this working?
It turns out I just needed to follow the steps described here: https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html#access-analyzer-policy-generation-cross-account in the section 'Generate a policy using AWS CloudTrail data in another account', specifically the 'Object Ownership' settings, in addition to changing my Bucket Policy to match the example.

AWS Firehose delivery to Cross Account Elasticsearch in VPC

I have an Elasticsearch domain inside a VPC running in Account A.
I want to deliver logs from Firehose in Account B to the Elasticsearch in Account A.
Is it possible?
When I try to create the delivery stream from the AWS CLI, I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when modified to point at the Elasticsearch domain in Account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch domain in Account A from an EC2 instance in Account B.
Adding my complete Terraform code (I got the same exception with both the AWS CLI and Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided, I'm guessing it's a role named 'devops'). At a minimum you will need firehose:CreateDeliveryStream.
I suggest adding the below permissions to your role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecord",
                "firehose:CreateDeliveryStream",
                "firehose:UpdateDestination"
            ],
            "Resource": "*"
        }
    ]
}
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/

Cross account access for AWS accounts using Direct Connect

I have been working with AWS for a number of years, but I am not very strong with some of the advanced networking concepts.
So, we have multiple AWS accounts. None of them have public internet access, but we use Direct Connect for the on-prem to AWS connection.
I have a S3 bucket in Account A.
I created an IAM user in Account A along with an access key/secret key and granted this IAM user s3:PutObject permission on the S3 bucket.
I wrote a simple Python script to list the objects in this bucket from on-prem, and it works as expected.
I then execute the same Python script on an EC2 instance running in Account B, and I get "botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied".
Do I need to create a VPC endpoint for S3 in Account B? Does a cross-account IAM role come into play here?
Your situation is:
You have an Amazon S3 Bucket-A in Account-A
You have an IAM User (User-A) in Account-A
You have an Amazon EC2 instance running in Account-B
From that EC2 instance, you wish to access Bucket-A
It also appears that you have a means for the EC2 instance to access Amazon S3 endpoints to make API calls.
Assuming that the instance is able to reach Amazon S3 (which appears to be true because the error message refers to permissions, which would have come from S3), there are two ways to authenticate for access to Bucket-A:
Option 1: Using the IAM User from Account-A
When making the call from the EC2 instance to Bucket-A, use the IAM credentials created in Account-A. It doesn't matter that the request is coming from an Amazon EC2 instance in Account-B. In fact, Amazon S3 doesn't even know that. An API call can come from anywhere on the Internet (including your home computer or mobile phone). What matters is the set of credentials provided when making the call.
If you are using the AWS Command-Line Interface (CLI) to make the call, then you can save the User-A credentials as a profile by using aws configure --profile user_a (or any name), then entering the credentials from the IAM User in Account-A. Then, access Amazon S3 with aws s3 ls --profile user_a. Using a profile like this allows you to switch between credentials.
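If you prefer Python over the CLI, the same profile works with boto3. A minimal sketch, assuming the profile is named user_a and the bucket is bucket-a (both placeholders):

import boto3

# Use the Account-A IAM User credentials saved under the named profile,
# regardless of where this code runs.
session = boto3.Session(profile_name='user_a')
s3 = session.client('s3')

# List the objects in Bucket-A.
response = s3.list_objects_v2(Bucket='bucket-a')
for obj in response.get('Contents', []):
    print(obj['Key'])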
Option 2: Using a Bucket Policy
Amazon S3 also has the ability to specify a Bucket Policy on a bucket, which can grant access to the bucket. So, if the EC2 instance is using credentials from Account-B, you can add a Bucket Policy that grants access from those Account-B credentials.
Let's say that the Amazon EC2 instance was launched with an IAM Role called role-b, then you could use a Bucket Policy like this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<Account-B>:role/role-b"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-a/*"
        }
    ]
}
Disclaimer: All of the above assumes that you don't have any weird policies on your VPC Endpoints / Amazon S3 Access Points or however the VPCs are connecting with the Amazon S3 endpoints.

Allow access to S3 Bucket from all EC2 instances of specific Account

Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as is described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal as I would like to provide ready-to-use AWS ParallelCluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a 2 step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
        }
    ]
}
You can find more examples of bucket policies in the AWS documentation here.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope down the permissions to actions such as s3:GetObject etc. There is a website to help you generate these policies here. s3:* will contain delete permissions which if used incorrectly could result in nasty surprises.
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands against your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises the call on their side.
The policy that needs to be attached to the instance role should look something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
        }
    ]
}
Keep in mind that just because this policy grants s3:*, it doesn't mean they can do anything on your bucket unless you also have s3:* in your bucket policy. Actions of this policy will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket; as long as you define a bucket policy that trusts their account, the rest is on them. They can create an EC2 instance role and grant it permissions to your bucket, or an IAM user and grant it access to your bucket. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an EC2 instance, it's bad practice to use access keys; it should use an EC2 instance role instead.
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, then you can add a User Data script that will execute when the instance starts. This can download the files from S3 to the instance.
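As a rough sketch, a User Data (or ParallelCluster post_install) step could run a short Python/boto3 snippet like the one below, relying on the attached instance role for credentials; the bucket name and destination directory are placeholders:

import os
import boto3

BUCKET = 'your-shared-bucket'   # placeholder bucket name
DEST = '/opt/shared-data'       # where to place the files on the instance

s3 = boto3.client('s3')  # credentials come from the attached instance role
paginator = s3.get_paginator('list_objects_v2')

os.makedirs(DEST, exist_ok=True)
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get('Contents', []):
        if obj['Key'].endswith('/'):
            continue  # skip "folder" placeholder objects
        target = os.path.join(DEST, obj['Key'])
        os.makedirs(os.path.dirname(target) or DEST, exist_ok=True)
        s3.download_file(BUCKET, obj['Key'], target)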

Issue binding API Gateway to DynamoDB

I'm trying to create a simple API Gateway on top of a DynamoDB table to add an endpoint for users to access the data through it.
Integration type: AWS Service
AWS Region: eu-west-1
AWS Service: DynamoDB
AWS Subdomain:
HTTP method: GET
Action: ListResources
Execution role: [iam arn]
Credentials cache: Do not add caller credentials to cache key
Content Handling: Passthrough
When I click the Test button I get:
Execution failed due to configuration error: API Gateway does not have permission to assume the provided role
I checked here and there but have no clue about the problem. I tried changing the permissions of the IAM user and gave it full DynamoDB and API Gateway rights, but no change.
It seems the issue is linked to the fact that I used an IAM user instead of an IAM role. I'll leave that here; maybe it will help.
First, update the execution role to use a role rather than an IAM user. Then, ensure that the role has permissions for all of the DynamoDB operations and resources that you want to access. Finally, grant API Gateway permissions to assume that role by adding an IAM trust policy as shown below.
From section "API Gateway Permissions Model for Invoking an API" on documentation page here
When an API is integrated with an AWS service (for example, AWS Lambda) in the back end, API Gateway must also have permissions to access integrated AWS resources (for example, invoking a Lambda function) on behalf of the API caller. To grant these permissions, create an IAM role of the Amazon API Gateway type. This role contains the following IAM trust policy that declares API Gateway as a trusted entity that is permitted to assume the role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "apigateway.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
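As a rough sketch, such a role could be created with boto3 as below, using the trust policy above plus an inline permissions policy scoped to one table; the role name, policy name, table ARN, and action list are illustrative placeholders, not values taken from the question:

import json
import boto3

iam = boto3.client('iam')

# Trust policy that lets API Gateway assume the role (as quoted above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "apigateway.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

role = iam.create_role(
    RoleName='apigw-dynamodb-role',  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy))

# Permissions the integration actually needs, scoped to one table (placeholder ARN).
iam.put_role_policy(
    RoleName='apigw-dynamodb-role',
    PolicyName='dynamodb-read',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
            "Resource": "arn:aws:dynamodb:eu-west-1:ACCOUNT_ID:table/YOUR_TABLE"
        }]
    }))

# Use this ARN as the Execution role in the API Gateway integration settings.
print(role['Role']['Arn'])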