Grant permissions between AWS resources with CloudFormation - amazon-web-services

I would like to have a CloudFormation template create an EC2 instance and give that instance access to an S3 bucket.
One way is to have the template create an IAM user with proper permissions and use its access key to grant access.
But what if I don't want to give that user access to the IAM service?
Is there a way to have that user deploy this template without IAM?
UPDATE:
I want to be able to just share that template, so I am wondering if it is possible to avoid a dependency on pre-existing IAM resources (roles, policies, etc.).

The common method of granting permissions to an instance is an instance profile: you create a role with all the required permissions, assign that role to an instance profile, and then assign the profile to any instance that needs it.
You can do this with CloudFormation:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "myEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-205fba49",
                "InstanceType": "t2.micro",
                "IamInstanceProfile": {
                    "Ref": "RootInstanceProfile"
                }
            }
        },
        "MyRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [ {
                        "Effect": "Allow",
                        "Principal": {
                            "Service": [ "ec2.amazonaws.com" ]
                        },
                        "Action": [ "sts:AssumeRole" ]
                    } ]
                },
                "Path": "/"
            }
        },
        "RolePolicies": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "s3",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [ {
                        "Effect": "Allow",
                        "Action": [ "s3:PutObject", "s3:PutObjectAcl" ],
                        "Resource": [ "arn:aws:s3:::examplebucket/*" ]
                    } ]
                },
                "Roles": [ { "Ref": "MyRole" } ]
            }
        },
        "RootInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {
                "Path": "/",
                "Roles": [ { "Ref": "MyRole" } ]
            }
        }
    }
}
If you want to avoid giving the user deploying this template IAM access, you can create the instance profile before deploying the template and reference the already-existing instance profile in the template. I haven't tried that yet, but it seems that should only require iam:PassRole on the profile's role (plus ec2:AssociateIamInstanceProfile if you attach the profile to a running instance), and you should be able to constrain that to just that one specific profile.
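A sketch of that variant (the instance profile is assumed to be created outside the template; its name is passed in as a plain string parameter, so the template itself contains no IAM resources):
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "ExistingInstanceProfile": {
            "Type": "String",
            "Description": "Name of an instance profile created outside this template"
        }
    },
    "Resources": {
        "myEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-205fba49",
                "InstanceType": "t2.micro",
                "IamInstanceProfile": { "Ref": "ExistingInstanceProfile" }
            }
        }
    }
}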

Depends on what you mean by IAM service.
You can create IAM user access keys that give permissions to specific AWS services and no others. Access keys do not allow console access (that requires login credentials or federation).
For your use case your user will need at a minimum:
Permission to use CloudFormation to execute your template.
Permission to create the EC2 instance.
These permissions are defined in a policy that you add to the IAM user in the AWS Management Console. You can create users that cannot log into the console. Then you generate the Access Keys that the user will use in their application, AWS CLI, etc.
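As a rough sketch, such a user's policy might look like the following (the action list is illustrative, not exhaustive; if the template also creates IAM resources like the one above, the user additionally needs IAM permissions, and launching an instance with an instance profile also needs iam:PassRole):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackEvents",
                "ec2:RunInstances",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        }
    ]
}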
Overview of IAM Policies

Related

Cloudformation template to attach existing policy to existing IAM role

I want to attach an AWS managed policy to an existing role. I am achieving this using the following template:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation template to modify Role",
    "Parameters": {
        "MyRole": {
            "Type": "String",
            "Default": "MyRole",
            "Description": "Role to be modified"
        }
    },
    "Resources": {
        "S3FullAccess": {
            "Type": "AWS::IAM::ManagedPolicy",
            "Properties": {
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [ {
                        "Effect": "Allow",
                        "Action": [
                            "s3:*",
                            "s3-object-lambda:*"
                        ],
                        "Resource": "*"
                    } ]
                },
                "Roles": [
                    { "Ref": "MyRole" }
                ]
            }
        }
    }
}
This template will create a policy granting S3 full access and attach it to MyRole. But I do not want to create a new policy. If I want to use the managed policy AWS already provides for S3 full access, how can I do that?
And if I use this template:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation template to modify Role",
    "Resources": {
        "IAMRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "Path": "/",
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/ReadOnlyAccess"
                ],
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [ {
                        "Action": "sts:AssumeRole",
                        "Effect": "Allow",
                        "Principal": {
                            "AWS": "*"
                        }
                    } ]
                },
                "RoleName": "RoleName"
            }
        }
    }
}
This will attempt to create a new role and attach the ReadOnlyAccess policy to it. But if I want to attach a policy to an existing role, how do I refer to that role in the template?
You use your AWS::IAM::Role's ManagedPolicyArns property, where you just specify the ARN of the managed policy to attach.
To use an existing role in CloudFormation, you have to import it. Then you will be able to manage it from CloudFormation.
In general, the CloudFormation service is for creating resources. There is no native support for modifying already-created resources unless you import them.
If you don't want to import them, then you have the option to write a CloudFormation custom resource. You can create a Lambda-function-backed custom resource, passing in the ARNs of the IAM policy and the IAM role you want to attach the policy to, and have the function call the IAM AttachRolePolicy API. More details are in the AWS documentation.
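A sketch of how such a custom resource might be declared (AttachPolicyFunction is a hypothetical Lambda you would write yourself, calling AttachRolePolicy on create and DetachRolePolicy on delete; the type name Custom::AttachRolePolicy is arbitrary):
{
    "AttachPolicyToExistingRole": {
        "Type": "Custom::AttachRolePolicy",
        "Properties": {
            "ServiceToken": { "Fn::GetAtt": [ "AttachPolicyFunction", "Arn" ] },
            "RoleName": "MyExistingRole",
            "PolicyArn": "arn:aws:iam::aws:policy/ReadOnlyAccess"
        }
    }
}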

What is wrong with this AWS EFS policy?

I'm pretty new at working with AWS and I'm just experimenting and trying to learn. I have an EC2 instance with an IAM role attached, and an EFS file system with the policy below in place. My intent was to restrict mounting of the access point to EC2 instances with that IAM role attached.
But when I try to mount from the EC2 instance I get access denied.
mount.nfs4: access denied by server while mounting 127.0.0.1:
If I change the principal to "AWS": "*", I can mount the access point. According to the docs I can specify the IAM role used by the EC2 instance as the principal, but it doesn't seem to work.
I suspect my problem is somehow with the role attached to the EC2 instance. The role has the EFS client actions, but when I look at the role in the IAM console and check Access Advisor, it says the role has never been accessed. So I may be doing something fundamentally wrong.
{
    "Version": "2012-10-17",
    "Id": "access-point-www",
    "Statement": [
        {
            "Sid": "access-point-webstorage",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12345678:role/wwwservers"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:12345678:file-system/fs-987654da",
            "Condition": {
                "StringEquals": {
                    "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:12345678:access-point/fsap-01ffffbfb38217bcd"
                }
            }
        }
    ]
}
Did you enable IAM mounting? Otherwise AWS tries to mount the EFS volume as an anonymous principal.
For EC2, as in your case, you can just provide -o iam as an option to your call to mount.
See: https://docs.amazonaws.cn/en_us/efs/latest/ug/efs-mount-helper.html#mounting-IAM-option
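For example, with the amazon-efs-utils mount helper installed (a sketch: fs-987654da is the file system ID from your policy, /mnt/efs is an assumed mount point, and IAM authorization requires TLS):
sudo mount -t efs -o tls,iam fs-987654da:/ /mnt/efs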
For ECS task definitions, this is done via aws_ecs_task_definition.volume.efs_volume_configuration.authorization_config, like this (Terraform):
resource "aws_ecs_task_definition" "service" {
family = "something"
container_definitions = file("something.json")
volume {
name = "service-storage"
efs_volume_configuration {
file_system_id = aws_efs_file_system.efs[0].id
root_directory = "/"
transit_encryption = "ENABLED"
authorization_config {
iam = "ENABLED"
}
}
}
}
iam - (Optional) Whether or not to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration. Valid values: ENABLED, DISABLED. If this parameter is omitted, the default value of DISABLED is used.
This is relevant if your CloudTrail shows errors where an anonymous principal tries to mount your EFS. Such errors look something like this:
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AWSAccount",
        "principalId": "",
        "accountId": "ANONYMOUS_PRINCIPAL"
    },
    "eventSource": "elasticfilesystem.amazonaws.com",
    "eventName": "NewClientConnection",
    "sourceIPAddress": "AWS Internal",
    "userAgent": "elasticfilesystem",
    "errorCode": "AccessDenied",
    "readOnly": true,
    "resources": [
        {
            "accountId": "XXXXXX",
            "type": "AWS::EFS::FileSystem",
            "ARN": "arn:aws:elasticfilesystem:eu-west-1:XXXXXX:file-system/YYYYYY"
        }
    ],
    "eventType": "AwsServiceEvent",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "XXXXXX",
    "sharedEventID": "ZZZZZZZZ",
    "serviceEventDetails": {
        "permissions": {
            "ClientRootAccess": false,
            "ClientMount": false,
            "ClientWrite": false
        },
        "sourceIpAddress": "nnnnnnn"
    }
}
Note: "principalId": "", and "accountId": "ANONYMOUS_PRINCIPAL"

How can I reference an existing role in my new CloudFormation template?

In my AWS account, I am building a new Cloudformation template that creates new policies, and I want to attach those to a few existing roles in the account.
Here is how I have been trying to reference them:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Names of existing roles to which the restrictive policies need added",
    "Parameters": {
        "AdditionalExecutionRoleParameter": {
            "Type": "AWS::IAM::Role",
            "Default": "CloudOps"
        }
    },
    "Resources": { (and so on)
Then, down in the section below the new policies, I have been trying to reference these existing roles ("AdditionalExecutionRoleParameter" in this case) and attach the policies to them using the Roles property. However, I keep getting a "failed to retrieve external values" error when trying to deploy the CloudFormation template. I've tried inputting "CloudOps", which is the role name, as the parameter "Default", and I've also tried inputting the role ARN there; nothing is working.
Not all AWS resource types are supported in the parameter type field.
The full list is at AWS-Specific Parameter Types. It includes, for example:
AWS::EC2::VPC::Id
List<AWS::EC2::VPC::Id>
AWS::SSM::Parameter::Name
This list does not include AWS::IAM::Role (or any IAM resources).
If you're simply trying to associate a new IAM policy with an existing named IAM role, then note that the AWS::IAM::Policy construct has a Roles property and you should supply a list of role names to apply the policy to. It requires role names, not role ARNs, so you don't need the account ID.
If you do ever need the account ID then it's available as a pseudo-parameter and you can get it from "Ref" : "AWS::AccountId", but I don't think you need it here.
Here's a JSON example of how to create a new IAM policy (allowing s3:Get* on mybucket/*) and associate it with an existing IAM role, whose name you supply as a parameter to the stack:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Add policy to role test",
    "Parameters": {
        "TheRoleName": {
            "Type": "String",
            "Default": "CloudOps",
            "Description": "Name of role to associate policy with"
        }
    },
    "Resources": {
        "ThePolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "s3-read",
                "PolicyDocument": {
                    "Statement": {
                        "Effect": "Allow",
                        "Action": [
                            "s3:Get*"
                        ],
                        "Resource": [ "arn:aws:s3:::mybucket/*" ]
                    }
                },
                "Roles": [
                    { "Ref": "TheRoleName" }
                ]
            }
        }
    }
}
Well... what I ended up doing is something as simple as this, which works fine...
"Parameters": {
"RoleNameRoleParameter": {
"Type": "String",
"Default": "RoleNameRole"

AWS CodePipeline with a CodeCommit source repository from another account

Is it possible to create a CodePipeline whose source is a CodeCommit repository in another account?
I just had to do this, I'll explain the process.
Account C is the account with your CodeCommit repository.
Account P is the account with your CodePipeline... pipelines.
In Account P:
Create an AWS KMS encryption key and give Account C access to it (guide here in the prerequisite step). You will also need to add the CodePipeline role, and if you have CodeBuild and CodeDeploy steps, add those roles too.
In your CodePipeline artifacts S3 bucket you need to grant Account C access. Go to the Bucket Policy and add:
{
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTC_ID:root"
    },
    "Action": [
        "s3:Get*",
        "s3:Put*"
    ],
    "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
},
{
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTC_ID:root"
    },
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
}
Change ACCOUNTC_ID to the account ID of Account C, and change YOUR_BUCKET_NAME to the CodePipeline artifact S3 bucket name.
Add a policy to your CodePipeline service role so it can assume the role in Account C that grants access to the CodeCommit repositories:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::ACCOUNTC_ID:role/*"
        ]
    }
}
Again, change ACCOUNTC_ID to the account ID of Account C.
In Account C:
Create an IAM policy that lets Account P access the CodeCommit resources and also the KMS key, so it can encrypt artifacts with the same key as the rest of your CodePipeline:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME_IN_ACCOUNTP_FOR_CODE_PIPELINE/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:ListBranches",
                "codecommit:ListRepositories"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:GenerateDataKey*",
                "kms:Encrypt",
                "kms:ReEncrypt*",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:YOUR_KMS_ARN"
            ]
        }
    ]
}
Replace bucket name and KMS ARN in the above policy. Save the policy as something like CrossAccountPipelinePolicy.
Create a role for cross-account access and attach the above policy as well as the AWSCodeCommitFullAccess policy. Make sure to set the trusted entity to the account ID of Account P.
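The trust relationship on that role would look something like this (a standard cross-account trust policy; ACCOUNTP_ID is Account P's account ID):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNTP_ID:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}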
In AWS CLI
You can't do this bit in the console, so you have to use the AWS CLI. This is what makes your pipeline in Account P assume the role in the Source step and dump the source into the S3 bucket for all your next steps to use.
aws codepipeline get-pipeline --name NameOfPipeline > pipeline.json
Modify the pipeline json so it looks a bit like this and replace the bits that you need to:
"pipeline": {
"name": "YOUR_PIPELINE_NAME",
"roleArn": "arn:aws:iam::AccountP_ID:role/ROLE_NAME_FOR_CODE_PIPELINE",
"artifactStore": {
"type": "S3",
"location": "YOUR_BUCKET_NAME",
"encryptionKey": {
"id": "arn:aws:kms:YOUR_KMS_KEY_ARN",
"type": "KMS"
}
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 1,
"roleArn": "arn:aws:iam::AccountC_ID:role/ROLE_NAME_WITH_CROSS_ACCOUNT_POLICY",
"configuration": {
"BranchName": "master",
"PollForSourceChanges": "false",
"RepositoryName": "YOURREPOSITORYNAME"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"inputArtifacts": []
}
]
},
Update the pipeline with aws codepipeline update-pipeline --cli-input-json file://pipeline.json
Verify it works by running the pipeline.
You can deploy resources using a pipeline with a CodeCommit repository in another account.
Let's say you have Account A, where your CodeCommit repository sits, and Account B, where your CodePipeline sits.
Configure the following in Account B:
You need to create a custom KMS key, because the key policy of the default AWS-managed key cannot be edited to grant cross-account access. You can use Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account if you need assistance with creating the CMK. Add the CodePipeline service role to the KMS key policy to allow the pipeline to use it.
Set up an event bus for receiving events from the other account: go to CloudWatch → Event Buses under the Events section → Add Permission → enter Account A's AWS account ID → Add. For more details, check Creating an Event Bus.
Add the following policy to the S3 pipeline artifact store:
{
    "Version": "2012-10-17",
    "Id": "PolicyForKMSAccess",
    "Statement": [
        {
            "Sid": "AllowAccessFromAAccount",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:root" },
            "Action": [ "s3:Get*", "s3:Put*" ],
            "Resource": "arn:aws:s3:::NAME-OF-THE-BUCKET/*"
        },
        {
            "Sid": "AllowListFromAAccount",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:root" },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::NAME-OF-THE-BUCKET"
        }
    ]
}
Edit the pipeline's IAM role so it can assume the role in Account A, as follows:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::ACCOUNT_A_ID:role/*"
        ]
    }
}
Create a CloudWatch Event Rule to trigger the pipeline on the master branch of the CodeCommit repository in Account A. Add the CodePipeline's ARN as a target of this rule.
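A sketch of the event pattern for such a rule (region and repository name are placeholders):
{
    "source": [ "aws.codecommit" ],
    "detail-type": [ "CodeCommit Repository State Change" ],
    "resources": [ "arn:aws:codecommit:us-east-1:ACCOUNT_A_ID:YOUR_REPO_NAME" ],
    "detail": {
        "event": [ "referenceCreated", "referenceUpdated" ],
        "referenceType": [ "branch" ],
        "referenceName": [ "master" ]
    }
}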
Now, do the following in Account A:
Create a cross-account IAM role with the following attached:
a) The AWSCodeCommitFullAccess managed policy.
b) A trust relationship that allows Account B to assume this role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_B_ID:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
c) An inline policy for KMS, CodeCommit, and S3 access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:Put*"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME_IN_B_FOR_CODE_PIPELINE_ARTIFACTS/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ],
            "Resource": [
                "arn:aws:kms:YOUR_KMS_ARN_FROM_B_ACCOUNT"
            ]
        }
    ]
}
Then update your pipeline as @Eran Medan suggested above.
For more details, please visit AWS CodePipeline with a Cross-Account CodeCommit Repository
Also, please note that I have granted more permissions than required (for example codecommit:* and kms:*); you can tighten them as per your needs.
I hope this will help.
Yes, it should be possible. Follow these instructions: http://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

Access Denied from S3 Bucket

In Account A I created an S3 bucket with CloudFormation, and a CodeBuild project builds an artifact and uploads it to this bucket. In Account B I try to create a stack with CloudFormation and use the artifact from Account A's bucket to deploy my Lambda function. But I get an Access Denied error. Does anyone know the solution? Thanks...
"TestBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"AccessControl": "BucketOwnerFullControl"
}
},
"IAMPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "TestBucket"
},
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
},
"Action": [
"s3:GetObject"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
},
"/*"
]
]
},
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
}
]
]
}
]
}
]
}
}
}
Assuming that the xxxxx in the statement below is the account number of Account B:
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
You are saying that this bucket grants access to Account B on the basis of the IAM permissions/policies held in Account B's IAM service.
So essentially all the users/instance profiles/policies in Account B that have explicit S3 access will be able to access this bucket in Account A. This means that perhaps the IAM policy you are attaching to the Lambda role in Account B doesn't have explicit S3 access.
I would suggest giving S3 access to your Lambda function, and this should work.
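For example, a minimal identity policy on the Lambda execution role in Account B might look like this (the bucket name is a placeholder for the Account A bucket):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::account-a-artifact-bucket/*"
        }
    ]
}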
Please be aware that, in the future, if you want to write to the S3 bucket of Account A from Account B, you will have to make sure you put the bucket-owner-full-control ACL on the objects so that they are available across all the accounts.
Example:
Using CLI:
$ aws s3api put-object --acl bucket-owner-full-control --bucket my-test-bucket --key dir/my_object.txt --body /path/to/my_object.txt
Instead of "arn:aws:iam::xxxxxxxxxxxx:root" granting access to the root role only, try granting access to all identities in the account by specifying just the account ID as the item within the Principal/AWS object: "xxxxxxxxxxxx".
See Using a Resource-based Policy to Delegate Access to an Amazon S3 Bucket in Another Account for more details.