I am trying to create an AWS VPC Endpoint Service (PrivateLink) where I can add Principals to those that already exist. Here is my current code
resource "aws_vpc_endpoint_service" "privatelink" {
provider = aws.customer
acceptance_required = true
network_load_balancer_arns = [aws_lb.nlb.arn]
}
resource "aws_vpc_endpoint_service_allowed_principal" "addition" {
provider = aws.customer
vpc_endpoint_service_id = aws_vpc_endpoint_service.privatelink.id
principal_arn = var.consumer_principal_arn
}
That works great for the one Principal specified in the variable but overwrites the existing Principal when I run it again with a different Principal. What I want is to append zero or more Principals to the list of existing Principals, each time I do a terraform apply. For example, the first time I run it, I specify Principal X. I run it again, specifying Principal Y. Now the list of allowed Principals is X and Y.
You need to create a separate aws_vpc_endpoint_service_allowed_principal resource for each additional ARN. That way you can revoke individual principals in the future without destroying the other associations. You can also generate these resources from a collection of principal ARNs using count or for_each. With count and a list, however, removing a principal from the middle of the list shifts the indices, so the associations for every principal after the removed one are recreated and must be accepted again; for_each keys each association by ARN and avoids that problem.
You can't simply edit the existing resource definition to point at a different principal. Terraform sees that as an update to the resource named "addition" and replaces the existing association instead of adding a new one. Instead, you need to add another aws_vpc_endpoint_service_allowed_principal resource.
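The for_each variant described above could be sketched like this (the variable name is an assumption; because associations are keyed by ARN, removing one entry never recreates the others):

```hcl
variable "consumer_principal_arns" {
  type    = set(string)
  default = []
}

resource "aws_vpc_endpoint_service_allowed_principal" "addition" {
  provider                = aws.customer
  for_each                = var.consumer_principal_arns
  vpc_endpoint_service_id = aws_vpc_endpoint_service.privatelink.id
  principal_arn           = each.value
}
```

Each apply then converges on exactly the set of principals in the variable, so "appending" is just adding an ARN to the set.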
In an attempt to give an instance access to a specific folder in an s3 bucket, I've got this in a policy:
"Resource": "arn:aws:s3:::My_Bucket/db_backups/${aws:ResourceTag/Name}/*"
It doesn't work. Documentation for using tags like this is here: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html
So perhaps what I'm trying to do is not possible.
But I'd rather not create a new role for each instance that needs access to a folder. Is there some other way I can pull this off?
You can use IAM policy variables (see "IAM policy elements: Variables and tags" in the AWS Identity and Access Management documentation) to write a single policy that applies to multiple IAM users / IAM roles.
As shown in that documentation, the aws:userid variable expands to role-id:ec2-instance-id. Thus, each instance could be granted access to paths that match its role and instance, such as:
s3://bucketname/AROAU2DKSKXYQTOSDGTGX:i-abcd1234/*
The aws:ResourceTag variable is not defined for S3 resources. S3 only exposes object tags to policies, via the s3:ExistingObjectTag/<key> condition key.
I had to do this for a recent engagement, and one of the things that made it difficult is that not all services supply their tags as a policy variable, and those that do use different names. The aws:ResourceTag variable is only provided for KMS and a handful of other services.
Regardless, I'm not sure your statement will work as written. What I think you actually want is aws:PrincipalTag/Name, i.e. "Resource": "arn:aws:s3:::My_Bucket/db_backups/${aws:PrincipalTag/Name}/*". This embeds the Name tag of the IAM principal (user or role) that is accessing the resource.
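If the policy is managed with Terraform, the suggested aws:PrincipalTag statement could be sketched as follows (the bucket name and actions are taken from the question; note that aws_iam_policy_document uses &{...} to emit a literal ${...} so AWS, not Terraform, resolves the variable):

```hcl
data "aws_iam_policy_document" "per_instance_backups" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject", "s3:PutObject"]
    # &{...} renders as ${...} in the generated JSON policy, leaving the
    # aws:PrincipalTag variable for AWS to evaluate at request time.
    resources = ["arn:aws:s3:::My_Bucket/db_backups/&{aws:PrincipalTag/Name}/*"]
  }
}
```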
My objective is to be able to set up an IAM Role which can assume a role of a certain IAM user. After the creation of the role, I would like to come back later and modify this role by adding external IDs to establish a trust relationship. Let me illustrate with an example:
Let's say I want to create role:
resource "aws_iam_role" "happy_role" {
name = "happy-role"
assume_role_policy = data.aws_iam_policy_document.happy_assume_role_policy.json
}
Let's also assume that happy_assume_role_policy looks something like:
data "aws_iam_policy_document" "happy_assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.some_iam_user_arn]
}
}
}
Now, I will use the created role to create an external integration. But once I am done creating that integration, I want to go back to the role I originally created and modify its assume role policy. So now I want to add a condition to the assume role policy and make it look like:
data "aws_iam_policy_document" "happy_assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.snowflake_iam_user_arn]
}
condition {
test = "StringEquals"
values = [some_integration.integration.external_id]
variable = "sts:ExternalId"
}
}
}
In other words, my workflow should be like:
Create role without assume conditions
Create an integration with that role
Take the ID from the created integration and go back to the created role and add a condition on it
Edit:
By "integration" I mean something like this. Once an integration is created, it outputs an ID, and I need to take that ID and feed it back into the assume role policy I originally created. That should happen every time I add a new integration.
I first tried to create two IAM roles, one for managing the integration creation and another for managing the integration itself. That ran without circular reference errors; however, I was not able to establish a connection from the storage to the database, because the same IAM role has to both create and manage the integration.
This is what I ended up doing (still not what I'd call an accepted way to do it, IMO). I created the role (using a targeted apply) like:
resource "aws_iam_role" "happy_role" {
name = "happy-role"
assume_role_policy = data.aws_iam_policy_document.basic_policy.json
}
And used a basic assume role policy (without conditions). And then for the next run, I applied (without targeting) and it worked.
I followed the approach mentioned here: How to create a Snowflake Storage Integration with AWS S3 with Terraform?
As part of the storage integration creation, just provide a role ARN that you construct manually, before the role resource exists; Terraform won't complain. Then create the role with an assume role policy that references the external ID and IAM user ARN exported by the storage integration.
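That approach could be sketched roughly as follows. The integration resource and its attribute names are assumptions based on the Snowflake provider; verify them against the provider documentation before relying on this:

```hcl
# Hand-build the role ARN before the role exists; Terraform does not
# validate that the ARN resolves to a real role.
data "aws_caller_identity" "current" {}

locals {
  happy_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/happy-role"
}

# ... pass local.happy_role_arn to the storage integration resource ...

# The role's trust policy can then reference the integration's outputs
# directly, with no circular dependency on the role resource itself.
data "aws_iam_policy_document" "happy_assume_role_policy" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      type = "AWS"
      # Attribute names below are assumptions about the integration resource.
      identifiers = [snowflake_storage_integration.this.storage_aws_iam_user_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [snowflake_storage_integration.this.storage_aws_external_id]
    }
  }
}
```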
Here is the terraform code I have used to create a service account and bind a role to it:
resource "google_service_account" "sa-name" {
account_id = "sa-name"
display_name = "SA"
}
resource "google_project_iam_binding" "firestore_owner_binding" {
role = "roles/datastore.owner"
members = [
"serviceAccount:sa-name@${var.project}.iam.gserviceaccount.com",
]
depends_on = [google_service_account.sa-name]
}
The above code worked great... except it removed the datastore.owner role from every other service account in the project that the role was previously assigned to. We have a single project used by many teams, with service accounts managed by different teams. My Terraform code only includes our team's service accounts, so we could end up breaking other teams' service accounts.
Is there another way to do this in terraform?
This of course can be done via GCP UI or gcloud cli without any issue or affecting other SAs.
From the Terraform docs, google_project_iam_binding is authoritative: it "sets the IAM policy for the project and replaces any existing policy already attached." That means it completely replaces the list of members for the given role.
To just add a role to a new service account, without editing everybody else from that role, you should use the resource "google_project_iam_member":
resource "google_service_account" "sa-name" {
account_id = "sa-name"
display_name = "SA"
}
resource "google_project_iam_member" "firestore_owner_binding" {
project = <your_gcp_project_id_here>
role = "roles/datastore.owner"
member = "serviceAccount:${google_service_account.sa-name.email}"
}
One extra change from your sample: using the service account resource's generated email attribute removes the need for the explicit depends_on and avoids errors from a mistyped email.
Terraform can infer the dependency from the use of an attribute of another resource. Check the docs here to understand this behavior better.
This is a common problem with Terraform: either you manage everything with it, or nothing. Anything in between can lead to unexpected behavior.
If you want to use Terraform, you have to import the existing bindings into the tfstate. Here is the documentation for the binding, and, of course, you have to add all the accounts to the Terraform file. If not, the bindings will still be removed, but this time you will see the deletion in the terraform plan.
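On Terraform 1.5+, adopting the existing authoritative binding can be done declaratively with an import block; the project ID below is a placeholder assumption:

```hcl
# Adopt the existing project-level binding into state instead of letting
# Terraform clobber it. Import ID format: "<project-id> <role>".
import {
  to = google_project_iam_binding.firestore_owner_binding
  id = "my-gcp-project roles/datastore.owner"
}
```

After the import, the members list in configuration must include every existing member of the role, or the next apply will remove the missing ones (visibly, in the plan).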
I want to attach one of the pre-existing AWS managed roles to a policy, here's my current code:
resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" {
role       = aws_iam_role.sto-test-role.name
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
Is there a better way to model the managed policy and then reference it instead of hardcoding the ARN? It just seems like whenever I hardcode ARNs / paths or other stuff like this, I usually find out later there was a better way.
Is there something already existing in Terraform that models managed policies? Or is hardcoding the ARN the "right" way to do it?
The IAM Policy data source is great for this. A data resource is used to describe data or resources that are not actively managed by Terraform, but are referenced by Terraform.
For your example, you would create a data resource for the managed policy as follows:
data "aws_iam_policy" "ReadOnlyAccess" {
arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
The name of the data source, ReadOnlyAccess in this case, is entirely up to you. For managed policies I use the same name as the policy name for the sake of consistency, but you could just as easily name it readonly if that suits you.
You would then attach the IAM policy to your role as follows:
resource "aws_iam_role_policy_attachment" "sto-readonly-role-policy-attach" {
role       = aws_iam_role.sto-test-role.name
policy_arn = data.aws_iam_policy.ReadOnlyAccess.arn
}
When using values that Terraform itself doesn't directly manage, you have a few options.
The first, simplest option is to just hard-code the value, as you did here. This is a straightforward answer if you expect that the value will never change. Given that these "canned policies" are documented, built-in AWS features, they likely fit this criterion.
The second option is to create a Terraform module and hard-code the value into that, and then reference this module from several other modules. This allows you to manage the value centrally and use it many times. A module that contains only outputs is a common pattern for this sort of thing, although you could also choose to make a module that contains an aws_iam_role_policy_attachment resource with the role set from a variable.
The third option is to place the value in some location that Terraform can retrieve values from, such as Consul, and then retrieve it from there using a data source. With only Terraform in play, this ends up being largely equivalent to the second option, though it means Terraform will re-read it on each refresh rather than only when you update the module using terraform init -upgrade, and thus this could be a better option for values that change often.
The fourth option is to use a specialized data source that can read the value directly from the source of truth. At the time this was written, Terraform had no data source for fetching information on AWS managed policies (the aws_iam_policy data source shown in another answer now fills that gap), but this approach can be used to fetch other AWS-defined data such as the AWS IP address ranges, service ARNs, etc.
Which of these is appropriate for a given situation will depend on how commonly the value changes, who manages changes to it, and on the availability of specialized Terraform data sources.
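The second option (an outputs-only module) could be sketched like this; the module path and output name are purely illustrative:

```hcl
# modules/managed-policies/outputs.tf
output "read_only_access_arn" {
  value = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

# A caller would then reference it wherever the ARN is needed:
#
# module "managed_policies" {
#   source = "./modules/managed-policies"
# }
#
# policy_arn = module.managed_policies.read_only_access_arn
```

This centralizes the hard-coded value so a future change only has to be made in one place.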
I ran into a similar situation, and I don't want to use the ARN in my Terraform script for two reasons:
If we do it in the web console, we don't really look at the ARN; we search for the policy name, then attach it to the role
The ARN is not easy to remember; it is not meant for humans
I would rather use the policy name than the ARN. Here is my example:
# Get the policy by name
data "aws_iam_policy" "required-policy" {
name = "AmazonS3FullAccess"
}
# Create the role
resource "aws_iam_role" "system-role" {
name = "data-stream-system-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
}
# Attach the policy to the role
resource "aws_iam_role_policy_attachment" "attach-s3" {
role = aws_iam_role.system-role.name
policy_arn = data.aws_iam_policy.required-policy.arn
}
The CloudFormation documentation describes the following relationships between the parts of an IAM role specification:
Service (Lambda in my case)
  has one or more Roles
    which contain one or more Policies
      which contain a Policy Document
        which contains one or more Statements
          which contain one or more {Effect, [Action], Resource} objects
            which specify one or more Actions
Suppose I want to give a [Role] permission to do an [Action]. How do I determine where in the above hierarchy the permission should be specified?
In my specific case, I want to add s3:GetObject to a role for a Lambda.
Should I
create a new Role?
create a new Policy in an existing Role?
add a new statement to an existing Policy?
add a new Action to an existing Statement (using Resource:'*') ?
Looking for guidance as to when each of the above would apply...
You can do any of the above; the only hard requirement is that, after your change, a policy document attached to the role contains a statement that allows the s3:GetObject action on the relevant resource. In practice, prefer the least disruptive option: reuse the Lambda's existing role, and add a statement (or extend one) rather than creating a new role, scoping Resource to the specific bucket/prefix instead of '*'.
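Expressed in Terraform for consistency with the rest of this page (a CloudFormation inline policy is structurally identical), a minimal sketch might look like this; the policy name, role reference, and bucket are assumptions:

```hcl
resource "aws_iam_role_policy" "lambda_s3_read" {
  name = "lambda-s3-getobject"       # assumed name
  role = aws_iam_role.lambda_role.id # assumed reference to the Lambda's role
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "s3:GetObject"
      Resource = "arn:aws:s3:::my-bucket/*" # scope to the bucket, not "*"
    }]
  })
}
```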