I have a Terraform script that provisions a Kubernetes deployment and a few ClusterRoles and ClusterRoleBindings via Helm.
Now I need to edit one of the provisioned ClusterRoles via Terraform and add another block of permissions. Is there a way to do this, or would I need to recreate a similar resource from scratch?
This is the block that creates the respective deployment for the efs-csi-driver.
resource "helm_release" "aws-efs-csi-driver" {
name = "aws-efs-csi-driver"
chart = "aws-efs-csi-driver"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
version = "2.x.x"
namespace = "kube-system"
timeout = 3600
values = [
file("${path.module}/config/values.yaml"),
]
}
Somehow I need to modify https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/45c5e752d2256558170100138de835b82d54b8af/deploy/kubernetes/base/controller-serviceaccount.yaml#L11 by adding a couple more permission blocks. Is there a way I can patch it (or completely overlay it)?
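One way to sidestep patching the Helm-managed ClusterRole is to leave it alone and grant the extra permissions through a separate, additive ClusterRole bound to the same controller ServiceAccount. Below is a minimal sketch using the kubernetes provider; the ServiceAccount name efs-csi-controller-sa and the example rule are assumptions that need to match your chart values.

resource "kubernetes_cluster_role_v1" "efs_csi_extra" {
  metadata {
    name = "efs-csi-extra-permissions"
  }

  # Example rule only; replace with the permission block you actually need.
  rule {
    api_groups = [""]
    resources  = ["persistentvolumeclaims"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding_v1" "efs_csi_extra" {
  metadata {
    name = "efs-csi-extra-permissions"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role_v1.efs_csi_extra.metadata[0].name
  }

  # Assumed controller ServiceAccount name; check the chart's values.
  subject {
    kind      = "ServiceAccount"
    name      = "efs-csi-controller-sa"
    namespace = "kube-system"
  }
}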
Context: I had issues with the logs in EC2 and am looking into expanding the volume for now, while others check on how to address the root cause.
I'm able to add the storage, but I'm not sure how to properly configure it so the app can use the new volume for logging. The main goal is to expand the storage for application logging.
I'm using Terraform to manage my AWS resources, with modules that set up aws_elastic_beanstalk_environment with the solution_stack_name property. To expand the storage, I added the following:
main.tf
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "BlockDeviceMappings"
  value     = var.volumeSize
  resource  = ""
}
vars.tf
variable "volumeSize" {
  default = "/dev/sdj=:32:true:gp2" # <device name>=<snapshot id>:<size>:<delete on termination>:<volume type>
}
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-autoscalinglaunchconfiguration
https://www.geeksforgeeks.org/how-to-attach-ebs-volume-in-ec2-instance/
I have a requirement to add tags to all ECS services, task definitions, and tasks.
I just added the block below to my existing code. When a new service is created, the tags get added and propagated, which is good. But when I tried to add tags to an existing ECS service, it recreated the same service by destroying the existing one.
How can I add tags without recreating the existing ECS service, while still propagating the tags to tasks when they are rotated?
tags = {
  Name  = local.name_env
  name2 = local.name2
  owner = var.sowner
  env   = var.env
}

propagate_tags = "SERVICE"
I think you can use the terraform state commands for this. Just add the tag manually in ECS, then add the same thing in the Terraform state file; I guess it should work.
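A sketch of that idea, but tagging through the AWS CLI and the normal workflow rather than hand-editing the state file (the ARN, tag keys, and values are placeholders):

# Tag the existing service out of band so no replacement is needed.
aws ecs tag-resource \
  --resource-arn arn:aws:ecs:us-east-1:123456789012:service/my-cluster/my-service \
  --tags key=owner,value=team-a key=env,value=dev

# Then add the same tags (and propagate_tags = "SERVICE") to the aws_ecs_service
# block and check that the plan shows only an in-place update, not a replacement.
terraform plan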
I want to create an EKS cluster using Terraform, build custom Docker images, and then perform Kubernetes deployments on the created cluster via Terraform. I want to perform all of these tasks with a single terraform apply. But I see that the Kubernetes provider needs the cluster details at initialization. Is there a way I can achieve both cluster creation and deployment with a single terraform apply, so that once the cluster is created, the cluster details can be passed to the Kubernetes provider and the pods are deployed?
How can I achieve this?
I am doing this without trouble. Below is the pseudo-code; you just need to be careful with the way you use the depends_on attribute on resources, and try to encapsulate as much as possible.
Put the Kubernetes provider in a separate file, like kubernetes.tf:
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    # Newer AWS CLI / EKS versions may require "client.authentication.k8s.io/v1beta1".
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      module.eks.cluster_id
    ]
  }
}
Assuming you've already got the network set up, here I am relying on implicit dependencies rather than explicit ones:
module "eks" {
source = "./modules/ekscluster"
clustername = module.clustername.cluster_name
eks_version = var.eks_version
private_subnets = module.networking.private_subnets_id
vpc_id = module.networking.vpc_id
environment = var.environment
instance_types = var.instance_types
}
Creating k8s resources using the depends_on attribute.
resource "kubernetes_namespace_v1" "app" {
metadata {
annotations = {
name = var.org_name
}
name = var.org_name
}
depends_on = [module.eks]
}
I am creating a GCP Cloud SQL instance using Terraform, with a cross-region Cloud SQL replica. I am testing the DR scenario: when DR happens, I promote the read replica to a primary instance using the gcloud API (as there is no setting/resource available in Terraform to promote a replica). Because I use the gcloud command, the promoted instance and the state file are not in sync, so afterwards the promoted instance is not under Terraform control.
Cross-region replica setups become independent of the primary right after the promotion is complete. Promoting a replica is done manually and intentionally; it is not the same as high availability, where a standby instance (which is not a replica) automatically becomes the primary in case of a failure or zonal outage. You can promote the read replica manually using gcloud or the Google API, but either way the instance ends up out of sync with Terraform. So what you are looking for does not seem to be available when promoting a replica in Cloud SQL.
As a workaround, I would suggest promoting the replica to primary outside of Terraform, and then importing the resource back into the state, which would bring the state file back in sync.
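A rough sketch of that workaround (the instance name, project, and resource address are placeholders; the import ID format is project/instance):

# Promote the cross-region replica out of band.
gcloud sql instances promote-replica my-replica --project=my-project

# Drop the stale replica entry from state and re-import the promoted instance
# under whatever resource address your updated configuration gives it.
terraform state rm google_sql_database_instance.replica
terraform import google_sql_database_instance.replica my-project/my-replica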
Promoting an instance to primary is not supported by Terraform's Google Cloud Provider, but there is an issue (which you should upvote if you care) to add support for this to the provider.
Here's how to work around the lack of support in the meantime. Assume you have the following minimal setup: an instance, a database, a user, and a read replica:
resource "google_sql_database_instance" "instance1" {
name = "old-primary"
region = "us-central1"
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
resource "google_sql_database_instance" "instance2" {
name = "new-primary"
master_instance_name = google_sql_database_instance.instance1.name
region = "europe-west4"
database_version = "POSTGRES_14"
replica_configuration {
failover_target = false
}
}
Steps to follow:
You promote the replica out of band, either using the Console or the gcloud CLI.
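For example, with the names from the configuration above (your-project-id is a placeholder):

# After this, new-primary is a standalone primary instance.
gcloud sql instances promote-replica new-primary --project=your-project-id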
Next you manually adjust the state:
# remove the old read-replica state; it's now the new primary
terraform state rm google_sql_database_instance.instance2
# import the new-primary as "instance1"
terraform state rm google_sql_database_instance.instance1
terraform import google_sql_database_instance.instance1 your-project-id/new-primary
# import the new-primary db as "db"
terraform state rm google_sql_database.db
terraform import google_sql_database.db your-project-id/new-primary/test-db
# import the new-primary user as "user"
terraform state rm google_sql_user.user
terraform import google_sql_user.user your-project-id/new-primary/test-user
Now you edit your terraform config to update the resources to match the state:
resource "google_sql_database_instance" "instance1" {
name = "new-primary" # this is the former replica's name
region = "europe-west4" # this is the former replica's region
database_version = "POSTGRES_14"
}
resource "google_sql_database" "db" {
name = "test-db"
instance = google_sql_database_instance.instance1.name
}
resource "google_sql_user" "user" {
name = "test-user"
instance = google_sql_database_instance.instance1.name
password = var.db_password
}
# this has now been promoted and is now "instance1", so the following
# block can be deleted.
# resource "google_sql_database_instance" "instance2" {
#   name                 = "new-primary"
#   master_instance_name = google_sql_database_instance.instance1.name
#   region               = "europe-west4"
#   database_version     = "POSTGRES_14"
#
#   replica_configuration {
#     failover_target = false
#   }
# }
Then you run terraform apply and see that only the user is updated in place with the existing password. (This happens because Terraform can't retrieve the password from the API; it was dropped as part of the promotion and so has to be re-applied for Terraform's sake.)
What you do with your old primary is up to you. It's no longer managed by Terraform, so either delete it manually or re-import it.
Caveats
Everyone's Terraform setup is different and so you'll probably have to iterate through the steps above until you reach the desired result.
Remember to use a testing environment first with lots of calls to terraform plan to see what's changing. Whenever a resource is marked for deletion, Terraform will report why.
Nonetheless, you can use the process above to work your way to a Terraform setup that reflects a promoted read replica. And in the meantime, upvote the issue, because if it gets enough attention, the provider maintainers will prioritize it accordingly.
I'm creating three EKS clusters using this module. Everything works fine, except that when I try to add the ConfigMap to the clusters using map_roles, I face an issue.
My configuration, which I have within all three clusters, looks like this:
map_roles = [
  {
    rolearn  = "arn:aws:iam::${var.account_no}:role/argo-${var.environment}-${var.aws_region}"
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  }
]
The problem occurs while applying the template. It says
configmaps "aws-auth" already exists
When I studied the error further, I realised that when applying the template, the module creates three ConfigMap resources with the same name, like these:
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
This obviously is a problem. How do I fix this issue?
The aws-auth ConfigMap is created by EKS when you create a managed node pool. It has the configuration required for nodes to register with the control plane. If you want to control the contents of the ConfigMap with Terraform, you have two options.
Either make sure you create the ConfigMap before the managed node pool resources, or import the existing ConfigMap into the Terraform state manually.
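For the import route, the command looks roughly like this (the resource address kubernetes_config_map.aws_auth is an assumption; it must match wherever the ConfigMap resource is actually declared, possibly inside the module):

# Bring the EKS-generated ConfigMap under Terraform management.
terraform import kubernetes_config_map.aws_auth kube-system/aws-auth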
I've now tested my solution, which expands on pst's "import aws-auth" answer. It looks like this: break up the terraform apply operation in your main EKS project into 3 steps, which completely isolate the EKS resources from the k8s resources, so that you can manage the aws-auth ConfigMap from Terraform workflows.
terraform apply -target=module.eks
This creates just the eks cluster and anything else the module creates.
The eks module design now guarantees this will NOT include anything from the kubernetes provider.
terraform import kubernetes_config_map.aws-auth kube-system/aws-auth
This brings the aws-auth map, generated by the creation of the eks cluster in the previous step, into the remote terraform state.
This is only necessary when the map isn't already in the state, so we first check, with something like:
if terraform state show kubernetes_config_map.aws-auth ; then
  echo "aws-auth ConfigMap already exists in Remote Terraform State."
else
  echo "aws-auth ConfigMap does not exist in Remote Terraform State. Importing..."
  terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
fi
terraform apply
This is a "normal" apply which acts exactly like before, but will have nothing to do for module.eks. Most importantly, this call will not encounter the "aws-auth ConfigMap already exists" error since terraform is aware of its existence, and instead the proposed plan will update aws-auth in place.
NB:
Using a Makefile (or a small wrapper script, as sketched after these notes) to wrap your terraform workflows makes this simple to implement.
Using a monolithic root module with -target is a little ugly, and as your use of the kubernetes provider grows, it makes sense to break out all the kubernetes terraform objects into a separate project. But the above gets the job done.
The separation of eks/k8s resources is best practice anyway, and is advised to prevent known race conditions between aws and k8s providers. Follow the trail from here for details.
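A minimal wrapper sketch of the three steps, written as a shell script rather than a Makefile (the TFVARS_FILE path and the kubernetes_config_map.aws-auth address are assumptions that must match your project):

#!/usr/bin/env bash
set -euo pipefail

TFVARS_FILE="env/dev.tfvars"  # placeholder

# Step 1: create/update only the EKS cluster.
terraform apply -target=module.eks -var-file="${TFVARS_FILE}"

# Step 2: import aws-auth if it is not in the state yet.
if ! terraform state show kubernetes_config_map.aws-auth > /dev/null 2>&1; then
  terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
fi

# Step 3: the normal full apply.
terraform apply -var-file="${TFVARS_FILE}"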
I know it's a bit late, but I'm sharing a solution that I found.
We should use kubernetes_config_map_v1_data instead of kubernetes_config_map_v1. This resource allows Terraform to manage data within a pre-existing ConfigMap.
Example,
resource "kubernetes_config_map_v1_data" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
"mapRoles" = data.template_file.aws_auth_template.rendered
}
force = true
}
data "template_file" "aws_auth_template" {
template = "${file("${path.module}/aws-auth-template.yml")}"
vars = {
cluster_admin_arn = "${local.accounts["${var.env}"].cluster_admin_arn}"
}
}
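As an aside, the template_file data source comes from the archived hashicorp/template provider; on current Terraform versions the built-in templatefile() function gives the same result without an extra provider. A sketch, assuming the same template file and variable name as above:

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Render the template inline instead of via the template provider.
    "mapRoles" = templatefile("${path.module}/aws-auth-template.yml", {
      cluster_admin_arn = local.accounts[var.env].cluster_admin_arn
    })
  }

  force = true
}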