Context: I had issues with the logs on EC2 and am looking into expanding the volume for now while others work out how to address the root cause.
I'm able to add the storage, but I'm not sure how to configure it properly so the app can use the new volume for logging. The main goal is to expand the storage available for application logging.
I'm using Terraform to manage my AWS resources, with modules that set up aws_elastic_beanstalk_environment (including solution_stack_name). To expand the storage, I added the following:
main.tf
setting {
  resource  = ""
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "BlockDeviceMappings"
  value     = var.volumeSize
}
vars.tf
variable "volumeSize" {
  default = "/dev/sdj=:32:true:gp2"
}
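For reference, here is a minimal sketch of how these pieces could fit together inside the environment resource. The resource label, name, application, and solution stack reference below are illustrative; only the setting block and the variable come from the snippets above:
resource "aws_elastic_beanstalk_environment" "app" {
  name                = "my-app-env"              # illustrative
  application         = "my-app"                  # illustrative
  solution_stack_name = var.solution_stack_name   # illustrative, as set by your module

  # Maps an additional 32 GB gp2 volume at /dev/sdj onto every instance
  # launched by the environment's Auto Scaling group.
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "BlockDeviceMappings"
    value     = var.volumeSize
    resource  = ""
  }
}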
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-autoscalinglaunchconfiguration
https://www.geeksforgeeks.org/how-to-attach-ebs-volume-in-ec2-instance/
Overview
Currently, dashboards are being deployed via Terraform using values from a dictionary in locals.tf:
resource "aws_cloudwatch_dashboard" "my_alb" {
for_each = local.env_mapping[var.env]
dashboard_name = "${each.key}_alb_web_operational"
dashboard_body = templatefile("templates/alb_ops.tpl", {
environment = each.value.env,
account = each.value.account,
region = each.value.region,
alb = each.value.alb
tg = each.value.alb_tg
}
This is fragile because the values for AWS infrastructure resources like the ALB and ALB target group are hard-coded, and when updates are applied, AWS resources are sometimes destroyed and recreated.
Question
What's the best approach to get these values dynamically? For example, this could be achieved with a Python/Boto3 Lambda that looks up the values and passes them to Terraform as environment variables. Are there any other recommended ways to achieve the same thing?
It depends on how much of the environment is dynamic, but it sounds like Terraform data sources are what you are looking for.
Usually, load balancer names are either fixed or generated by some rule, so they should be known before the dashboard is created.
Let's suppose the names are fixed:
variable "loadbalancers" {
type = object
default = {
alb01 = "alb01",
alb02 = "alb02"
}
}
In this case, the load balancers can be looked up with:
data "aws_lb" "albs" {
  for_each = var.loadbalancers
  name     = each.value # or each.key
}
After that, you will be able to reference the dynamically resolved attributes:
data.aws_lb.albs["alb01"].id
data.aws_lb.albs["alb01"].arn
etc.
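Tying this back to the dashboard resource from the question, a sketch of what it could look like, assuming the alb and alb_tg entries in local.env_mapping hold the real load balancer and target group names. The data source labels and the choice of arn_suffix are assumptions; adjust them to whatever your alb_ops.tpl template actually expects:
data "aws_lb" "per_env" {
  for_each = local.env_mapping[var.env]
  name     = each.value.alb
}

data "aws_lb_target_group" "per_env" {
  for_each = local.env_mapping[var.env]
  name     = each.value.alb_tg
}

resource "aws_cloudwatch_dashboard" "my_alb" {
  for_each = local.env_mapping[var.env]

  dashboard_name = "${each.key}_alb_web_operational"
  dashboard_body = templatefile("templates/alb_ops.tpl", {
    environment = each.value.env,
    account     = each.value.account,
    region      = each.value.region,
    # ALB/target group CloudWatch metrics are usually keyed by the ARN
    # suffix ("app/<name>/<id>"), not by the bare name.
    alb = data.aws_lb.per_env[each.key].arn_suffix,
    tg  = data.aws_lb_target_group.per_env[each.key].arn_suffix
  })
}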
If the load balancer names are generated by some rule, you should use the AWS CLI or AWS CDK to get all the names, or generate the names with the same rule used inside the AWS environment and pass them in as a Terraform variable.
Note: terraform plan (and apply, destroy) will raise an error if you pass a name that doesn't exist, so you should check that a load balancer with the provided name exists.
I have a Terraform script which provisions a Kubernetes deployment and a few ClusterRoles and ClusterRoleBindings via Helm.
Now I need to edit one of the provisioned ClusterRoles via Terraform and add another block of permissions. Is there a way to do this, or would I need to recreate a similar resource from scratch?
This is my block that creates the deployment for the efs-csi-driver:
resource "helm_release" "aws-efs-csi-driver" {
name = "aws-efs-csi-driver"
chart = "aws-efs-csi-driver"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
version = "2.x.x"
namespace = "kube-system"
timeout = 3600
values = [
file("${path.module}/config/values.yaml"),
]
}
Somehow I need to modify https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/45c5e752d2256558170100138de835b82d54b8af/deploy/kubernetes/base/controller-serviceaccount.yaml#L11 by adding a couple more permission blocks. Is there a way I can patch it (or completely overlay it)?
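One possible approach, sketched below under assumptions: instead of patching the chart-managed ClusterRole, create a separate ClusterRole with the extra rules via the Terraform kubernetes provider and bind it to the controller's service account. The rule contents here are placeholders, and the service account name/namespace must match what the chart actually creates (check the linked controller-serviceaccount.yaml):
# Hypothetical additional permissions for the EFS CSI controller.
resource "kubernetes_cluster_role" "efs_csi_extra" {
  metadata {
    name = "efs-csi-extra-permissions"
  }

  # Placeholder rule; replace with the permissions you actually need.
  rule {
    api_groups = [""]
    resources  = ["persistentvolumeclaims"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "efs_csi_extra" {
  metadata {
    name = "efs-csi-extra-permissions"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.efs_csi_extra.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = "efs-csi-controller-sa" # assumed; verify against the chart's service account
    namespace = "kube-system"
  }
}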
I have 2 directories:
aws/
k8s/
In the aws/ dir, I've provisioned an EKS cluster and EKS node group (among other things) using the Terraform AWS provider. That's been applied and everything looks good there.
When I then try to create a Kubernetes provider plan in k8s/ and create a Persistent Volume resource, it requires the EBS volume ID.
Terraform Kubernetes Persistent Volume Resource
How do I get the EBS volume ID from the other .tfstate file from a Kubernetes provider plan?
So as I understand it, you want to reference a resource from another state file. To do that, you can use the following example:
data "terraform_remote_state" "aws_state" {
backend = "remote"
config = {
organization = "hashicorp"
workspaces = {
name = "state-name"
}
}
}
And once the data source is available, you can reference the EBS volume in the following way:
data.terraform_remote_state.aws_state.outputs.ebs_volume_id
Remember to define an output called ebs_volume_id in the aws/ configuration.
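A sketch of how the two sides could fit together; the aws_ebs_volume.logs reference, the PV name, and the capacity/access mode values are illustrative, not taken from your configuration:
# aws/outputs.tf
output "ebs_volume_id" {
  value = aws_ebs_volume.logs.id # whichever aws_ebs_volume resource you created
}

# k8s/main.tf
resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-pv"
  }

  spec {
    capacity = {
      storage = "40Gi"
    }
    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      aws_elastic_block_store {
        volume_id = data.terraform_remote_state.aws_state.outputs.ebs_volume_id
      }
    }
  }
}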
AWS has recently launched support for storage autoscaling of RDS instances. We have multiple RDS instances with over-provisioned storage in our production environment, and we want to use this new feature to reduce costs. Since we cannot reduce the storage of a live RDS instance, we will first have to create an RDS instance with less storage and autoscaling enabled, then migrate the existing data to the new instance, and then delete the old instance.
We use Terraform with the terraform-aws-provider to create our infrastructure. The problem is that I am not able to implement the above strategy using Terraform.
Here is what I have tried:
I modified the existing RDS creation script to create two more resources: one of type aws_db_snapshot and the other an aws_db_instance (created from the snapshot). However, I get the following error:
Error modifying DB Instance (test-rds-snapshot): InvalidParameterCombination: Invalid storage size for engine name postgres and storage type gp2: 20
# Existing RDS instance with over provisioned storage
resource "aws_db_instance" "test_rds" {
  .
  .
  .
}

# My changes below

# The snapshot
resource "aws_db_snapshot" "test_snapshot" {
  db_instance_identifier = "${aws_db_instance.test_rds.id}"
  db_snapshot_identifier = "poc-snapshot"
}

# New instance with autoscale support and reduced storage
resource "aws_db_instance" "test_rds_snapshot" {
  identifier            = "test-rds-snapshot"
  allocated_storage     = 20
  max_allocated_storage = 50
  snapshot_identifier   = "${aws_db_snapshot.test_snapshot.id}"
  .
  .
  .
}
I want to know whether I am on the right track, and whether I will be able to migrate production databases using this strategy. Let me know if you need more information.
After running out of space I had to resize my EBS volume. Now I want to make the size part of my Terraform configuration, so I added the following block to the aws_instance resource:
ebs_block_device {
  device_name = "/dev/sda1"
  volume_size = 32
  volume_type = "gp2"
}
Now, after running terraform plan, it wants to destroy the existing volume, which is terrible. I also tried to import the existing volume using terraform import, but it wanted me to use a different name for the resource, which is also not great.
So what is the correct procedure here?
The aws_instance resource docs mention that changes to any EBS block devices will cause the instance to be recreated.
To get around this you can use something other than Terraform to grow the EBS volumes using AWS' new elastic volumes feature. Terraform also cannot detect changes to any of the attached block devices created in the aws_instance resource:
NOTE: Currently, changes to *_block_device configuration of existing resources cannot be automatically detected by Terraform. After making updates to block device configuration, resource recreation can be manually triggered by using the taint command.
As such, you shouldn't need to go back and change anything in your Terraform configuration unless you want to rebuild the instance using Terraform at some point, at which point the worry about losing the instance is obviously moot.
However, if for some reason you want to make the change in your Terraform configuration and keep the instance from being destroyed, then you would need to manipulate your state file.