Terraform | Secrets Manager | Reuse of existing secrets without deleting - amazon-web-services

I am creating secrets in AWS using Terraform. My Jenkins pipeline creates the infrastructure every 2 hours and then destroys it. When the infrastructure is re-created after 2 hours, AWS Secrets Manager does not allow me to create the secret again and throws the error below. Please suggest.
Error: error creating Secrets Manager Secret: InvalidRequestException: You can't create this secret because a secret with this name is already scheduled for deletion.
status code: 400, request id: e4f8cc85-29a4-46ff-911d-c5115716adc5
TF code:
resource "aws_secretsmanager_secret" "secret" {
  description = "${var.environment}"
  kms_key_id  = "${data.aws_kms_key.sm.arn}"
  name        = "${var.environment}-airflow-secret"
}

resource "random_string" "rds_password" {
  length  = 16
  special = true
}

resource "aws_secretsmanager_secret_version" "secret" {
  secret_id     = "${aws_secretsmanager_secret.secret.id}"
  secret_string = <<EOF
{
  "rds_password": "${random_string.rds_password.result}"
}
EOF
}
TF plan output:
# module.aws_af.aws_secretsmanager_secret.secret will be created
+ resource "aws_secretsmanager_secret" "secret" {
    + arn                     = (known after apply)
    + description             = "dev-airflow-secret"
    + id                      = (known after apply)
    + kms_key_id              = "arn:aws:kms:eu-central-1"
    + name                    = "dev-airflow-secret"
    + name_prefix             = (known after apply)
    + recovery_window_in_days = 30
    + rotation_enabled        = (known after apply)
}

# module.aws_af.aws_secretsmanager_secret_version.secret will be created
+ resource "aws_secretsmanager_secret_version" "secret" {
    + arn            = (known after apply)
    + id             = (known after apply)
    + secret_id      = (known after apply)
    + secret_string  = (sensitive value)
    + version_id     = (known after apply)
    + version_stages = (known after apply)
}

You need to set the recovery window to 0 for immediate deletion of secrets.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret#recovery_window_in_days
recovery_window_in_days - (Optional) Specifies the number of days that AWS Secrets Manager waits before it can delete the secret. This value can be 0 to force deletion without recovery or range from 7 to 30 days. The default value is 30.
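A minimal sketch of the change, keeping the rest of the resource as in the question:

resource "aws_secretsmanager_secret" "secret" {
  description = "${var.environment}"
  kms_key_id  = "${data.aws_kms_key.sm.arn}"
  name        = "${var.environment}-airflow-secret"

  # 0 forces immediate deletion on destroy (no recovery window),
  # so the next pipeline run can re-create the secret under the same name.
  recovery_window_in_days = 0
}

Note that with a 0-day window the secret cannot be recovered after deletion.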

Related

How to create a private S3 bucket with Terraform?

I want to create a private S3 bucket. If I create a private bucket with the AWS console, it works. If I create a private bucket with Terraform, the bucket is created, but it is always public.
Code
resource "aws_s3_bucket" "audio" {
  bucket = "my-dev-audio"
}

resource "aws_s3_bucket_acl" "audio" {
  bucket = aws_s3_bucket.audio.id
  acl    = "private"
}
Logs
[...]
# aws_s3_bucket_acl.audio will be created
+ resource "aws_s3_bucket_acl" "audio" {
    + acl    = "private"
    + bucket = (known after apply)
    + id     = (known after apply)

    + access_control_policy {
        + grant {
            + permission = (known after apply)

            + grantee {
                + display_name  = (known after apply)
                + email_address = (known after apply)
                + id            = (known after apply)
                + type          = (known after apply)
                + uri           = (known after apply)
            }
        }

        + owner {
            + display_name = (known after apply)
            + id           = (known after apply)
        }
    }
}
Screenshots
In the list of buckets my bucket is shown as public.
In the permission tab of my bucket, the bucket is shown as public.
Environment
Terraform 1.2.1
AWS Provider 4.45.0
Research
I read Resource: aws_s3_bucket_acl, but I can't see any difference between the example and my code.
Question
How can I create a private S3 bucket with Terraform?
S3 buckets are private by default. A bucket ACL is a different mechanism from the bucket's public-access settings, so an acl = "private" ACL does not by itself control whether the console flags the bucket as public. See: S3 bucket ACL.
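If the goal is to guarantee the bucket stays private, a minimal sketch (not part of the original answer) adds a public access block alongside the bucket:

resource "aws_s3_bucket_public_access_block" "audio" {
  bucket = aws_s3_bucket.audio.id

  # Block every form of public access, regardless of ACLs or policies.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

With all four flags set, AWS rejects public ACLs and policies outright, so the bucket cannot be made public by later changes either.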

For Terraform's aws_eks_cluster, how can I use an existing VPC?

Terraform v1.2.8
I have a YAML configuration file that I've used to create an AWS EKS cluster via eksctl, using an existing VPC, like this:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: sandbox
  region: us-east-1
  version: "1.23"
# The VPC and subnets are for the data plane, where the pods will
# ultimately be deployed.
vpc:
  id: "vpc-123456"
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  subnets:
    private:
      ...
Then I do this to create the cluster
$ eksctl create cluster -f eks-sandbox.yaml
Now I want to use Terraform instead, so I looked at the aws_eks_cluster resource, and am doing this
resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = var.iam_role_arn
vpc_config {
endpoint_private_access = true
endpoint_public_access = false
security_group_ids = var.sg_ids
subnet_ids = var.subnet_ids
}
}
...but the resource doesn't allow me to specify an existing VPC? Hence when I do a
$ terraform plan -out out.o
I see
# module.k8s_cluster.aws_eks_cluster.eks_cluster will be created
+ resource "aws_eks_cluster" "eks_cluster" {
    + arn                   = (known after apply)
    + certificate_authority = (known after apply)
    + created_at            = (known after apply)
    + endpoint              = (known after apply)
    + id                    = (known after apply)
    + identity              = (known after apply)
    + name                  = "sandbox"
    + platform_version      = (known after apply)
    + role_arn              = "arn:aws:iam::1234567890:role/EKSClusterAdminRole"
    + status                = (known after apply)
    + tags_all              = (known after apply)
    + version               = (known after apply)

    + kubernetes_network_config {
        + ip_family         = (known after apply)
        + service_ipv4_cidr = (known after apply)
        + service_ipv6_cidr = (known after apply)
    }

    + vpc_config {
        + cluster_security_group_id = (known after apply)
        + endpoint_private_access   = true
        + endpoint_public_access    = false
        + public_access_cidrs       = (known after apply)
        + security_group_ids        = (known after apply)
        + subnet_ids                = [
            + "subnet-1234567890",
            + "subnet-2345678901",
            + "subnet-3456789012",
            + "subnet-4567890123",
            + "subnet-5678901234",
          ]
        + vpc_id                    = (known after apply)
    }
}
See the vpc_id output? But I don't want Terraform to create a VPC for me. I want to use an existing VPC, like in my YAML configuration file. Can I use an existing VPC to create an AWS EKS cluster in Terraform? TIA
Unfortunately the aws_eks_cluster resource doesn't have a vpc_id argument; it only exposes vpc_id as an attribute after creation. When you specify subnet_ids, the cluster is created in the VPC that those subnets belong to, so supplying subnets from your existing VPC is how you tie the cluster to it.
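A minimal sketch of deriving subnet_ids from an existing VPC with a data source (aws_subnets needs AWS provider v4+; the vpc-123456 ID and the Tier tag are illustrative assumptions, not from the original answer):

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = ["vpc-123456"] # the existing VPC from the eksctl config
  }

  # Hypothetical tag that marks the private subnets in this VPC.
  tags = {
    Tier = "private"
  }
}

resource "aws_eks_cluster" "eks_cluster" {
  name     = var.cluster_name
  role_arn = var.iam_role_arn

  vpc_config {
    endpoint_private_access = true
    endpoint_public_access  = false
    security_group_ids      = var.sg_ids
    # The cluster lands in the VPC these subnets belong to.
    subnet_ids              = data.aws_subnets.private.ids
  }
}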

Failed to decode planned changes for aws_lb_listener

I'm trying to create a load-balanced EC2 setup, and part of that uses aws_lb_listener. The plan gets created fine and looks something like this:
# aws_lb_listener.app will be created
+ resource "aws_lb_listener" "app" {
    + arn               = (known after apply)
    + id                = (known after apply)
    + load_balancer_arn = "arn:aws:elasticloadbalancing:xxx:xxx:loadbalancer/app/xxx-lb/xxx"
    + port              = 80
    + protocol          = "HTTP"
    + ssl_policy        = (known after apply)

    + default_action {
        + order            = (known after apply)
        + target_group_arn = "arn:aws:elasticloadbalancing:xxx:xxx:targetgroup/tf-xxx-lb/xxx"
        + type             = "forward"
    }
}
But when applying I get the error message:
Error: failed to decode planned changes for aws_lb_listener.app: error decoding 'after' value: an object with 10 attributes is required (9 given)
This is the actual definition of the listener:
resource "aws_lb_listener" "app" {
load_balancer_arn = aws_lb.app.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.blue.arn
}
}
This is the definition of the load balancer:
resource "aws_lb" "app" {
name = "xxx"
internal = false
load_balancer_type = "application"
subnets = module.vpc.public_subnets
security_groups = [module.lb_security_group.this_security_group_id]
}
Please help.
Hey, discovered the problem: it was an issue of using old Terraform (0.14.8) and AWS provider (~> 3.0) versions. Updating Terraform to 0.14.10 and the AWS provider to 3.36.0 solved the problem.
I think the error message could have been more helpful... :D
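To keep the fix from regressing, a minimal sketch of pinning those versions in the configuration (the exact constraints are an assumption; adjust to your needs):

terraform {
  required_version = ">= 0.14.10"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.36.0" # the combination that fixed the decode error here
    }
  }
}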

ssh key pair in terraform

Can you please tell me a way to pass an SSH key in Terraform for an EC2 spin-up?
variable "public_path" {
default = "D:\"
}
resource "aws_key_pair" "app_keypair" {
public_key = file(var.public_path)
key_name = "my_key"
}
resource "aws_instance" "web" {
ami = "ami-12345678"
instance_type = "t1.micro"
key_name = aws_key_pair.app_keypair
security_groups = [ "${aws_security_group.test_sg.id}" ]
}
Error : Invalid value for "path" parameter: failed to read D:".
Bash: tree
.
├── data
│   └── key
└── main.tf

1 directory, 2 files
Above is what my file system looks like; I'm not on Windows. You were passing the directory, thinking that key_name meant Terraform would find your key by name in that directory. But the file() function has no idea what key_name is; that is a value local to the aws_key_pair resource. So make sure you give the file() function the full path to the key file.
Look below for my code. You also passed aws_key_pair.app_keypair to your aws_instance resource, but that's an object that contains several properties. You need to specify which property you want to pass, in this case aws_key_pair.app_keypair.key_name. This causes AWS to stand up an EC2 instance, look for a key pair with the name in your code, and associate the two.
provider "aws" {
  profile = "myprofile"
  region  = "us-west-2"
}

variable "public_path" {
  default = "./data/key"
}

resource "aws_key_pair" "app_keypair" {
  public_key = file(var.public_path)
  key_name   = "somekeyname"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t1.micro"
  key_name      = aws_key_pair.app_keypair.key_name
}
Here is my plan output. You can see the key is getting injected correctly. This is the same key as in the Terraform docs, so it's safe to put here.
Terraform will perform the following actions:

# aws_instance.web will be created
+ resource "aws_instance" "web" {
    <...omitted for Stack Overflow brevity...>
    + key_name = "somekeyname"
    <...omitted for Stack Overflow brevity...>
}

# aws_key_pair.app_keypair will be created
+ resource "aws_key_pair" "app_keypair" {
    + arn         = (known after apply)
    + fingerprint = (known after apply)
    + id          = (known after apply)
    + key_name    = "somekeyname"
    + key_pair_id = (known after apply)
    + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
}

Plan: 2 to add, 0 to change, 0 to destroy.
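As a side note, a minimal sketch of a more portable way to reference the key file, so the path resolves no matter where terraform is invoked from (path.module is built into Terraform; the file layout is the one from the tree output above):

resource "aws_key_pair" "app_keypair" {
  # path.module resolves to the directory containing this configuration,
  # so ./data/key is found regardless of the working directory.
  public_key = file("${path.module}/data/key")
  key_name   = "somekeyname"
}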

How can I find the right ID for an existing AWS resource in order to import it into a Terraform resource?

I got a duplicate existing resource error when deploying to AWS via Terraform.
Error: Error creating IAM Role SecuritySearchAPITaskRole: EntityAlreadyExists: Role with name SecuritySearchAPITaskRole already exists.
status code: 409, request id: cf5ae1f4-de6a-11e9-a7b1-d3cdff4db013
on deploy/modules/ecs-fargate-service/iam.tf line 1, in resource "aws_iam_role" "task":
1: resource "aws_iam_role" "task" {
Based on the above error, there is an existing IAM role with the name SecuritySearchAPITaskRole. I think the solution is to import this resource into my local Terraform state, but how can I find out the resource ID I need to use? I am able to find this role in the AWS IAM console, but it doesn't seem to have an ID. I also tried to run terraform plan, which gives me:
+ resource "aws_iam_role" "task" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ecs-tasks.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ description = "Task role for the SecuritySearchAPI service"
+ force_detach_policies = false
+ id = (known after apply)
+ max_session_duration = 3600
+ name = "SecuritySearchAPITaskRole"
+ path = "/"
+ tags = {
+ "Application" = "Security Search"
+ "Client" = "IRESS"
+ "DataClassification" = "NoData"
+ "Name" = "SecuritySearchAPI Task Role"
+ "Owner" = "platform"
+ "Product" = "SharedServices"
+ "Schedule" = "False"
+ "Service" = "Search"
+ "TaggingStandardVersion" = "3"
}
+ unique_id = (known after apply)
}
And you can see the id = (known after apply) is not yet known. How can I find the ID for the IAM role?
OK, I found this doc: https://www.terraform.io/docs/providers/aws/r/iam_role.html#import. I can use the role name as the ID in the terraform import command.
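A minimal sketch of the import, assuming the module is instantiated as ecs_fargate_service in the root configuration (your actual module address may differ; terraform plan shows the full resource address):

# For aws_iam_role, the import ID is simply the role name.
$ terraform import module.ecs_fargate_service.aws_iam_role.task SecuritySearchAPITaskRole

After the import, run terraform plan again; it should show changes to the existing role (if any) instead of trying to create it.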