I am currently reading the book "Terraform: Up & Running", 2nd edition.
In the section that introduces storing the state file remotely, I ran into a problem.
This is the code I ran:
terraform {
  backend "s3" {
    bucket         = "my-state"
    key            = "workspaces-example/terraform.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "my-lock"
    encrypt        = true
  }
}

provider "aws" {
  region  = "ap-southeast-1"
  profile = "my-test"
}

resource "aws_instance" "example" {
  ami           = "ami-02045ebddb047018b"
  instance_type = "t2.micro"
}
Prior to running this code, I created the S3 bucket and the DynamoDB table by hand in the AWS console.
After that,
terraform init
terraform plan
both work fine.
But I can't find any state file in the AWS console.
According to the book, the state file should be in the "my-state" S3 bucket.
In addition, I can't find any state file locally either.
The state file is written to the backend after you run terraform apply.
After that, when you run terraform init again, it will look for the remote state file.
In your case, your plan does show the list of changes, but it does not write any state. You have to run apply now.
Manual approval:
terraform apply
Auto-approve:
terraform apply -auto-approve
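If you want to double-check that the remote state actually landed in the bucket after the apply, one quick way (assuming the AWS CLI is configured with the same profile and region) is:
aws s3 ls s3://my-state/workspaces-example/
You should see terraform.tfstate listed once the apply has finished.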
Remember to destroy your resources after playing around, to save costs: run terraform destroy.
Best wishes.
Related
I have a microservice deployed in a Docker container to manage and execute Terraform commands to create infrastructure on AWS. The supported Terraform template is as follows:
provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_default_vpc" "default" {
tags = {
Name = "Default VPC"
}
}
resource "aws_security_group" "se_security_group" {
name = "test-sg"
description = "secure soft edge ports"
vpc_id = aws_default_vpc.default.id
tags = {
Name = "test-sg"
}
}
resource "aws_instance" "web" {
ami = "ami-*********"
instance_type = "t3.micro"
tags = {
Name = "test"
}
depends_on = [
aws_security_group.se_security_group,
]
}
With this system in place, if the Docker container crashes while the Terraform process is executing (creating an EC2 instance), the state file will not have an entry for the EC2 resource being created. On container restart, if the Terraform process is rerun against the same state file, it will create a whole new EC2 instance, resulting in a resource leak.
How is the crash scenario in terraform commonly handled?
Is there a way to rollback the previous transaction without the state file having the EC2 entry?
Please help me with this issue. Thanks
How is the crash scenario in terraform commonly handled?
It depends on when the crash happened. Some plausible scenarios are:
Most likely, your state file will remain locked, as long as your backend supports locking. In this case nothing will be created after the restart, because Terraform won't be able to acquire a lock on the state file, so it will throw an error. We will have to force-unlock the state.
We managed to unlock the state file, or the state file was not locked at all. In this case we can have the following scenarios:
The state file has an entry with an identifier for the resource, even though there was a crash while the resource was provisioning. In this case Terraform will refresh the state, and the plan will show whether there are any changes to be made. Nevertheless, we should read the plan and decide if we want to apply it or do some manual adjustments first.
Terraform won't be able to identify a resource which already exists, so it will try to provision it again. Again, we should review the plan and decide ourselves what to do. We can either import the already existing resource or terminate it and let Terraform attempt to create it again (see the command sketch below).
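A minimal sketch of those two manual interventions (the lock ID and instance ID below are placeholders; the real lock ID is printed in Terraform's locking error):
# remove the stale lock left behind by the crashed run
terraform force-unlock LOCK_ID
# adopt an instance that was created before the crash but never recorded in state
terraform import aws_instance.web i-0123456789abcdef0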
Is there a way to rollback the previous transaction without the state file having the EC2 entry?
No, there is no way to roll back to the previous transaction. Terraform will attempt to provision whatever is in the .tf files. What we could do is check out a previous version of our code from source control and apply that.
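For example (the commit reference is a placeholder for your last known-good revision):
git checkout KNOWN_GOOD_COMMIT
terraform plan
terraform apply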
The idea is that I want to use the Terraform resource aws_secretsmanager_secret to create only three secrets (not a workspace-specific secret): one for the dev environment, one for preprod, and the third one for the production environment.
Something like:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "example-secret-dev"
}
resource "aws_secretsmanager_secret" "preprod_secret" {
name = "example-secret-preprod"
}
resource "aws_secretsmanager_secret" "prod_secret" {
name = "example-secret-prod"
}
But after creating them, I don't want to overwrite them every time I run terraform apply. Is there a way to tell Terraform that if any of the secrets already exist, it should skip creating that secret and not overwrite it?
I had a look at this page, but it still doesn't give a clear solution; any suggestion will be appreciated.
It will not overwrite the secret if you create it manually in the console or using the AWS SDK. The aws_secretsmanager_secret resource creates only the secret, not its value. To set a value you have to use aws_secretsmanager_secret_version.
Anyway, this is something you can easily test yourself. Just run your code with a secret, update its value in the AWS console, and re-run terraform apply. You should see no change to the secret's value.
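If you prefer to test from the command line rather than the console, something along these lines should work (the secret name and value here are just examples):
aws secretsmanager put-secret-value \
  --secret-id example-secret-dev \
  --secret-string 'value-set-outside-terraform'
Re-running terraform apply afterwards should still report no changes, because Terraform is not managing the secret's value.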
You could have Terraform generate random secret values for you using:
data "aws_secretsmanager_random_password" "dev_password" {
password_length = 16
}
Then create the secret metadata using:
resource "aws_secretsmanager_secret" "dev_secret" {
name = "dev-secret"
recovery_window_in_days = 7
}
And then by creating the secret version:
resource "aws_secretsmanager_secret_version" "dev_sv" {
secret_id = aws_secretsmanager_secret.dev_secret.id
secret_string = data.aws_secretsmanager_random_password.dev_password.random_password
lifecycle {
ignore_changes = [secret_string, ]
}
}
Adding the 'ignore_changes' lifecycle block to the secret version will prevent Terraform from overwriting the secret once it has been created. I tested this just now to confirm that a new secret with a new random value will be created, and subsequent executions of terraform apply do not overwrite the secret.
I provision AWS resources using Terraform from a Python script that calls Terraform via the shell:
os.system('terraform apply')
The only way I found to make Terraform authenticate, after enabling MFA and assuming a role, is to export these environment variables:
os.system(
    'export ASSUMED_ROLE="<>:botocore-session-123"; '
    'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
    'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
    'export AWS_SESSION_TOKEN="fsrfserfgs"; '
    'export AWS_SECURITY_TOKEN="fsrfserfgs"; '
    'terraform apply'
)
This worked OK until I configured S3 as the backend. The Terraform action is performed, but before the state can be stored in the bucket I get the standard (very confusing) exception:
Error: error configuring S3 Backend: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
I read this excellent answer explaining that for security and other reasons backend configuration is separate.
Since I don't want to add actual secret keys to source code (as suggested by the post), I first tried adding a reference to the profile, and when that failed I added the actual keys just to see if it would work, which it didn't.
My working theory is that behind the scenes Terraform starts another process which doesn't access or inherit the credential environment variables.
How do I use the s3 backend with an MFA-assumed role?
One must point the backend to the desired profile; in my case, the same profile used for the provisioning itself.
Here is a minimal POC:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  backend "s3" {
    bucket  = "unique-terraform-state-dev"
    key     = "test"
    region  = "us-east-2"
    profile = "the_role_assumed_in_aws_credentials"
  }
}

provider "aws" {
  version = "~> 3.0"
  region  = var.region
}

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
As a reminder, this is run from a shell that has these environment variables set:
os.system(
    'export ASSUMED_ROLE="<>:botocore-session-123"; '
    'export AWS_ACCESS_KEY_ID="vfdgdsfg"; '
    'export AWS_SECRET_ACCESS_KEY="fgbdzf"; '
    'export AWS_SESSION_TOKEN="fsrfserfgs"; '
    'export AWS_SECURITY_TOKEN="fsrfserfgs"; '
    'terraform apply'
)
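For reference, the profile named in the backend block would typically look something like this in ~/.aws/config (the account ID, role name and MFA serial below are placeholders):
[profile the_role_assumed_in_aws_credentials]
role_arn       = arn:aws:iam::123456789012:role/terraform
source_profile = default
mfa_serial     = arn:aws:iam::123456789012:mfa/my-user
region         = us-east-2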
I would like to deploy a local zip file to Elastic Beanstalk using Terraform. I would also like to keep old versions of the application in S3, with some retention policy, such as keeping them for 90 days. If I rebuild the bundle, I would like Terraform to detect this and deploy the new version. If the hash of the bundle hasn't changed, then Terraform should not change anything.
Here is (some of) my config:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
}
locals {
user_interface_bundle_path = "${path.module}/../../build.zip"
}
resource "aws_s3_bucket_object" "user_interface_latest" {
bucket = aws_s3_bucket.application.id
key = "user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
source = local.user_interface_bundle_path
}
resource "aws_elastic_beanstalk_application" "user_interface" {
name = "${var.environment}-user-interface-app"
}
resource "aws_elastic_beanstalk_application_version" "user_interface_latest" {
name = "user-interface-${filesha256(local.user_interface_bundle_path)}"
application = aws_elastic_beanstalk_application.user_interface.name
bucket = aws_s3_bucket_object.user_interface_latest.bucket
key = aws_s3_bucket_object.user_interface_latest.key
}
resource "aws_elastic_beanstalk_environment" "user_interface" {
name = "${var.environment}-user-interface-env"
application = aws_elastic_beanstalk_application.user_interface.name
solution_stack_name = "64bit Amazon Linux 2018.03 v4.15.0 running Node.js"
version_label = aws_elastic_beanstalk_application_version.user_interface_latest.name
}
The problem with this is that each time the hash of the bundle changes, it deletes the old object in S3.
How can I get Terraform to create a new aws_s3_bucket_object and not delete the old one?
This is related, but I don't want to maintain build numbers: Elastic Beanstalk Application Version in Terraform
Expanding on @Marcin's comment...
You should enable bucket versioning and add a lifecycle rule to expire noncurrent versions after 90 days.
Here is an example:
resource "aws_s3_bucket" "application" {
bucket = "test-elastic-beanstalk-bucket"
versioning {
enabled = true
}
lifecycle_rule {
id = "retention"
noncurrent_version_expiration {
days = 90
}
}
}
You can see more examples in the documentation:
https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-object-lifecycle
Then I would simplify your aws_s3_bucket_object: since we have versioning, we don't really need the filesha256 in the key; just use the original name build.zip and you are good to go, as in the sketch below.
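A minimal sketch of the simplified object, reusing the local from the question (the etag argument is what makes Terraform re-upload, and S3 version, the object whenever the bundle content changes):
resource "aws_s3_bucket_object" "user_interface_latest" {
  bucket = aws_s3_bucket.application.id
  key    = "build.zip"
  source = local.user_interface_bundle_path

  # re-upload (creating a new S3 object version) whenever the bundle changes
  etag = filemd5(local.user_interface_bundle_path)
}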
If you don't want to enable bucket versioning, another way would be to use the AWS CLI to upload the file before you call Terraform, or do it in a local-exec provisioner from a null_resource. Here are a couple of examples:
https://www.terraform.io/docs/provisioners/local-exec.html#interpreter-examples
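For example, the null_resource approach could look something like this (assuming the aws CLI is installed and has credentials; the bucket name and key follow the naming used in the question):
resource "null_resource" "upload_bundle" {
  # re-run the upload whenever the bundle content changes
  triggers = {
    bundle_hash = filesha256(local.user_interface_bundle_path)
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${local.user_interface_bundle_path} s3://test-elastic-beanstalk-bucket/user-interface-${filesha256(local.user_interface_bundle_path)}.zip"
  }
}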
I'm setting up some Terraform to manage a Lambda function and an S3 bucket, with versioning enabled on the bucket's contents. Creating the first version of the infrastructure is fine. When releasing a second version, Terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the s3 bucket in terraform configuration and moving the api-version to a variable string.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "main.js"
output_path = "main.zip"
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket = "s3-bucket-for-tft-project"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
key = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
source = "${data.archive_file.lambda_zip.output_path}"
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_zip_file.key}"
function_name = "lambda_test_with_s3_version"
role = "${aws_iam_role.lambda_exec.arn}"
handler = "main.handler"
runtime = "nodejs8.10"
}
I would expect the output to be another zip file but with the lambda now pointing at the new version, with the ability to change back to the old version if var.api-version was changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
name = "lambda_artifact"
}
locals {
# Let's assume that this SSM parameter contains a JSON
# string describing which artifact to use, like this
# {
# "bucket": "s3-bucket-for-tft-project",
# "key": "v2.0.0/example.zip"
# }
lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact)
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = local.lambda_artifact.bucket
s3_key = local.lambda_artifact.key
function_name = "lambda_test_with_s3_version"
role = aws_iam_role.lambda_exec.arn
handler = "main.handler"
runtime = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter (a rough CLI sketch follows this list), and then trigger a Terraform run to deploy it.
To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
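A rough sketch of what that build-side step could look like with the AWS CLI (the local artifact path and the v2.0.0 key are illustrative; the bucket and parameter names match the example above):
aws s3 cp ./build/example.zip s3://s3-bucket-for-tft-project/v2.0.0/example.zip
aws ssm put-parameter \
  --name lambda_artifact \
  --type String \
  --overwrite \
  --value '{"bucket":"s3-bucket-for-tft-project","key":"v2.0.0/example.zip"}'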
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.