I attempted to manage my application versions in my Terraform template by parameterising the name. The idea was to have our CI process create a new application version whenever the contents of the application changed, so that in Elastic Beanstalk I could keep a list of historic application versions and roll back when needed. This didn't work: the same application version was constantly updated, and in effect I lost the history of all application versions.
resource "aws_elastic_beanstalk_application_version" "default" {
name = "${var.eb-app-name}-${var.build-number}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
I then tried to parameterise the logical resource name, but this isn't supported by Terraform:
resource "aws_elastic_beanstalk_application_version" "${var.build-number}" {
name = "${var.eb-app-name}-${var.build-number}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
Currently my solution is to manage my application versions outside of Terraform, which is disappointing because there are other associated resources, such as the S3 bucket and permissions, to worry about.
Am I missing something?
As far as Terraform is concerned you are just updating a single EB application version resource there. If you want to keep the previous versions around, you might need to increment the count of resources that Terraform is managing.
Off the top of my head you could try something like this:
variable "builds" = {
type = list
}
resource "aws_elastic_beanstalk_application_version" "default" {
count = "${length(var.builds)}"
name = "${var.eb-app-name}-${element(builds, count.index)}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
Then if you have a list of builds it should create a new application version for each build.
Of course this could be made dynamic: the variable could instead be a data source that returns a list of all your builds. If a suitable data source doesn't already exist, you could write a small script and use it as an external data source.
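As a sketch of that external data source idea (the script name and its output format are assumptions, not something from the question): an external data source's program must print a JSON object of string values, so a list of builds can be passed as one delimited string and split in the configuration, replacing var.builds above.
# Hypothetical helper: list-builds.sh must print a JSON object of strings,
# e.g. {"builds": "101,102,103"}
data "external" "builds" {
  program = ["bash", "${path.module}/list-builds.sh"]
}

# The resource above would then use, in place of var.builds:
#   "${split(",", data.external.builds.result["builds"])}"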
I use Terraform to manage resources of Google Cloud Functions. While the initial deployment of the cloud function worked, further deployments with changed cloud function source code (the source archive sourcecode.zip) were not redeployed when I ran terraform apply after updating the source archive.
The storage bucket object gets updated but this does not trigger an update/redeployment of the cloud function resource.
Is this an error of the provider?
Is there a way to redeploy a function in terraform when the code changes?
The simplified source code I am using:
resource "google_storage_bucket" "cloud_function_source_bucket" {
name = "${local.project}-function-bucket"
location = local.region
uniform_bucket_level_access = true
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = "./../../../sourcecode.zip"
}
resource "google_cloudfunctions_function" "test_function" {
name = "test_func"
runtime = "python39"
region = local.region
project = local.project
available_memory_mb = 256
source_archive_bucket = google_storage_bucket.cloud_function_source_bucket.name
source_archive_object = google_storage_bucket_object.function_source_archive.name
trigger_http = true
entry_point = "trigger_endpoint"
service_account_email = google_service_account.function_service_account.email
vpc_connector = "projects/${local.project}/locations/${local.region}/connectors/serverless-main"
vpc_connector_egress_settings = "ALL_TRAFFIC"
ingress_settings = "ALLOW_ALL"
}
You can append the MD5 or SHA256 checksum of the zip's contents to the bucket object's name; that will trigger recreation of the cloud function whenever the source code changes. The archive_file data source exposes this as, for example:
${data.archive_file.function_src.output_md5}
data "archive_file" "function_src" {
type = "zip"
source_dir = "SOURCECODE_PATH/sourcecode"
output_path = "./SAVING/PATH/sourcecode.zip"
}
resource "google_storage_bucket_object" "function_source_archive" {
name = "sourcecode.${data.archive_file.function_src.output_md5}.zip"
bucket = google_storage_bucket.cloud_function_source_bucket.name
source = data.archive_file.function_src.output_path
}
You can read more about the Terraform archive provider here: terraform archive_file
You might consider that a defect. Personally, I am not so sure about it.
Terraform applies a specific logic when an "apply" command is executed.
The question to think about: how does Terraform know that the source code of the cloud function has changed and the function has to be redeployed? Terraform does not "read" the cloud function's source code and does not compare it with the previous version. It only reads the Terraform script files. And if nothing has changed in those files (in comparison to the state file and the resources that exist in the GCP project), there is nothing to redeploy.
Therefore, something has to change, for example the name of the archive file. In that case Terraform finds out that the cloud function has to be redeployed (because the state file has the old name of the archive object), and the cloud function is redeployed.
An example of that code, with a more detailed explanation, was provided some time ago; don't worry about whether the question itself works, just read the answer.
I have been changing an AWS canary's code.
After running terraform apply, I see the updates in the new zip file, but in the AWS console the code is still the old one.
What have I done wrong?
My terraform code:
resource "aws_synthetics_canary" "canary" {
depends_on = [time_sleep.wait_5_minutes]
name = var.name
artifact_s3_location = "s3://${local.artifacts_bucket_and_path}"
execution_role_arn = aws_iam_role.canary_role.arn
handler = "apiCanary.handler"
start_canary = true
zip_file = data.archive_file.source_zip.output_path
runtime_version = "syn-nodejs-puppeteer-3.3"
tags = {
Description = var.description
Entity = var.entity
Service = var.service
}
run_config {
timeout_in_seconds = 300
}
schedule {
expression = "rate(${var.rate_in_minutes} ${var.rate_in_minutes == 1 ? "minute" : "minutes"})"
}
}
I read this but it didn't help me.
I agree with @mjd2, but in the meantime I worked around it by manually hashing the lambda source and embedding that hash into the source file name:
locals {
  source_code      = <whatever your source is>
  source_code_hash = sha256(local.source_code)
}

data "archive_file" "canary_lambda" {
  type        = "zip"
  output_path = "/tmp/canary_lambda_${local.source_code_hash}.zip"

  source {
    content  = local.source_code
    filename = "nodejs/node_modules/heartbeat.js"
  }
}
This way, anytime the source_code is edited a new output filename will be used, triggering a replacement of the archive resource.
This could be a permission issue with your deployment role. Your role must have permission to modify the lambda behind the canary in order to apply the new layer that your zip file change creates.
Unfortunately any errors that occur when applying changes to the lambda are not communicated via terraform, or anywhere in the AWS console, but if it fails then your canary will continue to point to an old version of the lambda, without your code changes.
You should be able to see which version of the lambda your canary is using by checking the "Script location" field on the Configuration tab for your canary. Additionally, if you click in to the script location you will be able to see if you have newer, unpublished versions of the lambda layer available with your code changes in it.
To verify if the failure is a permission issue you need to query your canary via the AWS CLI.
Run aws synthetics get-canary --name <your canary name> and check the Status.StateReason.
If there was a permission issue when attempting to apply your change you should see something along the lines of:
<user> is not authorized to perform: lambda:UpdateFunctionConfiguration on resource: <lamdba arn>
Based on the above you should be able to add any missing permissions to your deployment role's IAM policy and try your deployment again.
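For illustration, a minimal sketch of the kind of statement the deployment role's policy may need (the policy document name and the ARN pattern here are assumptions; scope the resource to your canary's lambda):
data "aws_iam_policy_document" "canary_deploy" {
  statement {
    # Allows the deployment to update the lambda behind the canary
    actions   = ["lambda:UpdateFunctionConfiguration"]
    resources = ["arn:aws:lambda:*:*:function:cwsyn-*"] # assumed ARN pattern
  }
}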
Hit the same issue. Seems like the canary itself is a beta project that made it to production, and the Terraform resource that manages it also leaves much to be desired. There is no source_code_hash attribute as with lambda, so you need to taint the entire canary resource so it gets recreated with any updated code. AWS Canary as of Nov 2022 is not mature at all. It should support integration with Slack or at least AWS Chatbot out of the box, but it doesn't. Hopefully the AWS team gives it some love, because as it stands it's terrible in comparison to New Relic, Dynatrace, and most other monitoring services that support synthetics.
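Lacking a source_code_hash argument, the taint workaround mentioned above looks like this (resource address taken from the question's code):
terraform taint aws_synthetics_canary.canary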
Whenever I run a terraform plan using the following:
resource "google_sql_user" "users" {
name = "me"
instance = "${google_sql_database_instance.master.name}"
host = "me.com"
password = "changeme"
}
This only happens when I run this against a Postgres instance on Google Cloud SQL.
Terraform always outputs that the plan will create a user, even though it's already created. I'm using Terraform version 0.11.1. Is there something I'm missing? I tried setting the id value, but it still recreates itself.
It turns out that the user was never being added to the Terraform state. The cause seemed to be that, when using Postgres, the host = "me.com" value is not required, but causes problems if it's left there.
Once that was removed, the Terraform state was correct.
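For reference, the working configuration is simply the original with the host argument removed:
resource "google_sql_user" "users" {
  name     = "me"
  instance = "${google_sql_database_instance.master.name}"
  password = "changeme"
}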
The Terraform Data Sources documentation tells me what a data source is, but I do not quite understand it. Can somebody give me a use case for a data source? What is the difference between using one and configuring something with variables?
Data sources can be used for a number of reasons, but their goal is to do something and then give you data.
Let's take the example from their documentation:
# Find the latest available AMI that is tagged with Component = web
data "aws_ami" "web" {
  filter {
    name   = "state"
    values = ["available"]
  }

  filter {
    name   = "tag:Component"
    values = ["web"]
  }

  most_recent = true
}
This uses the aws_ami data source - this is different from a resource! It will instead just give you information, and not create anything. This example in particular will call out to the describe-images AWS API call, pass in a few --filter options as specified, and return an object that you can get information from - take a look at these attributes!
name
owner_id
description
image_id
... The list goes on. This is really useful if I were, let's say, always wanting to pull the latest AMI matching some tags and keep a launch configuration up to date with it. I could use this data source rather than always having to update a variable or hard-code the ID.
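For example, a minimal sketch of wiring that data source into a launch configuration (the resource name and instance type here are assumptions):
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "${data.aws_ami.web.id}" # always the latest matching AMI
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}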
Data sources can be used for other reasons as well; one of my favorites is the template provider.
Good luck!
Data sources provide information about entities that are not managed by the current Terraform configuration.
This may include:
Configuration data from Consul
Information about the state of manually-configured infrastructure components
In other words, data sources are read-only views into the state of pre-existing components external to our configuration.
Once you have defined a data source, you can use the data elsewhere in your Terraform configuration.
For example, let's suppose we want to create a Terraform configuration for a new AWS EC2 instance. We want to use an AMI image which was created and uploaded by a Jenkins job using the AWS CLI, and is not managed by Terraform. As part of the configuration for our Jenkins job, this AMI image will always have a name with the prefix app-.
In this case, we can use the aws_ami data source to obtain information about the most recent AMI image that has the name prefix app-.
data "aws_ami" "app_ami" {
most_recent = true
filter {
name = "name"
values = ["app-*"]
}
}
Data sources export attributes, just like resources do. We can interpolate these attributes using the syntax data.TYPE.NAME.ATTR. In our example, we can interpolate the value of the AMI ID as data.aws_ami.app_ami.id, and pass it as the ami argument for our aws_instance resource.
resource "aws_instance" "app" {
ami = "${data.aws_ami.app_ami.id}"
instance_type = "t2.micro"
}
Data sources are most powerful when retrieving information about dynamic entities - those whose properties change value often. For example, the next time Terraform fetches data for our aws_ami data source, the value of the exported attributes may be different (we might have built and pushed a new AMI).
Variables are used for static values, those that rarely change, such as your access and secret keys, or a standard list of sudoers for your servers.
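For contrast, a variable is just a static input you supply yourself, for example:
variable "sudoers" {
  description = "Standard list of sudoers for our servers"
  type        = "list"
  default     = ["alice", "bob"] # made-up example values
}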
Good examples up there!
The main difference between a Terraform data source, resource and variable is:
Resource: provisions resources/infra on our platform. Create, update and delete!
Variable: provides predefined values as variables in our IaC. Used by resources for provisioning.
Data source: fetches values from our infra/provider and provides data for our resources to provision infra/resources.
Examples are well explained above :)
Data sources are used to fetch data from the provider end so that it can be used as configuration in .tf files, instead of hardcoding it. Example: the code below fetches an AWS AMI ID and uses it to launch an AWS instance.
data "aws_ami" "std_ami" {
most_recent = true
owners = ["amazon"]
filter {
name = "root-device-type"
values = ["ebs"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
resource "aws_instance" "myec2" {
ami = data.aws_ami.std_ami.id
instance_type = "t2.micro"
}
I have a module within my Terraform file that creates some database servers and does a few things.
First, it creates an auto scaling group that uses a specific image, then it creates some EBS volumes and attaches them, and then it adds some lambda code so that on launch the instances get registered to Route 53. In all, about 80 lines of text.
Extract
module "systemt-sql-db01" {
source = "localmodules/tf-aws-asg"
name = "${var.envname}-sys-db01"
envname = "${var.envname}"
service = "dbpx"
ami_id = "${data.aws_ami.app_sqlproxy.id}"
user_data = "${data.template_cloudinit_config.config-enforcement-sqlproxy.rendered}"
#subnets = ["${module.subnets-enforcement.web_private_subnets}"]
subnets = ["${element(module.subnets-enforcement.web_private_subnets, 1)}"]
security_groups = ["${aws_security_group.unfiltered-egress-sg.id}", "${aws_security_group.sysopssg.id}", "${aws_security_group.system-sqlproxy.id}"]
key_name = "${var.keypair}"
load_balancers = ["${var.envname}-enf-dbpx-int-elb"]
iam_instance_profile = "${module.iam_profile_generic.profile_arn}"
instance_type = "${var.enforcement_instancesize_dbpx}"
min = 0
max = 0
}
And I then have two parameter files: one that I call when launching to pre-production, and one called when launching to production. I don't want these to contain anything other than variables.
The problem is that for pre-production I need to call the module twice, but for production I need it called three times.
People talk about a count for modules, but I don't think this is possible as yet. Can anyone suggest any other ways to do this? What I would like is to be able to set, in my parameter file, a list variable of all the DB ASG names, and then loop through this, calling the module each time.
I hope that makes sense.
Thank you!
EDIT: Looping in modules is in beta for Terraform 0.13 (https://discuss.hashicorp.com/t/terraform-0-13-beta-released/9555).
This is a highly requested feature in Terraform and as mentioned it is not yet supported. Later releases of Terraform v0.12 will introduce this feature (https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each).
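For what it's worth, once module-level looping is available (Terraform 0.13+ syntax), a sketch of what the question asks for might look like this; the variable name is an assumption:
variable "db_asg_names" {
  type = list(string)
}

module "system-sql-db" {
  source   = "localmodules/tf-aws-asg"
  for_each = toset(var.db_asg_names)

  name = "${var.envname}-${each.value}"
  # ...remaining arguments as in the original module block
}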
I had a similar problem where I had to create multiple KMS keys for multiple accounts from a base KMS module. I ended up creating a second module that uses the core KMS module; this second module had many instances of the core module, but only required me to input the account details once.
This is still not ideal, but it worked well enough without overcomplicating things.
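A rough sketch of that wrapper-module pattern (the module paths and the account variable here are assumptions, not from the original setup):
# modules/kms-accounts/main.tf - wraps the hypothetical core module once per account
module "kms_account_a" {
  source     = "../kms-core"
  account_id = var.account_ids["a"]
}

module "kms_account_b" {
  source     = "../kms-core"
  account_id = var.account_ids["b"]
}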