I have the following problem with Terraform:
I need to create an archive (I use data.archive_file with output_path = ./archive/s3/my_test_archive.zip) and then upload the archive to an AWS S3 bucket (using aws_s3_object).
To check the hash sum I use
etag = filemd5("${data.archive_file.my_archive.output_path}")
But when I run terraform apply I get this error:
on module\main.tf line 20, in resource "aws_s3_object" "object":
  20:   etag = filemd5("${data.archive_file.my_archive.output_path}")
    ├────────────────
    │ while calling filemd5(path)
    │ data.archive_file.my_archive.output_path is "./archive/s3/my_test_archive.zip"
Call to function "filemd5" failed: open archive/s3/my_test_archive.zip: The system cannot find the file specified.
I guess it's because at the moment Terraform evaluates this expression, the file doesn't exist yet.
If I disable the etag check it works, but I need to detect changes in the archive.
How can I resolve this?
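A possible workaround is to reference the hash attributes that the archive_file data source computes itself (for example output_md5) instead of re-reading the file from disk with filemd5(). A minimal sketch, assuming a hypothetical source_dir and bucket name:

data "archive_file" "my_archive" {
  type        = "zip"
  source_dir  = "${path.module}/src"                 # hypothetical source directory
  output_path = "./archive/s3/my_test_archive.zip"
}

resource "aws_s3_object" "object" {
  bucket = "my-test-bucket"                          # hypothetical bucket name
  key    = "my_test_archive.zip"
  source = data.archive_file.my_archive.output_path
  # output_md5 is computed by the data source when it builds the archive,
  # so this expression does not require the zip to already exist on disk.
  etag   = data.archive_file.my_archive.output_md5
}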
I'm trying to understand why the data from my GCS backend says it does not have any outputs.
I have a module called DB which creates a Postgres database.
I have a file called outputs.tf, where I have:
terraform {
backend "gcs" {
bucket = "projectgun-terraform-state"
prefix = "db-workspaces"
}
}
I am using a workspace I called a1.
I run terraform apply and voila, it worked, I created a DB.
Furthermore, when I go into GCS, I can find my bucket and my key. My workspace name is a1 and I have the prefix "db-workspaces", so my remote state is saved in #{my-bucket}/db-workspaces/a1.tfstate.
When I go to that key in my bucket, I see the state JSON, and it clearly contains my outputs.
If I go into my DB module and run terraform state pull, I see the same thing. Everything checks out.
But when I go to my other module and try to access the outputs from GCS, I can't.
I am using workspace a1 there as well.
data "terraform_remote_state" "db" {
backend = "gcs"
config = {
bucket = "projectgun-terraform-state"
prefix = "db-workspaces"
}
}
When I try to access this data via outputs, I see:
│   79:   db_user = data.terraform_remote_state.db.outputs.user
│     ├────────────────
│     │ data.terraform_remote_state.db.outputs is object with no attributes
│
│ This object does not have an attribute named "user".
What am I doing wrong? Is there a better way to debug my issue? How can I be sure which key Terraform is looking at when it attempts to pull the data?
Specifically
data.terraform_remote_state.db.outputs is object with no attributes
Can I debug data.terraform_remote_state? How can I inspect what's going on here? There are very clearly outputs when I look at the remote state, so I feel like it's grabbing the wrong key, but I don't know where to look.
I found a GitHub issue that summarizes the problem I was having, along with a solution:
https://github.com/hashicorp/terraform/issues/24935
The terraform_remote_state data source defaults to the "default" workspace, so without an explicit workspace argument it was reading db-workspaces/default.tfstate instead of db-workspaces/a1.tfstate. Setting the workspace explicitly fixes it:
data "terraform_remote_state" "network" {
backend = "gcs"
workspace = terraform.workspace
config = {
bucket = "tf-state"
prefix = "base-layer/network/"
}
}
This does not seem to be a documented fix. Thank you to #HebertCL for the answer!
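Adapted to the configuration from the question, that would look something like the sketch below; the explicit workspace argument is what makes the data source read db-workspaces/a1.tfstate rather than db-workspaces/default.tfstate.

data "terraform_remote_state" "db" {
  backend   = "gcs"
  # Follow the workspace of the configuration doing the reading,
  # instead of silently falling back to "default".
  workspace = terraform.workspace
  config = {
    bucket = "projectgun-terraform-state"
    prefix = "db-workspaces"
  }
}

# The reference from the question then resolves as expected:
# db_user = data.terraform_remote_state.db.outputs.user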
When I try to run terraform apply in my project, it throws the following error:
Error: Error Updating Kinesis Firehose Delivery Stream: "delivery"
│ InvalidArgumentException: Enabling or disabling Dynamic Partitioning is not supported at the moment
│
│ with module.shippeo-api.module.v1.aws_kinesis_firehose_delivery_stream.event-eta,
│ on ../../modules/api_gateway_v1/kinesis.tf line 12, in resource "aws_kinesis_firehose_delivery_stream" "event-eta":
│ 12: resource "aws_kinesis_firehose_delivery_stream" "event-eta" {
│
╵
because of this part:
resource "aws_kinesis_firehose_delivery_stream" "event-eta" {
name = local.firehose_delivery_stream
destination = "extended_s3"
extended_s3_configuration {
role_arn = var.integration_role_arn
#bucket_arn = aws_s3_bucket.jsonfiles.arn
bucket_arn = var.target_bucket_arn
prefix = "!{partitionKeyFromLambda:apiPath}/!{partitionKeyFromLambda:authorizerClientId}/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/"
#prefix = "!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/"
error_output_prefix = "error/!{timestamp:yyyy}/!{timestamp:MM}/!{timestamp:dd}/!{timestamp:HH}/!{firehose:error-output-type}"
dynamic_partitioning_configuration {
enabled = true
}
I already have the latest version in my provider.tf files:
required_providers {
archive = {
source = "hashicorp/archive"
version = "2.2.0"
}
aws = {
source = "hashicorp/aws"
version = "3.72.0"
}
}
However, when I check terraform version in my terminal, I get this:
Terraform v1.0.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v3.72.0
Your version of Terraform is out of date! The latest version
is 1.1.4. You can update by downloading from https://www.terraform.io/downloads.html
I already tried terraform init -upgrade but that didn't make a difference either. I also manually downloaded Terraform's new version from the website, but my terminal still shows 1.0.7.
Yes, that's a limitation at the moment. Currently, you can enable dynamic partitioning only while creating a new delivery stream, not on an existing delivery stream.
From AWS documentation:
Important: You can enable dynamic partitioning only when you create a new delivery stream. You cannot enable dynamic partitioning for an existing delivery stream that does not have dynamic partitioning already enabled.
This means that if you want to use the feature now, you will have to create a new delivery stream.
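If recreating the stream through Terraform is acceptable (the existing delivery stream is destroyed and recreated, so expect a brief interruption), one option is to force a replacement rather than an in-place update; a sketch using the resource address from the error above:

terraform apply -replace='module.shippeo-api.module.v1.aws_kinesis_firehose_delivery_stream.event-eta'

The -replace planning option is available in Terraform v0.15.2 and later, so the v1.0.7 shown above supports it.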
I am trying to deploy a Lambda layer with a size of 99 MB, and I am getting this error:
│ Error: Error creating lambda layer: RequestEntityTooLargeException:
│ status code: 413, request id: 5a87d055-ba71-47bb-8c60-86d3b00e8dfc
│
│ with aws_lambda_layer_version.aa,
│ on layers.tf line 68, in resource "aws_lambda_layer_version" "aa":
│ 68: resource "aws_lambda_layer_version" "aa" {
This is the .tf
resource "aws_lambda_layer_version" "aa" {
filename = "custom_layers/aa/a.zip"
layer_name = "aa"
compatible_runtimes = ["python3.8"]
}
The zip is in the right location.
According to the AWS Lambda quotas, you cannot have a deployment package (.zip file archive) larger than:
50 MB (zipped, for direct upload)
250 MB (unzipped). This quota applies to all the files you upload, including layers and custom runtimes.
3 MB (console editor)
There's also a paragraph in the AWS Lambda docs for your exact error:
General: Error occurs when calling the UpdateFunctionCode
Error: An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation
When you upload a deployment package or layer archive directly to Lambda, the size of the ZIP file is limited to 50 MB. To upload a larger file, store it in Amazon S3 and use the S3Bucket and S3Key parameters.
You should try to do one of the following:
Split your current Lambda layer into multiple layers
Upload the layer zip to S3 and reference the object in your Terraform layer config (see the sketch after this list)
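A sketch of the second option, assuming the AWS provider v4+ (with older providers, use aws_s3_bucket_object instead of aws_s3_object) and a hypothetical, pre-existing bucket name:

resource "aws_s3_object" "layer_zip" {
  bucket = "my-layer-artifacts"              # hypothetical bucket, must already exist
  key    = "layers/aa/a.zip"
  source = "custom_layers/aa/a.zip"
  etag   = filemd5("custom_layers/aa/a.zip") # re-upload when the zip changes
}

resource "aws_lambda_layer_version" "aa" {
  layer_name          = "aa"
  compatible_runtimes = ["python3.8"]
  # Point Lambda at the object in S3 instead of uploading the zip directly;
  # this avoids the 50 MB direct-upload limit (the 250 MB unzipped limit still applies).
  s3_bucket           = aws_s3_object.layer_zip.bucket
  s3_key              = aws_s3_object.layer_zip.key
}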
I am trying to break down my main.tf file. I have set up AWS Config via Terraform: created the configuration recorder and set the delivery channel to an S3 bucket created in the same main.tf file. Now, for the AWS Config rules, I have created a separate file, config-rule.tf. As is known, every aws_config_config_rule that we create has a depends_on clause in which we reference the dependent resource, which in this case is aws_config_configuration_recorder. So my question is: can I interpolate the depends_on clause to something like this:
resource "aws_config_config_rule" "s3_bucket_server_side_encryption_enabled" {
name = "s3_bucket_server_side_encryption_enabled"
source {
owner = "AWS"
source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
}
depends_on = ["${file("aws-config-setup.tf")}"]
}
This is considering that I move my AWS Config setup from main.tf to a new file called aws-config-setup.tf.
If I'm reading your question correctly, you shouldn't need to make any changes for this to work, assuming you didn't move the code into its own module (a separate directory).
When Terraform executes in a particular directory, it takes all .tf files into account, essentially treating them all as one Terraform configuration.
So, in general, if you had a main.tf that looked like the following
resource "some_resource" "resource_1" {
# ...
}
resource "some_resource" "resource_2" {
# ...
depends_on = [some_resource.resource_1]
}
and you decided to split it out into the following files
file1.tf
resource "some_resource" "resource_1" {
# ...
}
file2.tf
resource "some_resource" "resource_2" {
# ...
depends_on = [some_resource.resource_1]
}
then, if Terraform is run in the same directory, it will evaluate the single-file scenario exactly the same as the multi-file scenario.
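Applied to the question's case, depends_on should reference the recorder's resource address rather than a file name (the file("...") form from the question is not valid there). A sketch, assuming the recorder defined in aws-config-setup.tf is named "recorder" (hypothetical name):

resource "aws_config_config_rule" "s3_bucket_server_side_encryption_enabled" {
  name = "s3_bucket_server_side_encryption_enabled"
  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  }
  # Reference the resource address; the name of the file that defines it is irrelevant.
  depends_on = [aws_config_configuration_recorder.recorder]
}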
I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads fine, but I have two problems here:
The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
When running terraform apply again, it says Apply complete! Resources: 0 added, 0 changed, 0 destroyed. I would expect it to upload every time I run terraform apply and create a new version.
What am I doing wrong? Here is my Terraform config:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my_bucket_name"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "my_files.zip"
}
output "my_bucket_file_version" {
value = "${aws_s3_bucket_object.file_upload.version_id}"
}
Terraform only makes changes to the remote object when it detects a difference between the configuration and the remote object's attributes. The configuration as you've written it so far includes only the filename; it includes nothing about the content of the file, so Terraform can't react to the file changing.
To make subsequent changes, there are a few options:
You could use a different local filename for each new version.
You could use a different remote object path for each new version.
You can use the object etag to let Terraform recognize when the content has changed, regardless of the local filename or object path.
The last of these seems closest to what you want in this case. To do that, add the etag argument and set it to an MD5 hash of the file:
resource "aws_s3_bucket_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = "${path.module}/my_files.zip"
etag = "${filemd5("${path.module}/my_files.zip")}"
}
With that extra argument in place, Terraform will detect when the MD5 hash of the file on disk is different than that stored remotely in S3 and will plan to update the object accordingly.
(I'm not sure what's going on with version_id. It should work as long as versioning is enabled on the bucket.)
The preferred solution is now to use the source_hash property. Note that aws_s3_bucket_object has been replaced by aws_s3_object.
locals {
object_source = "${path.module}/my_files.zip"
}
resource "aws_s3_object" "file_upload" {
bucket = "my_bucket"
key = "my_bucket_key"
source = local.object_source
source_hash = filemd5(local.object_source)
}
Note that etag can have issues when encryption is used.
You shouldn't be using Terraform to do this. Terraform is meant to orchestrate and provision your infrastructure and its configuration, not files. That said, Terraform is not aware of changes to your files: unless you change their names, Terraform will not update the state.
Also, it may be better to use a local-exec provisioner to do that. Something like:
resource "aws_s3_bucket" "my-bucket" {
# ...
provisioner "local-exec" {
command = "aws s3 cp path_to_my_file ${aws_s3_bucket.my-bucket.id}"
}
}