I have successfully applied the following Terraform code:
resource "aws_ssm_association" "webssmassoc" {
name = "arn:aws:ssm:eu-west-1:*********:document/a4s-bl-automation"
association_name = "${var.service_name}-dt-webserver-association"
parameters = {
AssumeRole = aws_iam_role.dt_automation_role.arn
InstanceId = data.aws_instance.webinstance.id
}
apply_only_at_cron_interval = true
schedule_expression = "cron(0 14 ? * ${local.dayOfWeek} *)"
}
I now make a small change to the schedule expression and run terraform plan; Terraform detects the change properly:
# aws_ssm_association.webssmassoc will be updated in-place
~ resource "aws_ssm_association" "webssmassoc" {
id = "0b9ee1a4-6011-4a9c-9055-2cca172b061e"
name = "arn:aws:ssm:eu-west-1:497882509041:document/a4s-oneagent-reboot-automation"
~ schedule_expression = "cron(0 14 ? * TUE *)" -> "cron(0 15 ? * TUE *)"
# (6 unchanged attributes hidden)
# (1 unchanged block hidden)
}
When I run terraform apply, it errors out.
│ Error: Error updating SSM association: ValidationException: Must specify both Automation Target Parameter Name and Targets
│ status code: 400, request id: e4e80ff6-1235-4355-9221-2031a1fb922d
│
│ with aws_ssm_association.webssmassoc,
│ on main.tf line 118, in resource "aws_ssm_association" "webssmassoc":
│ 118: resource "aws_ssm_association" "webssmassoc" {
Extra information:
The document being associated is an automation document.
Provider being used: hashicorp/aws v4.33.0
Terraform: v1.1.5
Is this a Terraform bug, or is it working the way it is intended to?
Thanks in advance.
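For reference, the aws_ssm_association resource does expose automation_target_parameter_name and targets arguments. Below is an untested sketch of what the validation error seems to be asking for; treating InstanceId as the automation's target parameter is an assumption on my part.
resource "aws_ssm_association" "webssmassoc" {
  name             = "arn:aws:ssm:eu-west-1:*********:document/a4s-bl-automation"
  association_name = "${var.service_name}-dt-webserver-association"

  # Untested guess at what the UpdateAssociation validation wants
  # for an Automation document:
  automation_target_parameter_name = "InstanceId"
  targets {
    key    = "ParameterValues"
    values = [data.aws_instance.webinstance.id]
  }

  parameters = {
    AssumeRole = aws_iam_role.dt_automation_role.arn
  }

  apply_only_at_cron_interval = true
  schedule_expression         = "cron(0 14 ? * ${local.dayOfWeek} *)"
}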
I am trying to update a test AWS Transfer Server because I was unable to connect to it via SFTP.
I am now trying to use the FTP/FTPS protocols instead, using the same layout as the example in the docs.
This is the example from the docs:
resource "aws_transfer_server" "example" {
endpoint_type = "VPC"
endpoint_details {
subnet_ids = [aws_subnet.example.id]
vpc_id = aws_vpc.example.id
}
protocols = ["FTP", "FTPS"]
certificate = aws_acm_certificate.example.arn
identity_provider_type = "API_GATEWAY"
url = "${aws_api_gateway_deployment.example.invoke_url}${aws_api_gateway_resource.example.path}"
}
And here is my code
resource "aws_transfer_server" "transfer_x3" {
tags = {
Name = "${var.app}-${var.env}-transfer-x3-server"
}
endpoint_type = "VPC"
endpoint_details {
vpc_id = data.aws_vpc.vpc_global.id
subnet_ids = [data.aws_subnet.vpc_subnet_pri_commande_a.id, data.aws_subnet.vpc_subnet_pri_commande_b.id]
}
protocols = ["FTP", "FTPS"]
certificate = var.certificate_arn
identity_provider_type = "API_GATEWAY"
url = "https://${aws_api_gateway_rest_api.Api.id}.execute-api.${var.region}.amazonaws.com/latest/servers/{serverId}/users/{username}/config"
invocation_role = data.aws_iam_role.terraform-commande.arn
}
And here is the error message
╷
│ Error: error creating Transfer Server: InvalidRequestException: Bad value in IdentityProviderDetails
│
│ with aws_transfer_server.transfer_x3,
│ on transfer-x3.tf line 1, in resource "aws_transfer_server" "transfer_x3":
│ 1: resource "aws_transfer_server" "transfer_x3" {
│
╵
My guess is that it doesn't like the value of the url parameter.
I have tried using the same form as the one provided in the example, url = "${aws_api_gateway_deployment.ApiDeployment.invoke_url}${aws_api_gateway_resource.ApiResourceServerIdUserUsernameConfig.path}", but encountered the same error message.
I have also tried reordering the parameters in case that was the issue, but I got the same error every time I ran terraform apply.
The commands terraform validate and terraform plan did not show the error message at all.
What value does the url parameter need? Or is there a parameter missing from my resource declaration?
As per the documentation (CloudFormation in this case) [1], the examples show that the only thing needed is the invoke URL of the API Gateway:
...
"IdentityProviderDetails": {
    "InvocationRole": "Invocation-Role-ARN",
    "Url": "API_GATEWAY-Invocation-URL"
},
"IdentityProviderType": "API_GATEWAY",
...
Comparing that to the attributes provided by the API Gateway stage resource in Terraform, the only thing that is needed is the invoke_url attribute [2].
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-transfer-server.html#aws-resource-transfer-server--examples
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/api_gateway_stage#invoke_url
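For example, here is a minimal sketch of what the url argument could look like when sourced from a stage's invoke_url; the aws_api_gateway_stage.ApiStage resource name is hypothetical and the rest of the resource is as in your question.
resource "aws_transfer_server" "transfer_x3" {
  # ... endpoint, protocol and certificate settings as in the question ...
  identity_provider_type = "API_GATEWAY"
  url                    = aws_api_gateway_stage.ApiStage.invoke_url # hypothetical stage resource
  invocation_role        = data.aws_iam_role.terraform-commande.arn
}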
Hello, one of my Terraform bootstrap modules for GCP contains:
resource "google_organization_iam_member" "organizationAdmin" {
for_each = toset(var.users)
org_id = var.organization_id
role = "roles/resourcemanager.organizationAdmin"
member = each.value
}
Right now I'm getting:
Error: Error retrieving IAM policy for organization "903021035085 ": googleapi: Error 400: Request contains an invalid argument., badRequest
│
│ with module.bootstrap_permissions.google_organization_iam_member.organizationAdmin["group:gcp-organization-admins#juliusoh.tech"],
│ on ../modules/bootstrap_permissions/main.tf line 1, in resource "google_organization_iam_member" "organizationAdmin":
│ 1: resource "google_organization_iam_member" "organizationAdmin" {
The account making the request has Owner permission at the organization level. Is there a reason why I am getting this error when I run terraform plan?
The value of var.organization_id has a trailing space (see the error message), e.g., "123 " instead of "123". Remove this space and it should work.
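If you cannot control where the value comes from, you could also strip the whitespace defensively inside the module; a minimal sketch using Terraform's built-in trimspace function:
resource "google_organization_iam_member" "organizationAdmin" {
  for_each = toset(var.users)
  org_id   = trimspace(var.organization_id) # drops the stray trailing space
  role     = "roles/resourcemanager.organizationAdmin"
  member   = each.value
}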
I'm having problems creating a new EKS version 1.22 in a dev environment.
I'm using the EKS module from the Terraform Registry, trimming some parts since it's only for testing purposes (we just want to test version 1.22).
I'm using a VPC that was created for testing EKS's, and 2 public subnets and 2 private subnets.
This is my main.tf:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.21.0"
cluster_name = "EKSv2-update-test"
cluster_version = "1.22"
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = "vpc-xxx" # eks-vpc
subnet_ids = ["subnet-priv-1-xxx", "subnet-priv-2-xxx", "subnet-pub-1-xxx", "subnet-pub-2-xxx"]
}
terraform apply times out after 20 minutes (it just hangs on module.eks.aws_eks_addon.this["coredns"]: Still creating... [20m0s elapsed]), and this is the error:
│ Error: unexpected EKS Add-On (EKSv2-update-test:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'DEGRADED', timeout: 20m0s)
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│
│ with module.eks.aws_eks_addon.this["coredns"],
│ on .terraform/modules/eks/main.tf line 305, in resource "aws_eks_addon" "this":
│ 305: resource "aws_eks_addon" "this" {
The EKS cluster gets created, but this is clearly not the way to go.
What am I missing regarding coredns?
Thanks
A minimum of two cluster nodes is required for the coredns add-on to satisfy the requirements of its replica set.
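In other words, the cluster needs worker nodes for the two coredns replicas to schedule onto. A minimal sketch of adding a managed node group to the module from the question; the group name and instance sizes are illustrative assumptions, not values from the original configuration.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.21.0"

  # ... cluster settings as in the question ...

  eks_managed_node_groups = {
    default = {
      min_size       = 2 # coredns runs two replicas by default
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.medium"]
    }
  }
}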
I am building a Lambda in Terraform using its AWS module, and my code is as below:
module "lambda_function" {
# * Lambda module configs
source = "terraform-aws-modules/lambda/aws"
version = "3.0.0"
# * Lambda Configs
function_name = "${var.function_name}-${var.env}"
description = "My Project"
handler = local.constants.lambda.HANDLER
runtime = local.constants.lambda.VERSION
memory_size = 128
cloudwatch_logs_retention_in_days = 14
source_path = "./function/"
timeout = local.constants.lambda.TIMEOUT
create_async_event_config = true
maximum_retry_attempts = local.constants.lambda.RETRIES_ATTEMPT
layers = [
data.aws_lambda_layer_version.layer_requests.arn
]
environment_variables = {
AWS_ACCOUNT = var.env
SLACK_HOOK_CHANNEL = var.SLACK_HOOK_CHANNEL
}
tags = {
Name = "${var.function_name}-${var.env}"
}
trusted_entities = local.constants.lambda.TRUSTED_ENTITIES
}
This code works fine and the Lambda gets deployed. Now I need to put the Lambda in a VPC. When I add the code below to the module block, I get this error:
│ Error: error modifying Lambda Function (lambda_name) configuration : ValidationException:
│ status code: 400, request id: de2641f6-1125-4c83-87fa-3fe32dee7b06
│
│ with module.lambda_function.aws_lambda_function.this[0],
│ on .terraform/modules/lambda_function/main.tf line 22, in resource "aws_lambda_function" "this":
│ 22: resource "aws_lambda_function" "this" {
The code for the VPC is:
# * VPC configurations
vpc_subnet_ids         = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
vpc_security_group_ids = ["sg-ffffffffff"] # Using a dummy value here
attach_network_policy  = true
If I use the same values in the AWS console and deploy the Lambda in the VPC, it works fine.
Can someone please help?
You have to provide valid subnet IDs, not CIDR ranges. So instead of
vpc_subnet_ids = ["10.21.0.0/26", "10.21.0.64/26", "10.21.0.128/26"]
it should be
vpc_subnet_ids = ["subnet-asfid1", "subnet-asfid2", "subnet-as4id1"]
I've been able to deploy for months, and now suddenly this morning I am getting this error:
│ Error: Error while updating cloudfunction configuration: Error waiting for Updating CloudFunctions Function: Error code 3, message: Build failed: curl: (22) The requested URL returned error: 404
│
│ gzip: stdin: unexpected end of file
│ tar: Child returned status 1
│ tar: Error is not recoverable: exiting now; Error ID: 637fe2a4
│
│ with google_cloudfunctions_function.syncFiles,
│ on functions.tf line 396, in resource "google_cloudfunctions_function" "syncFiles":
│ 396: resource "google_cloudfunctions_function" "syncFiles" {
│
This is the Terraform configuration. We zip the directory and give the archive to Cloud Functions to deploy:
data "archive_file" "source-zip" {
type = "zip"
source_dir = "${path.root}/../dist/"
output_path = "${path.root}/../dist/files/${var.app_name}.zip"
excludes = ["files/**"]
}
resource "google_storage_bucket_object" "deploy-zip" {
name = "${var.app_name}/${var.app_name}-${data.archive_file.source-zip.output_md5}.zip"
bucket = "${var.env_name}-deploy"
source = "${path.root}/../dist/files/${var.app_name}.zip"
depends_on = [data.archive_file.source-zip]
}
output "deploy_zip" {
value = google_storage_bucket_object.deploy-zip.name
}
What could cause this error?
Is this an internal problem?
I have a ticket open with Google support but nothing useful yet.
Please go to Cloud Build, select your region, and look at the build history/logs; that should tell you what is failing.
It is possibly a package issue.