I'm using Terraform 1.3.5 and this module previously worked flawlessly until I renamed it. Now I am getting this error:
Error: creating EventBridge Target (cleanup-terraform-20221130175229684800000001): ValidationException: RoleArn is required for target arn:aws:events:us-east-1:123456789012:api-destination/services-destination/c187090f-268b-4d9b-b09d-f9b077e0c0cf.
│ status code: 400, request id: 63dc6425-2a94-4f66-b7c2-106b0607d964
│
│ with module.a-eventbridge-trigger.aws_cloudwatch_event_target.api_destination,
│ on ..\a-eventbridge-trigger\main.tf line 61, in resource "aws_cloudwatch_event_target" "api_destination":
│ 61: resource "aws_cloudwatch_event_target" "api_destination" {
Here is the complete content of the main.tf in the module:
# configures api connection
resource "aws_cloudwatch_event_connection" "auth" {
  name               = "services-token"
  description        = "Gets oauth bearer token"
  authorization_type = "OAUTH_CLIENT_CREDENTIALS"

  auth_parameters {
    oauth {
      authorization_endpoint = "${var.vars.apiBaseUrl}${var.vars.auth}"
      http_method            = "POST"

      client_parameters {
        client_id     = var.secretContent.Client_Id
        client_secret = var.secretContent.Client_Secret
      }

      oauth_http_parameters {
        body {
          key             = "grant_type"
          value           = "client_credentials"
          is_value_secret = true
        }
        body {
          key             = "client_id"
          value           = var.secretContent.Client_Id
          is_value_secret = true
        }
        body {
          key             = "client_secret"
          value           = var.secretContent.Client_Secret
          is_value_secret = true
        }
      }
    }
  }
}

# configures api destination
resource "aws_cloudwatch_event_api_destination" "request" {
  name                             = "services-destination"
  description                      = "Requests clean up"
  invocation_endpoint              = "${var.vars.apiBaseUrl}${var.vars.endpoint}"
  http_method                      = "POST"
  invocation_rate_limit_per_second = 20
  connection_arn                   = aws_cloudwatch_event_connection.auth.arn
}

# sets up the scheduling
resource "aws_cloudwatch_event_rule" "every_midnight" {
  name                = "${var.name}-services-cleanup"
  description         = "Fires on every day at midnight of UTC+0"
  schedule_expression = "cron(0 0 * * ? *)"
  is_enabled          = true
}

# tells the scheduler to call the api destination
resource "aws_cloudwatch_event_target" "api_destination" {
  rule = aws_cloudwatch_event_rule.every_midnight.name
  arn  = aws_cloudwatch_event_api_destination.request.arn
}
And the module is called like this from the root module:
module "a-eventbridge-trigger" {
source = "../a-eventbridge-trigger"
name = local.prefixName
resourceTags = local.commonTags
vars = var.vars
secretContent = var.secrets
}
Here is the providers.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.43.0"
    }
  }

  backend "s3" {}
}
What am I missing and why would it stop working suddenly?
I have run a complete destroy and fresh apply but I still get this.
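Reading the error literally, it seems the target now needs a role_arn with permission to invoke the API destination. In case it helps, this is roughly the shape I assume that fix would take; the role and policy names below are my own invention, not anything the provider dictates:

# assumed fix: give the target an invocation role (names are illustrative)
resource "aws_iam_role" "eventbridge_invoke" {
  name = "${var.name}-eventbridge-invoke"

  # lets EventBridge assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "events.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "invoke_api_destination" {
  name = "invoke-api-destination"
  role = aws_iam_role.eventbridge_invoke.id

  # allows the rule target to call the API destination
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "events:InvokeApiDestination"
      Resource = aws_cloudwatch_event_api_destination.request.arn
    }]
  })
}

resource "aws_cloudwatch_event_target" "api_destination" {
  rule     = aws_cloudwatch_event_rule.every_midnight.name
  arn      = aws_cloudwatch_event_api_destination.request.arn
  role_arn = aws_iam_role.eventbridge_invoke.arn
}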
UPDATE
I got the variable working, and it now passes terraform plan with flying colors. That said, when I run terraform apply I get a new error:
creating CodePipeline (dev-mgt-mytest-cp): ValidationException: 2 validation errors detected:
Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint: [Member must have length less than or equal to 50000,
Member must have length greater than or equal to 1];
Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint: [Member must have length less than or equal to 50000,
Member must have length greater than or equal to 1]
I don't believe this is a CodePipeline limit, since I have built this pipeline manually without dynamic stages and it works fine. Not sure if this is a Terraform hard limit. Looking for some help here. Also, I have updated the code with the working variable for those looking for the syntax.
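One theory I want to test: the "length greater than or equal to 1" half of that constraint would fire if any configuration value reaches the API empty, and my configuration block (in the module below) sets keys to null when an action doesn't use them. Filtering the nulls out with a for expression should rule that in or out; a sketch against that configuration block:

configuration = {
  for k, v in {
    RepositoryName       = lookup(action.value, "repository_name", null)
    ProjectName          = lookup(action.value, "ProjectName", null)
    BranchName           = lookup(action.value, "branch_name", null)
    PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
    OutputArtifactFormat = lookup(action.value, "ouput_format", null)
  } : k => v if v != null # drop null entries so no empty values are sent
}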
OLD POST
================================================================
I am taking my first stab at creating dynamic stages and really struggling with the documentation out there. What I have put together so far is based on articles found here on Stack Overflow and a few resources online. So far I think I have good syntax, but the value I am passing from my main.tf is getting an error:
The given value is not suitable for module.test_code.var.stages declared at
../dynamic_pipeline/variables.tf:60,1-18: all list elements must have the same type.
Part 1
All I am basically trying to do is pass dynamic stages into the pipeline. Once I get the stages working, I will add the new dynamic variables. I am providing the dynamic module, the variables.tf for the module, and then my test run along with its variables.
dynamic_pipeline.tf
resource "aws_codepipeline" "cp_plan_pipeline" {
name = "${local.cp_name}-cp"
role_arn = var.cp_run_role
artifact_store {
type = var.cp_artifact_type
location = var.cp_artifact_bucketname
}
dynamic "stage" {
for_each = [for s in var.stages : {
name = s.name
action = s.action
} if(lookup(s, "enabled", true))]
content {
name = stage.value.name
dynamic "action" {
for_each = stage.value.action
content {
name = action.value["name"]
owner = action.value["owner"]
version = action.value["version"]
category = action.value["category"]
provider = action.value["provider"]
run_order = lookup(action.value, "run_order", null)
namespace = lookup(action.value, "namespace", null)
region = lookup(action.value, "region", data.aws_region.current.name)
input_artifacts = lookup(action.value, "input_artifacts", [])
output_artifacts = lookup(action.value, "output_artifacts", [])
configuration = {
RepositoryName = lookup(action.value, "repository_name", null)
ProjectName = lookup(action.value, "ProjectName", null)
BranchName = lookup(action.value, "branch_name", null)
PollForSourceChanges = lookup(action.value, "poll_for_sourcechanges", null)
OutputArtifactFormat = lookup(action.value, "ouput_format", null)
}
}
}
}
}
}
variables.tf
#---------------------------------------------------------------------------------------------------
# General
#---------------------------------------------------------------------------------------------------
variable "region" {
type = string
description = "The AWS Region to be used when deploying region-specific resources (Default: us-east-1)"
default = "us-east-1"
}
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_name" {
type = string
description = "The name of the codepipline"
}
variable "cp_repo_name" {
type = string
description = "Then name of the repo that will be used as a source repo to trigger builds"
}
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_artifact_bucketname" {
type = string
description = "name of the artifact bucket where articacts are stored."
default = "Codepipeline-artifacts-s3"
}
variable "cp_run_role" {
type = string
description = "S3 artifact bucket name."
}
variable "cp_artifact_type" {
type = string
description = ""
default = "S3"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs"
default = "CODE_ZIP"
}
variable "stages" {
type = list(object({
name = string
action = list(object({
name = string
owner = string
version = string
category = string
provider = string
run_order = number
namespace = string
region = string
input_artifacts = list(string)
output_artifacts = list(string)
repository_name = string
ProjectName = string
branch_name = string
poll_for_sourcechanges = bool
output_format = string
}))
}))
description = "This list describes each stage of the build"
}
#---------------------------------------------------------------------------------------------------
# ENVIORNMENT VARIABLES
#---------------------------------------------------------------------------------------------------
variable "env" {
type = string
description = "The environment to deploy resources (dev | test | prod | sbx)"
default = "dev"
}
variable "tenant" {
type = string
description = "The Service Tenant in which the IaC is being deployed to"
default = "dummytenant"
}
variable "project" {
type = string
description = "The Project Name or Acronym. (Note: You should consider setting this in your Enviornment Variables.)"
}
#---------------------------------------------------------------------------------------------------
# Parameter Store Variables
#---------------------------------------------------------------------------------------------------
variable "bucketlocation" {
type = string
description = "location within the S3 bucket where the State file resides"
}
Part 2
That is the main makeup of the pipeline. Below is the module call I put together as a test to ensure it works. This is where I am getting the error.
main.tf
module "test_code" {
  source         = "../dynamic_pipeline"
  cp_name        = "dynamic-actions"
  project        = "my_project"
  bucketlocation = var.backend_bucket_target_name
  cp_run_role    = "arn:aws:iam::xxxxxxxxx:role/cp-deploy-service-role"
  cp_repo_name   = var.repo

  stages = [
    {
      name = "part 1"
      action = [{
        name                   = "Source"
        owner                  = "AWS"
        version                = "1"
        category               = "Source"
        provider               = "CodeCommit"
        run_order              = 1
        repository_name        = "my_target_repo"
        branch_name            = "main"
        poll_for_sourcechanges = true
        output_artifacts       = ["CodeWorkspace"]
        ouput_format           = var.cp_ouput_format
      }]
    },
    {
      name = "part 2"
      action = [{
        run_order        = 1
        name             = "Combine_Binaries"
        owner            = "AWS"
        version          = "1"
        category         = "Build"
        provider         = "CodeBuild"
        namespace        = "BIN"
        input_artifacts  = ["CodeWorkspace"]
        output_artifacts = ["CodeSource"]
        ProjectName      = "test_runner"
      }]
    }
  ]
}
The variables file associated with the test run:
variables.tf
#---------------------------------------------------------------------------------------------------
# CODEPIPELINE VARIABLES
#---------------------------------------------------------------------------------------------------
variable "cp_branch_name" {
type = string
description = "The branch of the repo that will be watched and used to trigger deployment"
default = "development"
}
variable "cp_poll_sources" {
description = "Trigger that lets codepipeline know that it needs to trigger build on change"
type = bool
default = false
}
variable "cp_ouput_format" {
type = string
description = "Output artifacts format that is used to save the outputs. Values can be CODEBUILD_CLONE_REF or CODE_ZIP"
default = "CODE_ZIP"
}
variable "backend_bucket_target_name" {
type = string
description = "The folder name where the state file is stored for the pipeline"
default = "dynamic-test-pl"
}
variable "repo" {
type = string
description = "name of the repo the pipeine is managing"
default = "my_target_repo"
}
I know this is my first time trying this, and I am not very good with lists and maps in Terraform, but I am certain it has to do with the way I am passing the value in. Any help or guidance would be appreciated.
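A note on the type error itself, for anyone hitting the same message: with list(object({...})), Terraform requires every element of the list to supply every declared attribute, and the two stage objects above each omit different action fields, so Terraform infers two incompatible element types. On Terraform 1.3+, one way to keep the declaration strict while allowing omissions is optional(); a sketch (only the shape matters here):

variable "stages" {
  type = list(object({
    name = string
    action = list(object({
      name      = string
      owner     = string
      version   = string
      category  = string
      provider  = string
      run_order = optional(number)
      namespace = optional(string)
      # ...mark the remaining per-action fields optional() the same way,
      # so elements that omit them still satisfy the element type
    }))
  }))
}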
After some time, I finally found the answer to this issue. Special thanks to this thread on GitHub, which pointed me in the right direction. A couple of things to take away: variable declaration is the essential part of a dynamic pipeline. I worked with several examples that yielded great results for stages and actions, but they all crashed once the configuration's environment variables came into play. The root problem, as far as I can tell, is that you cannot build dynamic actions containing environment variables and expect Terraform to perform the JSON translation for you. In some cases it would work, but it required every action to contain similar elements, which led to the character-constraint errors my post called out.
My best guess is that Terraform has a hard limit on the size of individual variables. The solution: declare the whole resource dynamically, which appears to be subject to different limits than traditional variables inside a resource. This approach makes the entire Terraform resource a dynamic attribute, which I suspect Terraform treats differently, with fewer limits (an assumption). I say that because I tried four methods of dynamic stages and actions; each worked until I introduced the environment variables (which force a JSON conversion on that specific resource type), at which point I got various errors, all pointing either at an unsupported or missing attribute or at a variable exceeding Terraform's character limits.
What worked was creating the entire resource as a dynamic resource, driven by a map attribute that includes the EnvironmentVariables. See the examples below.
Final Dynamic Pipeline
resource "aws_codepipeline" "codepipeline" {
for_each = var.code_pipeline
name = "${local.name_prefix}-${var.AppName}"
role_arn = each.value["code_pipeline_role_arn"]
tags = {
Pipeline_Key = each.key
}
artifact_store {
type = lookup(each.value, "artifact_store", null) == null ? "" : lookup(each.value.artifact_store, "type", "S3")
location = lookup(each.value, "artifact_store", null) == null ? null : lookup(each.value.artifact_store, "artifact_bucket", null)
}
dynamic "stage" {
for_each = lookup(each.value, "stages", {})
iterator = stage
content {
name = lookup(stage.value, "name")
dynamic "action" {
for_each = lookup(stage.value, "actions", {}) //[stage.key]
iterator = action
content {
name = action.value["name"]
category = action.value["category"]
owner = action.value["owner"]
provider = action.value["provider"]
version = action.value["version"]
run_order = action.value["run_order"]
input_artifacts = lookup(action.value, "input_artifacts", null)
output_artifacts = lookup(action.value, "output_artifacts", null)
configuration = action.value["configuration"]
namespace = lookup(action.value, "namespace", null)
}
}
}
}
}
Calling Dynamic Pipeline
module "code_pipeline" {
source = "../module-aws-codepipeline" #using module locally
#source = "your-github-repository/aws-codepipeline" #using github repository
AppName = "My_new_pipeline"
code_pipeline = local.code_pipeline
}
Sample local pipeline variable
locals {
  /*
  DECLARE environment variables. Note: not every action requires environment variables.
  */
  action_second_stage_variables = [
    {
      name  = "PIPELINE_EXECUTION_ID"
      type  = "PLAINTEXT"
      value = "#{codepipeline.PipelineExecutionId}"
    },
    {
      name  = "NamespaceVariable"
      type  = "PLAINTEXT"
      value = "some_value"
    },
  ]

  action_third_stage_variables = [
    {
      name  = "PL_VARIABLE_1"
      type  = "PLAINTEXT"
      value = "VALUE1"
    },
    {
      name  = "PL_VARIABLE_2"
      type  = "PLAINTEXT"
      value = "VALUE2"
    },
    {
      name  = "PL_VARIABLE_3"
      type  = "PLAINTEXT"
      value = "VALUE3"
    },
    {
      name  = "PL_VARIABLE_4"
      type  = "PLAINTEXT"
      value = "#{BLD.NamespaceVariable}"
    },
  ]

  /*
  BUILD YOUR STAGES
  */
  code_pipeline = {
    codepipeline-configs = {
      code_pipeline_role_arn = "arn:aws:iam::aws_account_name:role/role_name"

      artifact_store = {
        type            = "S3"
        artifact_bucket = "your-aws-bucket-name"
      }

      stages = {
        stage_1 = {
          name = "Download"
          actions = {
            action_1 = {
              run_order        = 1
              category         = "Source"
              name             = "First_Stage"
              owner            = "AWS"
              provider         = "CodeCommit"
              version          = "1"
              output_artifacts = ["download_output"]
              configuration = {
                RepositoryName       = "Codecommit_target_repo"
                BranchName           = "main"
                PollForSourceChanges = true
                OutputArtifactFormat = "CODE_ZIP"
              }
            }
          }
        }
        stage_2 = {
          name = "Build"
          actions = {
            action_1 = {
              run_order        = 2
              category         = "Build"
              name             = "Second_Stage"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              namespace        = "BLD"
              input_artifacts  = ["download_output"]
              output_artifacts = ["build_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_second_stage"
                EnvironmentVariables = jsonencode(local.action_second_stage_variables)
              }
            }
          }
        }
        stage_3 = {
          name = "Validation"
          actions = {
            action_1 = {
              run_order        = 1
              name             = "Third_Stage"
              category         = "Build"
              owner            = "AWS"
              provider         = "CodeBuild"
              version          = "1"
              input_artifacts  = ["build_outputs"]
              output_artifacts = ["validation_outputs"]
              configuration = {
                ProjectName          = "codebuild_project_name_for_third_stage"
                EnvironmentVariables = jsonencode(local.action_third_stage_variables)
              }
            }
          }
        }
      }
    }
  }
}
The trick is building your entire pipeline, with all its stages, actions, and EnvironmentVariables, at the local level. You take your local.tf and build the pipeline variable there: all your stages, actions, and EnvironmentVariables. The EnvironmentVariables are jsonencoded in place and pass through the module as part of a single variable. A sample explaining this approach can be found in this GitHub repository. I took the findings, consolidated them, and documented them so others could leverage this method.
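For completeness, here is a sketch of how the module side can declare that single pass-through variable. The any type is my assumption: the nested stages/actions map mixes strings, lists, and maps, so a strict object type would defeat the flexibility shown above.

variable "AppName" {
  type        = string
  description = "Name suffix for the pipeline"
}

variable "code_pipeline" {
  # assumed type: the nested map mixes value types, so `any` keeps it flexible
  type        = any
  description = "Map of pipelines with stages, actions, and jsonencoded EnvironmentVariables"
}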
I'm trying to create multiple AWS accounts in an Organization, each containing resources. The resources should be owned by the created accounts. For that I created a module for the accounts:
resource "aws_organizations_account" "this" {
name = var.customer
email = var.email
parent_id = var.parent_id
role_name = "OrganizationAccountAccessRole"
provider = aws.src
}
resource "aws_s3_bucket" "this" {
bucket = "exconcept-terraform-state-${var.customer}"
provider = aws.dst
depends_on = [
aws_organizations_account.this
]
}
output "account_id" {
value = aws_organizations_account.this.id
}
output "account_arn" {
value = aws_organizations_account.this.arn
}
My provider file for the module:
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      version               = "~> 4.0"
      configuration_aliases = [aws.src, aws.dst]
    }
  }
}
In the root module I'm calling the module like this:
module "account" {
source = "./modules/account"
for_each = var.accounts
customer = each.value["customer"]
email = each.value["email"]
# close_on_deletion = true
parent_id = aws_organizations_organizational_unit.testing.id
providers = {
aws.src = aws.default
aws.dst = aws.customer
}
}
Since the provider configuration comes from the root module, and the accounts are created with a for_each map, how can I point aws.dst at the account that is currently being created?
Here is my root provider file:
provider "aws" {
region = "eu-central-1"
profile = "default"
alias = "default"
}
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::${module.account[each.key].account_id}:role/OrganizationAccountAccessRole"
}
alias = "customer"
region = "eu-central-1"
}
With Terraform init I got this error:
Error: Cycle: module.account.aws_s3_bucket_versioning.this, module.account.aws_s3_bucket.this, provider["registry.terraform.io/hashicorp/aws"].customer, module.account.aws_s3_bucket_acl.this, module.account (close)
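For context on why this cycles: a provider configuration cannot depend on attributes of resources created in the same run, and each.key has no meaning inside a root provider block, so aws.customer can never point at "the account currently being created". The usual workaround is to split this into two configurations: one that creates the accounts, and a second whose provider is built from values known before apply. A rough sketch of that second phase (the variable name is illustrative):

provider "aws" {
  alias  = "customer"
  region = "eu-central-1"

  assume_role {
    # account id supplied as an input (e.g. from the first phase's outputs
    # or terraform_remote_state), not from a resource in this same run
    role_arn = "arn:aws:iam::${var.customer_account_id}:role/OrganizationAccountAccessRole"
  }
}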
I am creating an AWS CodePipeline resource with terraform:
resource "aws_codepipeline" "codepipeline" {
name = "codepipeline-tst"
role_arn = "${aws_iam_role.codepipeline_role.arn}"
artifact_store {
location = "codepipeline-eu-east-1-<ACC_NUM>"
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["artifact"]
configuration = {
Owner = "MyOwner"
Repo = "MyRepo"
Branch = "master"
OAuthToken = ""
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "CodeDeploy"
input_artifacts = ["artifact"]
version = "1"
}
}
}
When running terraform apply, after two minutes of aws_codepipeline.codepipeline: Still creating..., it returns:
Error: Error creating CodePipeline: ValidationException: 1 validation error detected: Value at 'pipeline.stages.1.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
Edit:
The new deploy stage is:
stage {
  name = "Deploy"

  action {
    name            = "Deploy"
    category        = "Deploy"
    owner           = "AWS"
    provider        = "CodeDeploy"
    input_artifacts = ["artifact"]
    version         = "1"

    configuration = {
      ApplicationName     = "my-app"
      DeploymentGroupName = "bar"
    }
  }
}
I have this app created using:
resource "aws_codedeploy_app" "my-app" {
compute_platform = "Server"
name = "my-app"
}
And the group using:
resource "aws_codedeploy_deployment_group" "my_group-2" {
app_name = "my-app"
deployment_group_name = "bar"
service_role_arn = "arn..."
ec2_tag_filter {
key = "aws:autoscaling:groupName"
type = "KEY_AND_VALUE"
value = "MyContainerService"
}
auto_rollback_configuration {
enabled = false
}
}
Your "Deploy" action has not 'configuration' properties which is required.
CodeDeploy action requires two configuration parameter:
Application name
Deployment group
Please add them to this Action.
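A sketch of what that can look like, referencing the resources from the question so the names stay in sync instead of hard-coding strings:

configuration = {
  # pull the values from the CodeDeploy resources defined above
  ApplicationName     = aws_codedeploy_app.my-app.name
  DeploymentGroupName = aws_codedeploy_deployment_group.my_group-2.deployment_group_name
}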
When I execute terraform apply it throws the error below. All configuration and connectivity from my local machine works fine; I checked with gcloud auth login and verified using gcloud config list.
I have attached the main and variables tf files below.
resource "google_project" "my_project" {
name = "${var.project_name}"
project_id = "${var.project_id}"
}
provider "google" {
project = "${var.project_id}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_project_services" "project" {
project = "${var.project_id}"
services = ["composer.googleapis.com", "iam.googleapis.com", "cloudresourcemanager.googleapis.com"]
}
resource "google_pubsub_topic" "topic" {
name = "${var.topic}"
project = "${var.project_id}"
}
resource "google_pubsub_subscription" "push_subscriptions" {
count = "${length(var.push_subscriptions)}"
name = "${lookup(var.push_subscriptions[count.index], "name")}"
topic = "${google_pubsub_topic.topic.name}"
project = "${var.project_id}"
depends_on = ["google_pubsub_topic.topic"]
}
resource "google_pubsub_subscription" "pull_subscriptions" {
count = "${length(var.pull_subscriptions)}"
name = "${lookup(var.pull_subscriptions[count.index], "name")}"
topic = "${google_pubsub_topic.topic.name}"
project = "${var.project_id}"
depends_on = ["google_pubsub_topic.topic"]
}
Variables.tf file
variable "project_id" {
default = "media-intelligence-244309"
}
variable "region" {
default = "asia-south1"
}
variable "zone" {
default = "asia-south1-b"
}
variable "project_name" {
default = "media-intelligence"
}
variable "topic" {
default = "topic-1"
}
variable "push_subscriptions" {
type = "list"
default = []
}
variable "pull_subscriptions" {
type = "list"
default = []
}
Error: Error applying plan:
3 error(s) occurred:
google_project_services.project: 1 error(s) occurred:
google_project_services.project: Error authoritatively enabling Project media-244309 Services: Get https://cloudresourcemanager.googleapis.com/v1/projects/media-244309?alt=json&prettyPrint=false: oauth2: cannot fetch token: 400 Bad Request
Response: {
"error": "invalid_grant",
"error_description": "Robot is disabled."
}
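One thing worth checking, though I'm not certain it's the cause here: gcloud auth login only sets credentials for the gcloud CLI, while Terraform's Google provider authenticates via Application Default Credentials or an explicit key file. So the disabled "robot" in the error may be a service account Terraform is picking up, not the account verified with gcloud config list. A sketch of pinning the provider to a known-good key file (the path is illustrative):

provider "google" {
  # hypothetical key file for an *enabled* service account, used instead of ADC
  credentials = "${file("${path.module}/service-account.json")}"
  project     = "${var.project_id}"
  region      = "${var.region}"
  zone        = "${var.zone}"
}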