I am creating a couple of resources using Terraform: S3, CodeDeploy and ECS. I am creating my S3 bucket and uploading an appspec.yml file to it.
This is what my appspec.yml looks like:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "Hardcoded-ARN"
        LoadBalancerInfo:
          ContainerName: "new-nginx-app"
          ContainerPort: 80
And this is my ECS module:
resource "aws_ecs_cluster" "foo" {
  name = "white-hart"
}

resource "aws_ecs_task_definition" "test" {
  family                   = "white-hart"
  container_definitions    = file("${path.module}/definition.json")
  requires_compatibilities = toset(["FARGATE"])
  memory                   = 1024
  cpu                      = 256
  network_mode             = "awsvpc"
  execution_role_arn       = aws_iam_role.white-hart-role.arn

  runtime_platform {
    operating_system_family = "LINUX"
  }
}
Basically, what I am trying to do is pass the aws_ecs_task_definition.arn to my appspec.yml file so I do not have to hardcode it. Is there a way to achieve this without the use of build tools?
There is a way, by using the built-in templatefile [1] function. In order to achieve that, you can do a couple of things, but when used with an existing S3 bucket, you should do the following:
resource "aws_s3_object" "appspec_object" {
  bucket = <your s3 bucket name>
  key    = "appspec.yaml"
  acl    = "private"

  content = templatefile("${path.module}/appspec.yaml.tpl", {
    task_definition_arn = aws_ecs_task_definition.test.arn
  })

  tags = {
    UseWithCodeDeploy = true
  }
}
Next, you should convert your current appspec.yml file to a template file (called appspec.yaml.tpl):
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "${task_definition_arn}"
        LoadBalancerInfo:
          ContainerName: "new-nginx-app"
          ContainerPort: 80
Moreover, you could replace all the hardcoded values in the template with variables and reuse it, e.g.:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "${task_definition_arn}"
        LoadBalancerInfo:
          ContainerName: "${container_name}"
          ContainerPort: ${container_port}
In that case, the S3 object resource would be:
resource "aws_s3_object" "appspec_object" {
  bucket = <your s3 bucket name>
  key    = "appspec.yaml"
  acl    = "private"

  content = templatefile("${path.module}/appspec.yaml.tpl", {
    task_definition_arn = aws_ecs_task_definition.test.arn
    container_name      = "new-nginx-app"
    container_port      = 80
  })

  tags = {
    UseWithCodeDeploy = true
  }
}
The placeholder values in the template file will be replaced with values provided when calling the templatefile function.
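For illustration, with the sample values above, the rendered appspec.yaml object stored in S3 would contain something like the following (the ARN shown is a hypothetical rendered value, not one from the question):

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        # hypothetical ARN, for illustration only
        TaskDefinition: "arn:aws:ecs:eu-west-1:123456789012:task-definition/white-hart:1"
        LoadBalancerInfo:
          ContainerName: "new-nginx-app"
          ContainerPort: 80
```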
[1] https://www.terraform.io/language/functions/templatefile
I'm trying to set up monitoring for my ECS services. The idea is to add public.ecr.aws/aws-observability/aws-otel-collector:latest as a second container in each ECS task, and configure it so that it scrapes the Prometheus endpoint of the application and then writes to the Amazon Managed Service for Prometheus. I want to add labels to all the metrics to see which ECS service and task the metrics are from. Ideally, to reuse existing Grafana dashboards, I want the labels to be named job and instance for the service 'family' name and the task ID respectively.
I'm using Terraform for configuration. The task definition looks like:
resource "aws_ecs_task_definition" "task" {
  family                   = var.name
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = var.task_cpu
  memory                   = var.task_memory
  execution_role_arn       = aws_iam_role.task_execution.arn
  task_role_arn            = aws_iam_role.task_role.arn

  runtime_platform {
    cpu_architecture = "ARM64"
  }

  container_definitions = jsonencode([
    {
      name        = "app"
      image       = "quay.io/prometheus/node-exporter:latest"
      cpu         = var.task_cpu - 256
      memory      = var.task_memory - 512
      essential   = true
      mountPoints = []
      volumesFrom = []
      portMappings = [{
        protocol      = "tcp"
        containerPort = 8080
        hostPort      = 8080
      }]
      command = ["--web.listen-address=:8080"]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = aws_cloudwatch_log_group.task.name
          awslogs-region        = data.aws_region.current.name
          awslogs-stream-prefix = "ecs"
        }
      }
    },
    {
      name   = "otel-collector"
      image  = "public.ecr.aws/aws-observability/aws-otel-collector:latest"
      cpu    = 256
      memory = 512
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          awslogs-group         = aws_cloudwatch_log_group.otel.name
          awslogs-region        = data.aws_region.current.name
          awslogs-stream-prefix = "ecs"
        }
      }
      environment = [
        {
          name  = "AOT_CONFIG_CONTENT"
          value = local.adot_config
        }
      ]
    }
  ])
}
And the OpenTelemetry collector config I'm using looks like:
extensions:
  sigv4auth:
    service: "aps"
    region: ${yamlencode(region)}

receivers:
  prometheus:
    config:
      global:
        scrape_interval: 15s
        scrape_timeout: 10s
      scrape_configs:
        - job_name: "app"
          static_configs:
            - targets: [0.0.0.0:8080]

processors:
  resourcedetection/ecs:
    detectors: [env, ecs]
    timeout: 2s
    override: false
  metricstransform:
    transforms:
      - include: ".*"
        match_type: regexp
        action: update
        operations:
          - action: update_label
            label: aws.ecs.task.arn
            new_label: instance_foo
          - action: add_label
            new_label: foobar
            new_value: some value

exporters:
  prometheusremotewrite:
    endpoint: ${yamlencode("${endpoint}api/v1/remote_write")}
    auth:
      authenticator: sigv4auth
    resource_to_telemetry_conversion:
      enabled: true

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection/ecs, metricstransform]
      exporters: [prometheusremotewrite]
However, the foobar label is added to all metrics, but the instance_foo label is not added with the aws.ecs.task.arn value. In Grafana the labels from resourcedetection are visible, but not the instance_foo label.
I did try to debug the OpenTelemetry application locally, and noticed the resourcedetection attributes are not yet available in metricstransform.
So is it possible to rename labels provided by resourcedetection using metricstransform, or are there other ways to set this up?
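One possible direction, offered as an assumption rather than a verified fix for this exact setup: resourcedetection adds resource attributes (not metric labels), and resource_to_telemetry_conversion only copies them to labels at export time, after the processor pipeline has run. A resource processor can copy the attribute under the desired name before that conversion happens. A sketch:

```yaml
processors:
  resourcedetection/ecs:
    detectors: [env, ecs]
  # hypothetical sketch: copy the detected task ARN into an "instance"
  # resource attribute, so resource_to_telemetry_conversion later emits
  # it as an "instance" metric label
  resource:
    attributes:
      - key: instance
        from_attribute: aws.ecs.task.arn
        action: insert
```

The resource processor would then need to be listed after resourcedetection/ecs in the metrics pipeline's processors list.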
I'm getting an error returned when trying to create an AWS Systems Manager automation document via the aws_ssm_document Terraform resource.
Error: creating SSM document: InvalidDocumentContent: YAML not well-formed. at Line: 1, Column: 1
As a sanity test, I was able to create the YAML automation document manually using the same document, and also by importing it inline (which is less than ideal due to the size).
A sample of the Terraform resource and the YAML document is below.
resource "aws_ssm_document" "rhel_updates" {
  name            = "TEST-DW"
  document_format = "YAML"
  content         = "YAML"
  document_type   = "Automation"

  attachments_source {
    key    = "SourceUrl"
    values = ["s3://rhel/templates/101/runbooks/test.yaml"]
    name   = "test.yaml"
  }
}
schemaVersion: '0.3'
description: |-
  cloud.support#test.co.uk
parameters:
  S3ArtifactStore:
    type: String
    default: rhel01
    description: S3 Artifact Store.
  ApiInfrastructureStackName:
    type: String
    description: API InfrastructureStackName.
    default: rhel-api
mainSteps:
  - name: getApiInfrastructureStackOutputs
    action: 'aws:executeAwsApi'
    outputs:
      - Selector: '$.Stacks[0].Outputs'
        Name: Outputs
        Type: MapList
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ApiInfrastructureStackName}}'
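For what it's worth, if content in the resource above is really set to the literal string "YAML" rather than the document body, that alone would produce a "YAML not well-formed. at Line: 1, Column: 1" error, since the API would try to parse the four characters YAML as a document. A hedged sketch of loading the body from a file instead (the test.yaml path is an assumption):

```hcl
resource "aws_ssm_document" "rhel_updates" {
  name            = "TEST-DW"
  document_format = "YAML"
  # load the actual automation document body instead of a placeholder string;
  # the file name here is assumed, not taken from the question
  content         = file("${path.module}/test.yaml")
  document_type   = "Automation"
}
```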
Buildspec.yaml
version: 0.2
files:
  - source: /
    destination: /folder-test
phases:
  install:
    commands:
      - apt-get update
      - apt install jq
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1 --no-include-email | sed 's|https://||')
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
  build:
    commands:
      - echo Pulling docker image
      - docker pull 309005414223.dkr.ecr.eu-west-1.amazonaws.com/my-task-webserver-repository:latest
      - echo Running the Docker image...
      - docker run -d=true 309005414223.dkr.ecr.eu-west-1.amazonaws.com/my-task-webserver-repository:latest
  post_build:
    commands:
      - aws ecs describe-task-definition --task-definition my-task-task-definition | jq '.taskDefinition' > taskdef.json
artifacts:
  files:
    - appspec.yaml
    - taskdef.json
Appspec.yml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:XXX/YYY"
        LoadBalancerInfo:
          ContainerName: "My-name"
          ContainerPort: "8080"
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets: ["subnet-1","subnet-2","subnet-3"]
            SecurityGroups: ["sg-1","sg-2","sg-3"]
            AssignPublicIp: "DISABLED"
Terraform resource (codepipeline)
resource "aws_codepipeline" "codepipeline" {
  name     = "${var.namespace}-stage"
  role_arn = aws_iam_role.role.arn

  artifact_store {
    location = aws_s3_bucket.bucket.bucket
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["my-source"]

      configuration = {
        OAuthToken = "UUUU"
        Owner      = var.owner
        Repo       = var.repo
        Branch     = var.branch
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["my-source"]
      output_artifacts = ["my-build"]

      configuration = {
        ProjectName = my-project
      }
    }
  }

  stage {
    name = "Deploy"

    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "CodeDeployToECS"
      input_artifacts = ["my-build"]
      version         = "1"

      configuration = {
        ApplicationName                = app_name
        DeploymentGroupName            = group_name
        TaskDefinitionTemplateArtifact = "my-build"
        AppSpecTemplateArtifact        = "my-build"
      }
    }
  }
}
Codebuild
resource "aws_codebuild_project" "codebuild" {
  name          = my-project
  description   = "Builds for my-project"
  build_timeout = "15"
  service_role  = aws_iam_role.role.arn

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/standard:2.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true
  }

  cache {
    type  = "LOCAL"
    modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
  }

  source {
    type = "CODEPIPELINE"
  }

  vpc_config {
    security_group_ids = var.sg_ids
    subnets            = ["subnet-1","subnet-2","subnet-3"]
    vpc_id             = "vpc-1"
  }
}
Everything works well in CodePipeline: the task is created, traffic is redirected, and no log shows any issue. But when I connect to the server through SSH, the folder folder-test exists with no content except child folders. The files are not there.
I tried removing the folder in the console and redeploying with a new push, with the same result.
According to the AWS specification for buildspec.yml, your file does not conform.
Namely, the buildspec syntax has no section like this one from your file:
files:
  - source: /
    destination: /folder-test
This could explain why the file/folder is not what you expect it to be.
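As a sketch of what a conforming buildspec might look like (my assumption about the intent; CodeBuild itself does not copy files to arbitrary destinations on a target host), the top-level files block is simply dropped, and anything you want to ship is declared under artifacts:

```yaml
version: 0.2
# no top-level "files" section; only keys from the buildspec reference
phases:
  post_build:
    commands:
      - aws ecs describe-task-definition --task-definition my-task-task-definition | jq '.taskDefinition' > taskdef.json
artifacts:
  files:
    - appspec.yaml
    - taskdef.json
```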
I'm fighting with a weird case.
I need to push CloudFormation stacks, dynamically parameterized with Terraform.
My resource looks like this:
resource "aws_cloudformation_stack" "eks-single-az" {
  count = length(var.single_az_node_groups)
  name  = "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"

  template_body = <<EOF
Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
Resources:
  ASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
      VPCZoneIdentifier: ["${var.private_subnet_ids[count.index]}"]
      MinSize: "${lookup(var.single_az_node_groups[count.index], "asg_min", "0")}"
      MaxSize: "${lookup(var.single_az_node_groups[count.index], "asg_max", "10")}"
      HealthCheckType: EC2
      TargetGroupARNs: [] # <- here is the error
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandBaseCapacity: "0"
          OnDemandPercentageAboveBaseCapacity: "${lookup(var.single_az_node_groups[count.index], "on_demand_percentage", "0")}"
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateId: "${aws_launch_template.eks-single-az[count.index].id}"
            Version: "${aws_launch_template.eks-single-az[count.index].latest_version}"
          Overrides:
            - InstanceType: m5.large
      Tags:
        - Key: "Name"
          Value: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
          PropagateAtLaunch: true
        - Key: "kubernetes.io/cluster/${var.cluster_name}"
          Value: "owned"
          PropagateAtLaunch: true
        - Key: "k8s.io/cluster-autoscaler/enabled"
          Value: "true"
          PropagateAtLaunch: true
        - Key: "k8s.io/cluster-autoscaler/${var.cluster_name}"
          Value: "true"
          PropagateAtLaunch: true
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinSuccessfulInstancesPercent: 80
        MinInstancesInService: "${lookup(data.external.desired_capacity.result, "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}", "0")}"
        PauseTime: PT4M
        SuspendProcesses:
          - HealthCheck
          - ReplaceUnhealthy
          - AZRebalance
          - AlarmNotification
          - ScheduledActions
        WaitOnResourceSignals: true
EOF

  depends_on = [
    aws_launch_template.eks-single-az
  ]
}
I need to put the target group ARNs from a list containing objects:
single_az_node_groups = [
  {
    "name" : "workload-az1",
    "instance_type" : "t2.micro",
    "asg_min" : "1",
    "asg_max" : "7",
    "target_group_arns" : "arnA, arnB, arnC"
  },
  ...
]
I tried everything. The problem is that with every Terraform function I tried, either Terraform kept adding double quotes which CloudFormation does not support, or Terraform wouldn't process the template_body because of missing quotes.
Do you maybe know some sneaky trick to achieve that?
When building strings that represent serialized data structures, it's much easier to use Terraform's built-in serialization functions to construct the result, rather than trying to produce a valid string using string templates.
In this case, we can use jsonencode to construct a JSON string representing the template_body from a Terraform object value, which then allows using all of the Terraform language expression features to build it:
template_body = jsonencode({
  Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
  Resources: {
    ASG: {
      Type: "AWS::AutoScaling::AutoScalingGroup",
      Properties: {
        AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
        VPCZoneIdentifier: [var.private_subnet_ids[count.index]],
        MinSize: lookup(var.single_az_node_groups[count.index], "asg_min", "0"),
        MaxSize: lookup(var.single_az_node_groups[count.index], "asg_max", "10"),
        HealthCheckType: "EC2",
        TargetGroupARNs: flatten([
          for g in local.single_az_node_groups : [
            split(", ", g.target_group_arns)
          ]
        ]),
        # etc, etc
      },
    },
  },
})
As you can see above, by using jsonencode for the entire data structure we can then use Terraform expression operators to build the values. For TargetGroupARNs in the above example I used the flatten function along with a for expression to transform the nested local.single_az_node_groups data structure into a flat list of target group ARN strings.
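As a standalone sketch of just that transformation (the sample values are assumed from the question's variable):

```hcl
locals {
  single_az_node_groups = [
    { name = "workload-az1", target_group_arns = "arnA, arnB, arnC" },
    { name = "workload-az2", target_group_arns = "arnD" },
  ]

  # split each comma-separated string into a list of ARNs, then flatten
  # the resulting list of lists into one flat list of strings
  all_target_group_arns = flatten([
    for g in local.single_az_node_groups : split(", ", g.target_group_arns)
  ])
  # => ["arnA", "arnB", "arnC", "arnD"]
}
```

Because jsonencode serializes this as a real JSON array, no hand-placed quoting is needed anywhere.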
CloudFormation supports both JSON and YAML, and Terraform also has a yamlencode function that you could potentially use instead of jsonencode here. I chose jsonencode both because yamlencode is currently marked as experimental (the exact YAML formatting it produces may change in a later release) and because Terraform has special support for JSON formatting in the plan output where it can show a structural diff of the data structure inside, rather than a string-based diff.
Terraform provides a WAF Web ACL resource. Can it be attached to anything using Terraform, such as an ALB, or is it useless?
With the release of the 1.12 AWS provider, it is now possible to directly create regional WAF resources for use with load balancers.
You can now create any of an aws_wafregional_byte_match_set, aws_wafregional_ipset, aws_wafregional_size_constraint_set, aws_wafregional_sql_injection_match_set or aws_wafregional_xss_match_set, link these to an aws_wafregional_rule as predicates, and then in turn add the WAF rules to an aws_wafregional_web_acl. Finally, you can attach the regional WAF to a load balancer with the aws_wafregional_web_acl_association resource.
The Regional WAF Web ACL association resource docs give a helpful example of how they all link together:
resource "aws_wafregional_ipset" "ipset" {
  name = "tfIPSet"

  ip_set_descriptor {
    type  = "IPV4"
    value = "192.0.7.0/24"
  }
}

resource "aws_wafregional_rule" "foo" {
  name        = "tfWAFRule"
  metric_name = "tfWAFRule"

  predicate {
    data_id = "${aws_wafregional_ipset.ipset.id}"
    negated = false
    type    = "IPMatch"
  }
}

resource "aws_wafregional_web_acl" "foo" {
  name        = "foo"
  metric_name = "foo"

  default_action {
    type = "ALLOW"
  }

  rule {
    action {
      type = "BLOCK"
    }

    priority = 1
    rule_id  = "${aws_wafregional_rule.foo.id}"
  }
}

resource "aws_vpc" "foo" {
  cidr_block = "10.1.0.0/16"
}

data "aws_availability_zones" "available" {}

resource "aws_subnet" "foo" {
  vpc_id            = "${aws_vpc.foo.id}"
  cidr_block        = "10.1.1.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[0]}"
}

resource "aws_subnet" "bar" {
  vpc_id            = "${aws_vpc.foo.id}"
  cidr_block        = "10.1.2.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[1]}"
}

resource "aws_alb" "foo" {
  internal = true
  subnets  = ["${aws_subnet.foo.id}", "${aws_subnet.bar.id}"]
}

resource "aws_wafregional_web_acl_association" "foo" {
  resource_arn = "${aws_alb.foo.arn}"
  web_acl_id   = "${aws_wafregional_web_acl.foo.id}"
}
Original post:
The regional WAF resources have been caught up in a mixture of review and abandoned pull requests, but are scheduled for the AWS provider 1.12.0 release.
Currently only the byte match set and IP address set resources are available, so they're not much use without the rule, ACL and association resources to actually do things with.
Until then, you could use CloudFormation via Terraform's own escape hatch, the aws_cloudformation_stack resource, with something like this:
resource "aws_lb" "load_balancer" {
  ...
}

resource "aws_cloudformation_stack" "waf" {
  name = "waf-example"

  parameters {
    ALBArn = "${aws_lb.load_balancer.arn}"
  }

  template_body = <<STACK
Parameters:
  ALBArn:
    Type: String
Resources:
  WAF:
    Type: AWS::WAFRegional::WebACL
    Properties:
      Name: WAF-Example
      DefaultAction:
        Type: BLOCK
      MetricName: WafExample
      Rules:
        - Action:
            Type: ALLOW
          Priority: 2
          RuleId:
            Ref: WhitelistRule
  WhitelistRule:
    Type: AWS::WAFRegional::Rule
    Properties:
      Name: WAF-Example-Whitelist
      MetricName: WafExampleWhiteList
      Predicates:
        - DataId:
            Ref: ExternalAPIURI
          Negated: false
          Type: ByteMatch
  ExternalAPIURI:
    Type: AWS::WAFRegional::ByteMatchSet
    Properties:
      Name: WAF-Example-StringMatch
      ByteMatchTuples:
        - FieldToMatch:
            Type: URI
          PositionalConstraint: STARTS_WITH
          TargetString: /public/
          TextTransformation: NONE
  WAFALBattachment:
    Type: AWS::WAFRegional::WebACLAssociation
    Properties:
      ResourceArn:
        Ref: ALBArn
      WebACLId:
        Ref: WAF
STACK
}