Terraform CodePipeline Deploy in a Different Region - amazon-web-services

I'm trying to deploy my service in a region that has only just become available (Jakarta). It looks like CodePipeline is not available there, so I have to create the pipeline in the nearest region (Singapore) and have it deploy to the Jakarta region. This is also my first time setting up CodePipeline in Terraform, so I'm not sure whether I'm doing it right.
P.S. The default region for all of this infrastructure is the Jakarta region. I will exclude the deploy stage, since the issue shows up without it.
resource "aws_codepipeline" "pipeline" {
name = local.service_name
role_arn = var.codepipeline_role_arn
artifact_store {
type = "S3"
region = var.codepipeline_region
location = var.codepipeline_artifact_bucket_name
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeStarSourceConnection"
version = "1"
output_artifacts = ["SourceArtifact"]
region = var.codepipeline_region
configuration = {
ConnectionArn = var.codestar_connection
FullRepositoryId = "${var.team_name}/${local.repo_name}"
BranchName = local.repo_branch
OutputArtifactFormat = "CODEBUILD_CLONE_REF" // NOTE: Full clone
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["SourceArtifact"]
output_artifacts = ["BuildArtifact"]
run_order = 1
region = var.codepipeline_region
configuration = {
"ProjectName" = local.service_name
}
}
}
tags = {
Name = "${local.service_name}-pipeline"
Environment = local.env
}
}
Above is the Terraform configuration that I created, but it gives me an error like this:
│ Error: region cannot be set for a single-region CodePipeline
If I remove the region from the root block, Terraform tries to use the default region, which is Jakarta (and that fails, since CodePipeline is not available in Jakarta):
│ Error: Error creating CodePipeline: RequestError: send request failed
│ caused by: Post "https://codepipeline.ap-southeast-3.amazonaws.com/": dial tcp: lookup codepipeline.ap-southeast-3.amazonaws.com on 103.86.96.100:53: no such host

You need to set up a provider alias with a different region. For example:
provider "aws" {
alias = "singapore"
region = "ap-southeast-1"
}
Then you deploy your pipeline to that region using the alias:
resource "aws_codepipeline" "pipeline" {
provider = aws.singapore
name = local.service_name
role_arn = var.codepipeline_role_arn
# ...
}
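With the resource created through the aliased provider, the region arguments inside the resource can be dropped for a single-region pipeline. If you still want individual actions to run in Jakarta (a cross-region pipeline), my understanding is that CodePipeline then needs one artifact_store block per region its actions use; here is a minimal sketch, assuming an artifact bucket exists in each region (the bucket variables are hypothetical):
resource "aws_codepipeline" "pipeline" {
  provider = aws.singapore
  name     = local.service_name
  role_arn = var.codepipeline_role_arn

  # One artifact store per region that the pipeline's actions run in.
  artifact_store {
    type     = "S3"
    region   = "ap-southeast-1"
    location = var.artifact_bucket_singapore # hypothetical variable
  }

  artifact_store {
    type     = "S3"
    region   = "ap-southeast-3"
    location = var.artifact_bucket_jakarta # hypothetical variable
  }

  # ... stages as before, with `region = "ap-southeast-3"` set on the
  # actions that should run in Jakarta.
}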

Related

GitHub Actions is unable to create resources in AWS mentioned in a Terraform module

terraform plan shows the correct result when run locally, but it does not create the resources declared in the module when run on GitHub Actions. The other resources in the root main.tf (S3) are created fine.
Root project:
terraform {
  backend "s3" {
    bucket = "sd-tfstorage"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}

locals {
  env_name         = "sandbox"
  aws_region       = "us-east-1"
  k8s_cluster_name = "ms-cluster"
}

# Network Configuration
module "aws-network" {
  source = "github.com/<name>/module-aws-network"

  env_name     = local.env_name
  vpc_name     = "msur-VPC"
  cluster_name = local.k8s_cluster_name
  aws_region   = local.aws_region

  main_vpc_cidr         = "10.10.0.0/16"
  public_subnet_a_cidr  = "10.10.0.0/18"
  public_subnet_b_cidr  = "10.10.64.0/18"
  private_subnet_a_cidr = "10.10.128.0/18"
  private_subnet_b_cidr = "10.10.192.0/18"
}

# EKS Configuration
# GitOps Configuration
Module:
provider "aws" {
region = var.aws_region
}
locals {
vpc_name = "${var.env_name} ${var.vpc_name}"
cluster_name = "${var.cluster_name}-${var.env_name}"
}
## AWS VPC definition
resource "aws_vpc" "main" {
cidr_block = var.main_vpc_cidr
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = local.vpc_name,
"kubernetes.io/cluster/${local.cluster_name}" = "shared",
}
}
When you run it locally, you are using your default AWS profile to plan it.
Have you set up your GitHub environment with the correct AWS credentials to perform those actions?
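One common approach, sketched below under the assumption that the credentials are stored as GitHub Actions secrets, is to let the AWS provider read them from the standard environment variables instead of a local profile:
# A minimal sketch: the AWS provider picks up credentials from the standard
# environment variables, so nothing is hard-coded in the configuration.
# In the GitHub Actions workflow, export the (hypothetical) secrets as:
#   AWS_ACCESS_KEY_ID     = secrets.AWS_ACCESS_KEY_ID
#   AWS_SECRET_ACCESS_KEY = secrets.AWS_SECRET_ACCESS_KEY
provider "aws" {
  region = var.aws_region
}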

google beta permissions not found terraform

I'm trying to create a reserved subnet for a regional load balancer. It is the first time I'm using the google-beta provider, and when I try to create the subnet using the following script...:
resource "google_compute_subnetwork" "proxy-subnet" {
provider = google-beta
project = "proyecto-pegachucho"
name = "website-net-proxy"
ip_cidr_range = "10.10.50.0/24"
region = "us-central1"
network = google_compute_network.HSBC_project_network.self_link
purpose = "INTERNAL_HTTPS_LOAD_BALANCER"
role = "ACTIVE"
}
... this error appears:
Error: Error creating Subnetwork: googleapi: Error 403: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
More details:
Reason: forbidden, Message: Required 'compute.subnetworks.create' permission for 'projects/proyecto-pegachucho/regions/us-central1/subnetworks/website-net-proxy'
Reason: forbidden, Message: Required 'compute.networks.updatePolicy' permission for 'projects/proyecto-pegachucho/global/networks/hsbc-vpc-project'
on .terraform\modules\networking\networking.tf line 18, in resource "google_compute_subnetwork" "proxy-subnet":
18: resource "google_compute_subnetwork" "proxy-subnet" {
It doesn't make any sense, because my service account has the Owner role and those permissions are enabled. What can I do?
EDIT: I resolved it by adding the provider directly in the modules, like this:
provider "google-beta" {
project = var.project
region = var.region
credentials = "./mario.json"
}
resource "google_compute_health_check" "lb-health-check-global" {
name = var.healthckeck_name
check_interval_sec = var.check_interval_sec
timeout_sec = var.timeout_sec
healthy_threshold = var.healthy_threshold
unhealthy_threshold = var.unhealthy_threshold # 50 seconds
tcp_health_check {
port = var.healthckeck_port
}
}
resource "google_compute_region_health_check" "lb-health-check-regional" {
provider = google-beta
region = var.region
project = var.project
name = "healthcheck-regional"
check_interval_sec = var.check_interval_sec
timeout_sec = var.timeout_sec
healthy_threshold = var.healthy_threshold
unhealthy_threshold = var.unhealthy_threshold # 50 seconds
tcp_health_check {
port = var.healthckeck_port
}
}
I resolved this by putting the provider block inside the Terraform module instead of the main module (you can also configure two providers):
provider "google-beta" {
project = var.project
region = var.region
credentials = var.credentials
}
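As an aside, an alternative to declaring the provider inside the module is to define it once in the root configuration and hand it to the module through the providers argument on the module block; a rough sketch, where the module name and source are placeholders:
# Root configuration: the google-beta provider is configured once here.
provider "google-beta" {
  project     = var.project
  region      = var.region
  credentials = var.credentials
}

# Hypothetical module block: pass the configured provider down to the module.
module "networking" {
  source = "./modules/networking"

  providers = {
    google-beta = google-beta
  }
}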

Terraform - Multiple accounts with multiple environments (regions)

I am developing the infrastructure (IaC) I want to have in AWS with Terraform. To test, I am using an EC2 instance.
This code has to be deployable across multiple accounts and multiple regions (environments) per developer. This is an example:
account-999:
  developer1: us-east-2
  developer2: us-west-1
  developerN: us-east-1
account-666:
  Staging:    us-east-1
  Production: eu-west-2
I've created two .tfvars files, account-999.env.tfvars and account-666.env.tfvars, with the following content:
profile="account-999" and profile="account-666" respectively.
This is my main.tf, which contains the AWS provider and the EC2 instance:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
profile = var.profile
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
And the variable.tf file:
variable "profile" {
type=string
}
variable "region" {
description = "Region by developer"
type = map
default = {
developer1 = "us-west-2"
developer2 = "us-east-2"
developerN = "ap-southeast-1"
}
}
But I'm not sure if I'm managing it well. For example, the region variable only contains the values for account-999. How can I solve that?
On the other hand, with this structure, would it be possible to implement modules?
You could use a provider alias to accomplish this. More info about provider aliases can be found in the Terraform documentation.
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
resource "aws_instance" "foo" {
provider = aws.west
# ...
}
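Provider aliases also relate to your question about modules: the root configuration can pass an aliased provider into a module through the providers argument on the module block. A rough sketch, where the module name and source are hypothetical:
# Hypothetical module containing the aws_instance and related resources.
module "developer_env" {
  source = "./modules/developer-env"

  # Hand the aliased (us-west-2) provider to the module as its default "aws".
  providers = {
    aws = aws.west
  }
}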
Another way to look at this is by using Terraform workspaces. Here is an example:
terraform workspace new account-999
terraform workspace new account-666
Then this is an example of your aws credentials file:
[account-999]
aws_access_key_id=xxx
aws_secret_access_key=xxx
[account-666]
aws_access_key_id=xxx
aws_secret_access_key=xxx
A reference to that account can be used within the provider block:
provider "aws" {
region = "us-east-1"
profile = "${terraform.workspace}"
}
You could even combine both methods!
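For instance, here is a rough sketch of combining the two, assuming the workspace names match the keys of a per-workspace region map (the workspace_regions variable is hypothetical):
variable "workspace_regions" {
  description = "Region to use for each workspace (hypothetical mapping)"
  type        = map(string)

  default = {
    "account-999" = "us-east-2"
    "account-666" = "us-east-1"
  }
}

provider "aws" {
  # Both the profile and the region are derived from the selected workspace.
  profile = terraform.workspace
  region  = lookup(var.workspace_regions, terraform.workspace, "us-east-1")
}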

Terraform scripted AWS CodePipeline fails on Deploy stage with InternalError

I'm attempting to use AWS CodePipeline to deploy an app to an EC2 instance using the CodeDeploy agent, but it's failing with a frustratingly vague "InternalError". I can't find any other meaningful error.
I'm using terraform to define the CodePipeline. This is the "Deploy" section:
stage {
  name = "Deploy"

  action {
    name            = "Deploy"
    category        = "Deploy"
    owner           = "AWS"
    provider        = "CodeDeploy"
    input_artifacts = ["buildOut"]
    run_order       = 1
    version         = "1"

    configuration = {
      ApplicationName     = aws_codedeploy_app.my-codedeploy-app.id
      DeploymentGroupName = aws_codedeploy_deployment_group.my-codedeploy-group.id
    }
  }
}
What am I doing wrong?
There are two small problems with your deployment definition.
ApplicationName should reference app.name, not app.id
DeploymentGroupName should reference deployment_group_name, not group.id
Try this:
stage {
  name = "Deploy"

  action {
    name            = "Deploy"
    category        = "Deploy"
    owner           = "AWS"
    provider        = "CodeDeploy"
    input_artifacts = ["buildOut"]
    run_order       = 1
    version         = "1"

    configuration = {
      ApplicationName     = aws_codedeploy_app.my-codedeploy-app.name // This should be name, not id
      DeploymentGroupName = aws_codedeploy_deployment_group.my-codedeploy-group.deployment_group_name // This should be deployment_group_name, not id
    }
  }
}

Terraform multiple region aws_ses_domain_identity

I want to create an aws_ses_domain_identity resource in multiple regions, but as far as I can see this is only possible by changing the region of the AWS provider.
I've attempted to use for_each with no luck. I then want to create an aws_route53_record from the verification tokens; I suspect this also won't work.
Ultimately, I'm aiming to create an SES domain identity and the corresponding Route 53 verification records for the regions specified in a variable (ses_regions).
Code:
provider "aws" {
alias = "eu-central-1"
region = "eu-central-1"
}
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
variable "ses_regions" {
description = "The aws region in which to operate"
default = {
region1 = "us-west-2"
region2 = "eu-central-1"
}
}
resource "aws_ses_domain_identity" "example" {
for_each = var.ses_regions
provider = each.value
domain = var.ses_domain
}
resource "aws_route53_record" "example_amazonses_verification_record" {
for_each = aws_ses_domain_identity.example.verification_token
zone_id = var.zone_id
name = "_amazonses.${var.ses_domain}"
type = "TXT"
ttl = "600"
records = each.value
}
Error:
Error: Invalid provider configuration reference
on .terraform/modules/ses/main.tf line 8, in resource "aws_ses_domain_identity" "example":
8: provider = aws.each.value
The provider argument requires a provider type name, optionally followed by a
period and then a configuration alias.
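As the error says, the provider meta-argument must be a static reference to a provider configuration (the provider type name, optionally followed by an alias), so it cannot be built from for_each values. As an illustration rather than a drop-in answer, one workaround is a separate resource per region with an explicit alias, plus a single TXT record carrying both verification tokens:
# One domain identity per region, each with a static provider reference.
resource "aws_ses_domain_identity" "us_west_2" {
  provider = aws.us-west-2
  domain   = var.ses_domain
}

resource "aws_ses_domain_identity" "eu_central_1" {
  provider = aws.eu-central-1
  domain   = var.ses_domain
}

# Single _amazonses TXT record holding the verification token of each region.
resource "aws_route53_record" "example_amazonses_verification_record" {
  zone_id = var.zone_id
  name    = "_amazonses.${var.ses_domain}"
  type    = "TXT"
  ttl     = "600"

  records = [
    aws_ses_domain_identity.us_west_2.verification_token,
    aws_ses_domain_identity.eu_central_1.verification_token,
  ]
}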