Completely private certificate with AWS in Terraform? - amazon-web-services

I would like to create a certificate signed by AWS for use by internal services. The internal services are only visible inside my VPC. I don't want anything about the internal services, such as the subdomain, to leak externally.
This is the bit of Terraform I am unsure about:
resource "aws_acm_certificate" "internal" {
domain_name = "*.internal.example.org."
# What goes here?
}
The internal service consumes the certificate like this:
resource "aws_elastic_beanstalk_environment" "foo" {
name = "foo-env"
# ...
setting {
namespace = "aws:ec2:vpc"
name = "ELBScheme"
value = "internal"
resource = ""
}
setting {
namespace = "aws:elbv2:listener:443"
name = "SSLCertificateArns"
value = aws_acm_certificate.internal.arn
resource = ""
}
}
I then assign an internal DNS entry like this:
resource "aws_route53_zone" "private" {
name = "example.org."
vpc {
vpc_id = aws_vpc.main.id
}
}
resource "aws_route53_record" "private_cname_foo" {
zone_id = aws_route53_zone.private.zone_id
name = "foo.internal.example.org."
type = "CNAME"
ttl = "300"
records = [
aws_elastic_beanstalk_environment.foo.cname
]
}
How do I get AWS to create a certificate for me?
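One way to get a certificate without publishing anything publicly is to issue it from an ACM Private CA and point aws_acm_certificate at that CA instead of using public validation. A minimal sketch, assuming a Private CA is acceptable (it is a paid feature, and internal clients must be configured to trust the private root):
resource "aws_acmpca_certificate_authority" "internal" {
  type = "ROOT"

  certificate_authority_configuration {
    key_algorithm     = "RSA_2048"
    signing_algorithm = "SHA256WITHRSA"

    subject {
      common_name = "internal.example.org"
    }
  }
}

resource "aws_acm_certificate" "internal" {
  # Issued from the private CA, so no public DNS or email validation records
  # are created and the subdomain is not exposed externally.
  domain_name               = "*.internal.example.org"
  certificate_authority_arn = aws_acmpca_certificate_authority.internal.arn
}
A newly created root CA also has to have its own CA certificate issued and installed (aws_acmpca_certificate plus aws_acmpca_certificate_authority_certificate) before it can sign anything.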

Related

How Do I Use A Terraform Data Source To Reference A Managed Prefix List?

I'm trying to update a Terraform module to add a new security group, which will have an inbound rule populated from two managed prefix lists. The prefix lists are shared to my AWS account from a different account using AWS Resource Access Manager; however, I have also tried referencing prefix lists created within my own AWS account and see the same error.
Below is the Terraform I am using:
resource "aws_security_group" "akamai_sg" {
name = "akamai-pl-sg"
description = "Manage access from Akamai to ${var.environment} alb"
vpc_id = var.vpc_id
tags = merge(var.common_tags, tomap({ "Name" = "akamai-pl-sg" }))
revoke_rules_on_delete = true
}
resource "aws_security_group_rule" "akamai_to_internal_alb" {
for_each = toset(var.domains_inc_akamai)
type = "ingress"
description = "Allow Akamai into ${var.environment}${var.domain_name_suffix}-alb"
from_port = var.alb_listener_port
to_port = var.alb_listener_port
protocol = "tcp"
security_group_id = aws_security_group.akamai_sg.id
prefix_list_ids = [data.aws_prefix_list.akamai-site-shield.id, data.aws_prefix_list.akamai-staging.id]
}
data "aws_prefix_list" "akamai-site-shield" {
filter {
name = "prefix-list-id"
values = ["pl-xxxxxxxxxx"]
}
}
data "aws_prefix_list" "akamai-staging" {
filter {
name = "prefix-list-id"
values = ["pl-xxxxxxxxxx"]
}
}
The Terraform error I am receiving reads:
"Error: no matching prefix list found; the prefix list ID or name may be invalid or not exist in the current region"
Is anyone able to help, or see where I am going wrong?
Thanks in advance.
Would the following not be possible?
data "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.foo.id
service_name = "com.amazonaws.us-west-2.s3"
}
data "aws_prefix_list" "s3" {
prefix_list_id = aws_vpc_endpoint.s3.prefix_list_id
}
It seems the solution is to use:
data "aws_ec2_managed_prefix_list" "example" {
filter {
name = "prefix-list-name"
values = ["my-prefix-list"]
}
}
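The security group rule from the question would then reference the managed prefix list data source instead of aws_prefix_list. A sketch, reusing the names from above (the pl-xxxxxxxxxx ID is a placeholder):
data "aws_ec2_managed_prefix_list" "akamai_site_shield" {
  filter {
    name   = "prefix-list-id"
    values = ["pl-xxxxxxxxxx"]
  }
}

resource "aws_security_group_rule" "akamai_to_internal_alb" {
  type              = "ingress"
  from_port         = var.alb_listener_port
  to_port           = var.alb_listener_port
  protocol          = "tcp"
  security_group_id = aws_security_group.akamai_sg.id
  # Reference the managed prefix list's ID in the rule.
  prefix_list_ids   = [data.aws_ec2_managed_prefix_list.akamai_site_shield.id]
}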

What are all the things that I need to connect an existing known-good API gateway endpoint to a Route53 subdomain with Terraform?

Here's the code I have so far, hopefully I got everything relevant. The API gateway is deployed and working and has been for a while now. Our app is currently pointing at the xxxyyyzz12.execute-api.us-west-2.amazonaws.com endpoint and working fine. But I need to route it to the subdomain ui-backend.app-name-here-dev.company.services.
data "aws_acm_certificate" "app_name_dev_wildcard_cert" {
domain = "*.app-name-here-dev.company.services"
statuses = ["ISSUED"]
}
// pull in the existing zone (defined by devops) via a data block
data "aws_route53_zone" "myapp_zone" {
name = local.domain
}
resource "aws_route53_record" "ui_backend" {
name = aws_apigatewayv2_domain_name.ui_backend_api_gateway.domain_name
type = "A"
zone_id = data.aws_route53_zone.myapp_zone.zone_id
alias {
name = aws_apigatewayv2_domain_name.ui_backend_api_gateway.domain_name_configuration[0].target_domain_name
zone_id = aws_apigatewayv2_domain_name.ui_backend_api_gateway.domain_name_configuration[0].hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_apigatewayv2_domain_name" "ui_backend_api_gateway" {
domain_name = "${local.subdomain}.${local.domain}"
domain_name_configuration {
certificate_arn = data.aws_acm_certificate.app_name_dev_wildcard_cert.arn
endpoint_type = "REGIONAL"
security_policy = "TLS_1_2"
}
}
locals {
// trimmed
domain = "app-name-here${var.envToZoneName[var.environment]}.company.services"
subdomain = var.deploymentNameModifier == "" ? "ui-backend" : "ui-backend-${var.deploymentNameModifier}"
}
But when I try the same curl command (the one that works against xxxyyyzz12.execute-api.us-west-2.amazonaws.com) against the subdomain, I get a 403. I added an x-apigw-api-id: 153utdsv9h header, but it didn't help. I must be missing a resource.
Well, 16 hrs have gone by with no answers/comments. Here's the thing that was missing:
resource "aws_apigatewayv2_api_mapping" "ui_backend_to_subdomain" {
api_id = aws_apigatewayv2_api.ui_backend_gateway.id
domain_name = aws_apigatewayv2_domain_name.ui_backend_api_gateway.domain_name
stage = aws_apigatewayv2_stage.ui_backend.id
}
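Without an API mapping, the custom domain has no API and stage attached to it, which is why requests against the subdomain returned 403 even though the execute-api URL worked. The mapping refers to a stage that is not shown in the question; a hypothetical sketch of it, with names taken from the references above, could look like:
resource "aws_apigatewayv2_stage" "ui_backend" {
  # Hypothetical stage; the real one is defined elsewhere in the module.
  api_id      = aws_apigatewayv2_api.ui_backend_gateway.id
  name        = "$default"
  auto_deploy = true
}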

Want to create a cloud sql instance with private and public ip on a separate vpc in gcp using terraform

I tried to configure a Cloud SQL instance with both a private and a public IP in a separate VPC using Terraform. I can assign a private IP to the instance from the separate VPC, but I am unable to assign a public IP along with it.
Here is my code.
resource "google_compute_global_address" "private_ip_address" {
provider = google-beta
name = "private-ip-address"
purpose = "VPC_PEERING"
address_type = "INTERNAL"
prefix_length = 16
network = "${var.vpc_self_link}"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider = google-beta
network = "${var.vpc_self_link}"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
# create database instance
resource "google_sql_database_instance" "instance" {
name = "test-${var.project_id}"
region = "us-central1"
database_version = "${var.db_version}"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "${var.db_tier}"
activation_policy = "${var.db_activation_policy}"
disk_autoresize = "${var.db_disk_autoresize}"
disk_size = "${var.db_disk_size}"
disk_type = "${var.db_disk_type}"
pricing_plan = "${var.db_pricing_plan}"
database_flags {
name = "slow_query_log"
value = "on"
}
ip_configuration {
ipv4_enabled = "false"
private_network = "projects/${var.project_id}/global/networks/${var.vpc_name}"
}
}
}
But when I try to pass the parameter below to assign a public IP, it gives an error because of the private_network flag:
ipv4_enabled = "true"
Please let me know how to get both a private and a public IP working from a custom or separate VPC (not the default one).
According to the documentation, you can't:
ipv4_enabled - (Optional) Whether this Cloud SQL instance should be assigned a public IPV4 address. Either ipv4_enabled must be enabled or a private_network must be configured.
Open a feature request.
The question is old, a lot of updates have happened since, and you have probably solved it by now. Nonetheless, I just wanted to confirm that the configuration below works with both a public and a private IP, in both scenarios: creating the resources from scratch, or modifying an existing instance that previously used only a public IP.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.5.0"
    }
  }

  backend "gcs" {
    bucket = "<BUCKET>"
    prefix = "<PREFIX>"
  }
}

provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
}

### VPC
resource "google_compute_network" "private_network" {
  name                    = "private-network"
  auto_create_subnetworks = "false"
}

resource "google_compute_global_address" "private_ip_address" {
  name          = "private-ip-address"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.private_network.id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.private_network.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

### INSTANCE
resource "google_sql_database_instance" "instance" {
  name             = "<INSTANCE>"
  region           = var.region
  database_version = "MYSQL_5_7"
  depends_on       = [google_service_networking_connection.private_vpc_connection]

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = true
      private_network = google_compute_network.private_network.id

      authorized_networks {
        name  = "default"
        value = "0.0.0.0/0"
      }
    }
  }
}

### DATABASE
resource "google_sql_database" "database" {
  name     = "tf-db"
  instance = google_sql_database_instance.instance.name
}

### USER
resource "google_sql_user" "users" {
  name     = var.sql_user
  password = var.sql_pw
  instance = google_sql_database_instance.instance.name
}
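To confirm that the instance really gets both addresses, the instance's exported attributes can be surfaced as outputs; a small sketch:
# Print both addresses after apply to verify dual connectivity.
output "instance_public_ip" {
  value = google_sql_database_instance.instance.public_ip_address
}

output "instance_private_ip" {
  value = google_sql_database_instance.instance.private_ip_address
}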

How to load balance google compute instance using terraform?

In my terraform configuration file, I define my resource like so:
resource "google_compute_instance" "test" {
...
count = 2
}
What I now want is to create a load balancer that will balance traffic between the two instances of my Google Compute instance. Unfortunately, I could not find anything in the documentation related to this task. It seems like google_compute_target_pool or google_compute_lb_ip_ranges have nothing to do with my problem.
You would have to use forwarding rules, as indicated in the Terraform documentation. To use load balancing and protocol forwarding, you must create a forwarding rule that directs traffic to specific target instances. How forwarding rules are used on Cloud Platform is described in the Google documentation.
In common cases you can use something like the following:
resource "google_compute_instance" "test" {
name = "nlb-node${count.index}"
zone = "europe-west3-b"
machine_type = "f1-micro"
count = 2
boot_disk {
auto_delete = true
initialize_params {
image = "ubuntu-os-cloud/ubuntu-1604-lts"
size = 10
type = "pd-ssd"
}
}
network_interface {
subnetwork = "default"
access_config {
nat_ip = ""
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
resource "google_compute_http_health_check" "nlb-hc" {
name = "nlb-health-checks"
request_path = "/"
port = 80
check_interval_sec = 10
timeout_sec = 3
}
resource "google_compute_target_pool" "nlb-target-pool" {
name = "nlb-target-pool"
session_affinity = "NONE"
region = "europe-west3"
instances = [
"${google_compute_instance.test.*.self_link}"
]
health_checks = [
"${google_compute_http_health_check.nlb-hc.name}"
]
}
resource "google_compute_forwarding_rule" "network-load-balancer" {
name = "nlb-test"
region = "europe-west3"
target = "${google_compute_target_pool.nlb-target-pool.self_link}"
port_range = "80"
ip_protocol = "TCP"
load_balancing_scheme = "EXTERNAL"
}
You can get the load balancer's external IP via ${google_compute_forwarding_rule.network-load-balancer.ip_address}:
// output.tf
output "network_load_balancer_ip" {
  value = "${google_compute_forwarding_rule.network-load-balancer.ip_address}"
}

Is there any way to associate an AWS ELB/ALB with a WAF ACL using Terraform?

I created the following AWS WAF ACL and I want to associate it with my ALB using Terraform. Is there any way I can do that?
I want to block all requests except the ones that carry a secret key, using the Amazon Web Services Web Application Firewall (AWS WAF). For that purpose, I created a byte match set, a WAF rule, and a web ACL (access control list).
resource "aws_alb" "app" {
............
}
#waf
resource "aws_waf_byte_match_set" "byte_set" {
name = "tf_waf_byte_match_set"
byte_match_tuples {
text_transformation = "NONE"
target_string = "${var.secret_key}"
positional_constraint = "EXACTLY"
field_to_match {
type = "HEADER"
data = "referer"
}
}
}
resource "aws_waf_rule" "wafrule" {
depends_on = ["aws_waf_byte_match_set.byte_set"]
name = "tfWAFRule"
metric_name = "tfWAFRule"
predicates {
data_id = "${aws_waf_byte_match_set.byte_set.id}"
negated = false
type = "ByteMatch"
}
}
resource "aws_waf_web_acl" "waf_acl" {
depends_on = ["aws_waf_byte_match_set.byte_set", "aws_waf_rule.wafrule"]
name = "tfWebACL"
metric_name = "tfWebACL"
default_action {
type = "BLOCK"
}
rules {
action {
type = "ALLOW"
}
priority = 1
rule_id = "${aws_waf_rule.wafrule.id}"
}
}
Sure, here is an example of the resource for WAFv2 (I recommend using this one), with an example rate-limit rule and the association with an ALB:
########### This is the creation of a WAFv2 Web ACL and an example rate limit rule
resource "aws_wafv2_web_acl" "my_web_acl" {
  name  = "my-web-acl"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "RateLimit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        aggregate_key_type = "IP"
        limit              = 500
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "RateLimit"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "my-web-acl"
    sampled_requests_enabled   = false
  }
}

########### This is the association code
resource "aws_wafv2_web_acl_association" "web_acl_association_my_lb" {
  resource_arn = aws_lb.my_lb.arn
  web_acl_arn  = aws_wafv2_web_acl.my_web_acl.arn
}
You can associate a WAF with an ALB (Application Load Balancer) and with CloudFront; you cannot associate it with an ELB (Classic Elastic Load Balancer).
To associate it with an ALB, this is the piece of code:
resource "aws_wafregional_web_acl_association" "foo" {
resource_arn = "${aws_alb.foo.arn}"
web_acl_id = "${aws_wafregional_web_acl.foo.id}"
}
taken from the official documentation
This feature has been proposed but not merged yet.
https://github.com/hashicorp/terraform/issues/10713
https://github.com/hashicorp/terraform/pull/11263