We are creating infrastructure on GCP for an application that sits behind an SSL Proxy Load Balancer. We use Terraform for our deployments and are struggling to create the SSL Proxy Load Balancer via Terraform.
Could anyone point me to sample code, or to resources that show how to create this load balancer?
You can try with the following example:
resource "google_compute_target_ssl_proxy" "default" {
name = "test-proxy"
backend_service = google_compute_backend_service.default.id
ssl_certificates = [google_compute_ssl_certificate.default.id]
}
resource "google_compute_ssl_certificate" "default" {
name = "default-cert"
private_key = file("path/to/private.key")
certificate = file("path/to/certificate.crt")
}
resource "google_compute_backend_service" "default" {
name = "backend-service"
protocol = "SSL"
health_checks = [google_compute_health_check.default.id]
}
resource "google_compute_health_check" "default" {
name = "health-check"
check_interval_sec = 1
timeout_sec = 1
tcp_health_check {
port = "443"
}
}
Take into consideration that the health check targets port 443/tcp; if you want a different port, change it here.
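Note that the snippet above stops at the target proxy. To actually receive traffic, an SSL Proxy Load Balancer also needs a global forwarding rule (and typically a reserved global IP address). A minimal sketch; the resource names and reserved address here are illustrative, not part of the original answer:

resource "google_compute_global_address" "default" {
  name = "ssl-proxy-ip" # illustrative name
}

resource "google_compute_global_forwarding_rule" "default" {
  name       = "ssl-proxy-forwarding-rule" # illustrative name
  ip_address = google_compute_global_address.default.id
  port_range = "443"
  target     = google_compute_target_ssl_proxy.default.id
}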
My situation is as follows:
I'm creating an AWS ECS Fargate setup (an NGINX container handling SFTP traffic) with Terraform, with a Network Load Balancer in front of it. I have most parts set up and the current setup works. Now I want to add more target groups so I can expose additional ports on the container. My variables are as follows:
variable "sftp_ports" {
type = map
default = {
test1 = {
port = 50003
}
test2 = {
port = 50004
}
}
}
and the actual deployment is as follows:
resource "aws_alb_target_group" "default-target-group" {
name = local.name
port = var.sftp_test_port
protocol = "TCP"
target_type = "ip"
vpc_id = data.aws_vpc.default.id
depends_on = [
aws_lb.proxypoc
]
}
resource "aws_alb_target_group" "test" {
for_each = var.sftp_ports
name = "sftp-target-group-${each.key}"
port = each.value.port
protocol = "TCP"
target_type = "ip"
vpc_id = data.aws_vpc.default.id
depends_on = [
aws_lb.proxypoc
]
}
resource "aws_alb_listener" "ecs-alb-https-listenertest" {
for_each = var.sftp_ports
load_balancer_arn = aws_lb.proxypoc.id
port = each.value.port
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_alb_target_group.default-target-group.arn
}
}
This deploys the listeners and target groups just fine; the only problem is how to configure the registered targets. The aws_ecs_service resource only allows one target group ARN, so I have no idea how to attach the additional target groups to reach my goal. I've been wrapping my head around this problem and scoured the internet, but so far, nothing. Is it possible to configure the ECS service with multiple target group ARNs, or am I supposed to configure a single target group with multiple ports? (As far as I know, the latter is not supported out of the box; I checked the docs as well. But it is possible to add multiple registered targets in the GUI, so I guess it is a possibility.)
I’d like to hear from you guys,
Thanks!
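For reference, newer versions of the AWS provider accept multiple load_balancer blocks on aws_ecs_service (ECS supports attaching a service to several target groups), so one option is to drive them from the same variable with a dynamic block. This is only a sketch: the cluster, task definition, subnets, and container name below are placeholders, not taken from the question.

resource "aws_ecs_service" "sftp" {
  name            = "sftp"                           # placeholder
  cluster         = aws_ecs_cluster.proxypoc.id      # placeholder cluster
  task_definition = aws_ecs_task_definition.sftp.arn # placeholder task definition
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets = var.subnet_ids # placeholder list of private subnet IDs
  }

  # One load_balancer block per target group created with for_each above.
  dynamic "load_balancer" {
    for_each = var.sftp_ports
    content {
      target_group_arn = aws_alb_target_group.test[load_balancer.key].arn
      container_name   = "nginx-sftp" # placeholder, must match the task definition
      container_port   = load_balancer.value.port
    }
  }
}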
I currently have the following problem. I have a backend behind an nginx-ingress controller used as a load balancer in AWS. Normally I should get the user's real IP from either the X-Forwarded-For or X-Real-IP header, but in my case this IP always points to the ingress controller itself. I use Terraform to set up my architecture.
This is my current configuration
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
namespace = "ingress"
create_namespace = true
chart = "ingress-nginx"
version = "4.0.13"
repository = "https://kubernetes.github.io/ingress-nginx"
values = [
<<-EOF
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
admissionWebhooks:
enabled: false
EOF
]
}
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = helm_release.ingress_nginx.metadata[0].namespace
}
}
data "aws_lb" "ingress_nginx" {
name = regex(
"(^[^-]+)",
data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].hostname
)[0]
}
output "lb" {
value = data.aws_lb.ingress_nginx.name
}`
Does anybody know how to fix the header in this config?
To get the real IP address in the header, both the ingress controller and the NLB (Network Load Balancer) need to use the Proxy Protocol. To do this, in Terraform you need to configure the ingress controller with the proxy-protocol options:
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
namespace = "ingress"
create_namespace = true
chart = "ingress-nginx"
version = "4.0.13"
repository = "https://kubernetes.github.io/ingress-nginx"
values = [
<<-EOF
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
config:
use-forwarded-headers: true
use-proxy-protocol: true
enable-real-ip: true
admissionWebhooks:
enabled: false
EOF
]
}
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = helm_release.ingress_nginx.metadata[0].namespace
}
}
data "aws_lb" "ingress_nginx" {
name = regex(
"(^[^-]+)",
data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].hostname
)[0]
}
output "lb" {
value = data.aws_lb.ingress_nginx.name
}
Now, it seems that there is currently no way to get the NLB to use the proxy protocol just by annotating the service.
There are some issues on GitHub suggesting that the following annotations would solve it:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"
and
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
However, these have not worked for me. So, I have enabled proxy protocol manually through the AWS Console by doing the following:
AWS Console --> EC2 --> Target Groups (under Load Balancing) --> select the target groups associated with your NLB --> Attributes --> enable "Proxy protocol v2"
I guess that in Terraform the optimal solution would be to explicitly create a Network Load Balancer with the correct target group and associate it with the ingress controller service.
Doing this, now I get the correct IP addresses of the clients connecting to the ingress controller.
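If you do end up managing the target group in Terraform as suggested, the attribute that corresponds to the console toggle above is proxy_protocol_v2 on aws_lb_target_group. A minimal sketch; the name and VPC variable are placeholders:

resource "aws_lb_target_group" "ingress_nginx" {
  name              = "ingress-nginx" # placeholder
  port              = 443
  protocol          = "TCP"
  target_type       = "ip"
  vpc_id            = var.vpc_id # placeholder
  proxy_protocol_v2 = true       # same setting as the console toggle above
}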
I'm trying to learn WireGuard. I found this great tutorial on how to install it on GCP:
https://sreejithag.medium.com/set-up-wireguard-vpn-with-google-cloud-57bb3267a6ef
Very basic (for somebody new to WireGuard), but it did work. The tutorial shows a VM being provisioned with IP forwarding enabled through the GCP web interface.
I wanted to set this up with Terraform. I searched the Terraform Registry and found this:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_forwarding_rule
Here's my main.tf with the virtual machine provisioning. Where would I put something like IP forwarding without Terraform complaining?
code---
# This is the provider used to spin up the gcloud instance
provider "google" {
  project     = var.project_name
  region      = var.region_name
  zone        = var.zone_name
  credentials = "mycredentials.json"
}

# Locks the version of Terraform for this particular use case
terraform {
  required_version = "0.14.6"
}

# This creates the google instance
resource "google_compute_instance" "vm_instance" {
  name         = "development-vm"
  machine_type = var.machine_size
  tags         = ["allow-http", "allow-https", "allow-dns", "allow-tor", "allow-ssh", "allow-2277", "allow-mosh", "allow-whois", "allow-openvpn", "allow-wireguard"] # FIREWALL

  boot_disk {
    initialize_params {
      image = var.image_name
      size  = var.disk_size_gb
    }
  }

  network_interface {
    network = "default"

    # Associate our public IP address with this instance
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  # Connect to the instance via SSH and remotely execute our script
  provisioner "remote-exec" {
    script = var.script_path

    connection {
      type        = "ssh"
      host        = google_compute_address.static.address
      user        = var.username
      private_key = file(var.private_key_path)
    }
  }
}

# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
  name = "vm-public-address"
}
For WireGuard, you need to enable IP forwarding on the VM itself. The data source you found is for load balancer forwarding rules, not for instance IP forwarding.
Instead, enable the can_ip_forward attribute on the google_compute_instance resource.
can_ip_forward - (Optional) Whether to allow sending and receiving of
packets with non-matching source or destination IPs. This defaults to
false.
resource "google_compute_instance" "vm_instance" {
name = "development-vm"
machine_type = var.machine_size
can_ip_forward = true
....
}
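Separately (an assumption on my part, prompted by the allow-wireguard tag already on the instance): WireGuard listens on UDP 51820 by default, so you also need a firewall rule matching that tag, along these lines:

resource "google_compute_firewall" "allow_wireguard" {
  name    = "allow-wireguard"
  network = "default"

  allow {
    protocol = "udp"
    ports    = ["51820"] # WireGuard's default listen port
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["allow-wireguard"] # matches the tag on the instance above
}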
Using Terraform I'm provisioning infrastructure in AWS for my K3s cluster. I have provisioned an NLB with two listeners, on ports 80 and 443, with appropriate self-signed certs. This works: I can access HTTP services in my cluster via the NLB.
resource "tls_private_key" "agents" {
algorithm = "RSA"
}
resource "tls_self_signed_cert" "agents" {
key_algorithm = "RSA"
private_key_pem = tls_private_key.agents.private_key_pem
validity_period_hours = 24
subject {
common_name = "my hostname"
organization = "My org"
}
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth"
]
}
resource "aws_acm_certificate" "agents" {
private_key = tls_private_key.agents.private_key_pem
certificate_body = tls_self_signed_cert.agents.cert_pem
}
resource "aws_lb" "agents" {
name = "basic-load-balancer"
load_balancer_type = "network"
subnet_mapping {
subnet_id = aws_subnet.agents.id
allocation_id = aws_eip.agents.id
}
}
resource "aws_lb_listener" "agents_80" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_80.arn
}
}
resource "aws_lb_listener" "agents_443" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TLS"
port = 443
certificate_arn = aws_acm_certificate.agents.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_443.arn
}
}
resource "aws_lb_target_group" "agents_80" {
port = 30000
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_lb_target_group" "agents_443" {
port = 30001
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_autoscaling_attachment" "agents_80" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_80.arn
}
resource "aws_autoscaling_attachment" "agents_443" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_443.arn
}
That's a cut-down version of my code.
I have configured my ingress controller to listen for HTTP and HTTPS on NodePorts 30000 and 30001 respectively. This works too.
The thing that doesn't work is that the NLB terminates TLS, but I need it to pass TLS through. I'm doing this so that I can access the Kubernetes Dashboard (among other apps), and the dashboard requires HTTPS to sign in, which I can't provide if TLS is terminated at the NLB.
I need help configuring the NLB for passthrough. I have searched and searched and can't find any examples. If anyone knows how to configure this, it would be good to get some Terraform code, or even just an idea of the appropriate way to achieve it in AWS so that I can implement it myself.
Do you need TLS passthrough, or just TLS communication between the NLB and the server? Or do you just need to configure your server to be aware that the initial connection was TLS?
For TLS passthrough you would install an SSL certificate on the server and delete the certificate from the load balancer. You would change the protocol of the port 443 listener on the load balancer from "TLS" to "TCP". This is not a very typical setup on AWS, and you can't use the free AWS ACM SSL certificates in this configuration; you would have to use something like Let's Encrypt on the server.
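Using the resource names from the question, a minimal sketch of that passthrough variant is the 443 listener switched to plain TCP with no certificate attached (the aws_acm_certificate resource is then no longer referenced):

resource "aws_lb_listener" "agents_443" {
  load_balancer_arn = aws_lb.agents.arn
  protocol          = "TCP" # plain TCP, so TLS passes through to the target
  port              = 443
  # no certificate_arn: TLS now terminates on the server behind the NLB

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.agents_443.arn
  }
}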
For TLS communication between the NLB and the server, you would install a certificate on the server, a self-signed cert is fine for this, and then just change the target group settings on the load balancer to point to the secure ports on the server.
If you just want to make the server aware that the initial connection protocol was TLS, you would configure the server to use the x-forwarded-proto header passed by the load balancer to determine if the connection is secure.
I have 2 services running in ECS Fargate.
I have set up service discovery with a private DNS namespace, as all my services are in a private subnet.
When I try to hit my config container from another container, I get the following error:
http://config.qcap-prod:50050/config: Get "http://config.qcap-prod:50050/config": dial tcp: lookup config.qcap-prod on 10.0.0.2:53: no such host
Below is my Terraform
resource "aws_service_discovery_service" "config" {
name = "config"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.qcap_prod_sd.id
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 1
}
}
Is there another step I need to take so that one container can reach another within ECS on Fargate?
My Terraform code for the namespace is:
resource "aws_service_discovery_private_dns_namespace" "qcap_prod_sd" {
name = "qcap.prod"
description = "Qcap prod service discovery"
vpc = module.vpc.vpc_id
}
The fix for this was to add
module "vpc" {
enable_dns_support = true
enable_dns_hostnames = true
}
to the module block for the VPC, to allow DNS hostnames to be resolved within my VPC.
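For context, this is roughly how those two arguments sit in a full module block, assuming the community terraform-aws-modules/vpc/aws module; the name and CIDR below are placeholders:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "qcap-prod"   # placeholder
  cidr = "10.0.0.0/16" # placeholder

  enable_dns_support   = true
  enable_dns_hostnames = true
}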