I currently have the following problem: I have a backend behind an nginx-ingress controller that is used as a load balancer in AWS. Normally I should get the user's real IP from either the X-Forwarded-For or X-Real-IP header; however, in my case this IP always points to the IP of the ingress controller. I use Terraform to set up my architecture.
This is my current configuration:
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
namespace = "ingress"
create_namespace = true
chart = "ingress-nginx"
version = "4.0.13"
repository = "https://kubernetes.github.io/ingress-nginx"
values = [
<<-EOF
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
admissionWebhooks:
enabled: false
EOF
]
}
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = helm_release.ingress_nginx.metadata[0].namespace
}
}
data "aws_lb" "ingress_nginx" {
name = regex(
"(^[^-]+)",
data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].hostname
)[0]
}
output "lb" {
value = data.aws_lb.ingress_nginx.name
}`
Does anybody know how to fix the header in this config?
To get the real IP address in the header, both the ingress controller and the NLB (Network Load Balancer) need to use proxy protocol. To do this, in Terraform you need to configure the ingress controller with the proxy-protocol options:
resource "helm_release" "ingress_nginx" {
name = "ingress-nginx"
namespace = "ingress"
create_namespace = true
chart = "ingress-nginx"
version = "4.0.13"
repository = "https://kubernetes.github.io/ingress-nginx"
values = [
<<-EOF
controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
config:
use-forwarded-headers: true
use-proxy-protocol: true
enable-real-ip: true
admissionWebhooks:
enabled: false
EOF
]
}
data "kubernetes_service" "ingress_nginx" {
metadata {
name = "ingress-nginx-controller"
namespace = helm_release.ingress_nginx.metadata[0].namespace
}
}
data "aws_lb" "ingress_nginx" {
name = regex(
"(^[^-]+)",
data.kubernetes_service.ingress_nginx.status[0].load_balancer[0].ingress[0].hostname
)[0]
}
output "lb" {
value = data.aws_lb.ingress_nginx.name
}
Now, it seems that there is currently no way to get the NLB to use proxy protocol just by annotating the service.
There are some issues on GitHub mentioning that the following annotations should solve it:
*service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip"*
and
*service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"*
However, these have not worked for me, so I have enabled proxy protocol manually through the AWS Console by doing the following:
AWS Console --> EC2 --> Target groups (under Load Balancing) --> select the target groups associated with your NLB --> Attributes --> enable Proxy protocol v2
I guess that in Terraform the optimal solution would be to explicitly create a Network Load Balancer with the correct target groups and associate it with the ingress controller service.
Doing this, I now get the correct IP addresses of the clients connecting to the ingress controller.
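For reference, if you do manage the NLB and its target groups in Terraform yourself, the attribute that the console toggles is exposed as proxy_protocol_v2 on aws_lb_target_group. A minimal sketch, assuming a hypothetical target group that fronts the ingress controller's HTTP NodePort (the name, port, and vpc_id variable are illustrative):

resource "aws_lb_target_group" "ingress_http" {
  # Hypothetical target group for the ingress controller's HTTP NodePort.
  name              = "ingress-http"
  port              = 30080       # assumed NodePort
  protocol          = "TCP"
  vpc_id            = var.vpc_id  # assumed variable
  proxy_protocol_v2 = true        # same attribute as "Proxy protocol v2" in the console
}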
I want to expose a few webapps in EKS to the internet in a centrally managed secure way.
In AWS, using an ALB is nice, as it allows you, for example, to terminate TLS and add authentication using Cognito (see here).
To provision an ALB and connect it to the application, there is the aws-load-balancer-controller.
It works fine, but it requires configuring a new ALB for each and every app/ingress:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/tags: Environment=test,Project=cognito
  external-dns.alpha.kubernetes.io/hostname: sample.${COK_MY_DOMAIN}
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  alb.ingress.kubernetes.io/auth-type: cognito
  alb.ingress.kubernetes.io/auth-scope: openid
  alb.ingress.kubernetes.io/auth-session-timeout: '3600'
  alb.ingress.kubernetes.io/auth-session-cookie: AWSELBAuthSessionCookie
  alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
  alb.ingress.kubernetes.io/auth-idp-cognito: '{"UserPoolArn": "$(aws cognito-idp describe-user-pool --user-pool-id $COK_COGNITO_USER_POOL_ID --region $COK_AWS_REGION --query 'UserPool.Arn' --output text)","UserPoolClientId":"${COK_COGNITO_USER_POOL_CLIENT_ID}","UserPoolDomain":"${COK_COGNITO_DOMAIN}.auth.${COK_AWS_REGION}.amazoncognito.com"}'
  alb.ingress.kubernetes.io/certificate-arn: $COK_ACM_CERT_ARN
  alb.ingress.kubernetes.io/target-type: 'ip'
I would love to have one central, well-defined ALB so that the applications no longer need to care about any of this.
My idea was to have a regular nginx-ingress-controller and expose it via a central ALB.
Now the question is: How do I connect the ALB to the nginx-controller?
One way would be to manually configure the ALB and build the target group by hand, which does not feel like a stable solution.
Another way would be to use the aws-load-balancer-controller to connect the nginx controller. In that case, however, nginx does not seem to be able to publish the correct load balancer address, and external-dns will create the wrong DNS records. (Unfortunately, there seems to be no --publish-ingress option in the usual ingress controllers like nginx or traefik.)
Question:
Is there a way to make the nginx-ingress-controller provide the correct address?
Is there maybe an easier way than combining two ingress controllers?
I think I found a good solution.
I set up my environment using Terraform.
After setting up the ALB ingress controller, I create a suitable ingress object, wait until the ALB is up, use Terraform to extract the address of the ALB, and use publish-status-address to tell nginx to publish exactly that address on all of its ingresses:
resource "kubernetes_ingress_v1" "alb" {
wait_for_load_balancer = true
metadata {
name = "alb"
namespace = "kube-system"
annotations = {
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\": 80}, {\"HTTPS\":443}]"
"alb.ingress.kubernetes.io/ssl-redirect" = "443"
"alb.ingress.kubernetes.io/certificate-arn" = local.cert
"alb.ingress.kubernetes.io/target-type" = "ip"
}
}
spec {
ingress_class_name = "alb"
default_backend {
service {
name = "ing-nginx-ingress-nginx-controller"
port {
name = "http"
}
}
}
}
}
resource "helm_release" "ing-nginx" {
name = "ing-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "kube-system"
set {
name = "controller.service.type"
value = "ClusterIP"
}
set {
name = "controller.publishService.enabled"
value = "false"
}
set {
name = "controller.extraArgs.publish-status-address"
value = kubernetes_ingress_v1.alb.status.0.load_balancer.0.ingress.0.hostname
}
set {
name = "controller.config.use-forwarded-headers"
value = "true"
}
set {
name = "controller.ingressClassResource.default"
value = "true"
}
}
It is a bit weird, as it introduces something like a circular dependency, but the ingress simply waits until nginx is finally up and all is well.
This solution is not exactly the same as the --publish-ingress option, as it will not be able to adapt to any changes of the ALB address. Luckily, I don't expect that address to change, so I'm fine with this solution.
You can achieve this with two ingress controllers. The ALB ingress controller will handle the publicly exposed endpoint and route traffic to the nginx ingress controller as its backend. You then configure the nginx ingress controller to manage ingress for your application traffic.
I have a microservice in an ECS instance in AWS behind a WAF, and I want to create these rules:
Allow specific IPs (done)
Allow all connections from inside the VPN (done)
Deny all other requests.
The first two IP sets are created, but I can't make the last one work. I tried creating the IP set with 0.0.0.0/0 and other combinations, without success.
This is my code. I removed IP sets 1 and 2 (which are working); this is IP set 3:
resource "aws_wafv2_ip_set" "ipset" {
name = "${var.app_name}-${var.environment_name}-whitelist-ips"
scope = "REGIONAL"
ip_address_version = "IPV4"
addresses = ["0.0.0.0/0"]
}
module "alb_wafv2" {
source = "trussworks/wafv2/aws"
version = "~> 2.0"
name = "${var.app_name}-${var.environment_name}"
scope = "REGIONAL"
alb_arn = aws_lb.app_lb.arn
associate_alb = true
ip_sets_rule = [
{
name = "${var.app_name}-${var.environment_name}-ip-blacklist"
action = "deny"
priority = 1
ip_set_arn = aws_wafv2_ip_set.ipset.arn
}
]
}
Terraform fails with the following error:

{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "c98b2d3a-ebd0-44e0-a80a-702bc698598b"
  },
  Field: "IP_ADDRESS",
  Message_: "Error reason: The parameter contains formatting that is not valid., field: IP_ADDRESS, parameter: 0.0.0.0/0",
  Parameter: "0.0.0.0/0",
  Reason: "The parameter contains formatting that is not valid."
}
I tried to create an IP set from the AWS Console and got the same error.
So I have two questions: first, how can I do this? And second, is this the best approach?
Thanks in advance
You don't need to block 0.0.0.0/0. After you create your two allow rules, look at "Default web ACL action for requests that don't match any rules" in the WAF console and set the action to Block.
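If you prefer to keep everything in Terraform rather than flipping the default action in the console, the same idea can be expressed with the plain aws_wafv2_web_acl resource instead of the trussworks module: allow your IP sets and make Block the default action. A minimal sketch, assuming a hypothetical allow-list IP set named aws_wafv2_ip_set.whitelist:

# Sketch only: the default action blocks anything not explicitly allowed.
resource "aws_wafv2_web_acl" "this" {
  name  = "example-acl"  # assumed name
  scope = "REGIONAL"

  default_action {
    block {}
  }

  rule {
    name     = "allow-whitelisted-ips"
    priority = 1

    action {
      allow {}
    }

    statement {
      ip_set_reference_statement {
        arn = aws_wafv2_ip_set.whitelist.arn  # assumed allow-list IP set
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "allow-whitelisted-ips"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example-acl"
    sampled_requests_enabled   = true
  }
}

# The ACL would then be attached to the ALB with aws_wafv2_web_acl_association.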
Consider using this trick to bypass the 0.0.0.0/0 limitation:
Divide the IPv4 address space into two halves: 0.0.0.0/1 and 128.0.0.0/1.
The following Terraform snippet was accepted and the IP set was created (tested with Terraform 0.15.4 and AWS provider version 3.42.0):
resource "aws_wafv2_ip_set" "ipset" {
name = "all_internet_kludge"
scope = "REGIONAL"
ip_address_version = "IPV4"
addresses = ["0.0.0.0/1", "128.0.0.0/1"]
}
You can't block all addresses (CIDR /0); it is not supported. From the docs:
AWS WAF supports all IPv4 and IPv6 CIDR ranges except for /0.
Instead, you can use a network ACL to deny all traffic, or use security groups.
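For illustration, a deny-all inbound rule on a network ACL can be written in Terraform as in the sketch below (resource names and the vpc_id variable are assumptions; note that a custom network ACL already ends with an implicit deny, so in practice you mostly add allow rules with lower rule numbers in front of it):

# Sketch only: a network ACL that explicitly denies all inbound IPv4 traffic.
resource "aws_network_acl" "restricted" {
  vpc_id = var.vpc_id  # assumed variable; associate subnets as needed
}

resource "aws_network_acl_rule" "deny_all_inbound" {
  network_acl_id = aws_network_acl.restricted.id
  rule_number    = 32766      # evaluated last; allow rules go below this number
  egress         = false
  protocol       = "-1"
  rule_action    = "deny"
  cidr_block     = "0.0.0.0/0"
}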
Using Terraform, I'm provisioning infrastructure in AWS for my K3s cluster. I have provisioned an NLB with two listeners, on ports 80 and 443, with appropriate self-signed certs. This works: I can access HTTP services in my cluster via the NLB.
resource "tls_private_key" "agents" {
algorithm = "RSA"
}
resource "tls_self_signed_cert" "agents" {
key_algorithm = "RSA"
private_key_pem = tls_private_key.agents.private_key_pem
validity_period_hours = 24
subject {
common_name = "my hostname"
organization = "My org"
}
allowed_uses = [
"key_encipherment",
"digital_signature",
"server_auth"
]
}
resource "aws_acm_certificate" "agents" {
private_key = tls_private_key.agents.private_key_pem
certificate_body = tls_self_signed_cert.agents.cert_pem
}
resource "aws_lb" "agents" {
name = "basic-load-balancer"
load_balancer_type = "network"
subnet_mapping {
subnet_id = aws_subnet.agents.id
allocation_id = aws_eip.agents.id
}
}
resource "aws_lb_listener" "agents_80" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TCP"
port = 80
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_80.arn
}
}
resource "aws_lb_listener" "agents_443" {
load_balancer_arn = aws_lb.agents.arn
protocol = "TLS"
port = 443
certificate_arn = aws_acm_certificate.agents.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.agents_443.arn
}
}
resource "aws_lb_target_group" "agents_80" {
port = 30000
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_lb_target_group" "agents_443" {
port = 30001
protocol = "TCP"
vpc_id = var.vpc.id
depends_on = [
aws_lb.agents
]
}
resource "aws_autoscaling_attachment" "agents_80" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_80.arn
}
resource "aws_autoscaling_attachment" "agents_443" {
autoscaling_group_name = aws_autoscaling_group.agents.name
alb_target_group_arn = aws_lb_target_group.agents_443.arn
}
That's a cut-down version of my code.
I have configured my ingress controller to listen for HTTP and HTTPS on NodePorts 30000 and 30001 respectively. This works too.
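(For context, this is one way fixed NodePorts like these can be pinned when installing the controller via Helm; the sketch below assumes the ingress-nginx chart and its controller.service.nodePorts values, since the question does not name the controller.)

resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  # Expose the controller on fixed NodePorts 30000/30001.
  set {
    name  = "controller.service.type"
    value = "NodePort"
  }

  set {
    name  = "controller.service.nodePorts.http"
    value = "30000"
  }

  set {
    name  = "controller.service.nodePorts.https"
    value = "30001"
  }
}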
The thing that doesn't work: the NLB is terminating TLS, but I need it to pass TLS through. I'm doing this so that I can access the Kubernetes Dashboard (among other apps), but the dashboard requires HTTPS to sign in, something I can't provide if TLS is terminated at the NLB.
I need help configuring the NLB for passthrough. I have searched and searched and can't find any examples. If anyone knows how to configure this, it would be good to get some Terraform code, or even just an idea of the appropriate way to achieve it in AWS so that I can implement it myself in Terraform.
Do you need TLS passthrough, or just TLS communication between the NLB and the server? Or do you just need to configure your server to be aware that the initial connection was TLS?
For TLS passthrough you would install an SSL certificate on the server and delete the certificate from the load balancer. You would change the protocol of the port 443 listener on the load balancer from "TLS" to "TCP". This is not a very typical setup on AWS, and you can't use the free AWS ACM SSL certificates in this configuration; you would have to use something like Let's Encrypt on the server.
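In terms of the Terraform from the question, that roughly means dropping the aws_acm_certificate from the 443 listener and forwarding raw TCP; a sketch of what the modified listener might look like:

# Passthrough sketch: plain TCP on 443, no certificate on the NLB.
# TLS is then terminated by the server behind the target group
# (e.g. with a Let's Encrypt or self-managed certificate).
resource "aws_lb_listener" "agents_443" {
  load_balancer_arn = aws_lb.agents.arn
  protocol          = "TCP"
  port              = 443

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.agents_443.arn
  }
}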
For TLS communication between the NLB and the server, you would install a certificate on the server (a self-signed cert is fine for this) and then just change the target group settings on the load balancer to point to the secure ports on the server.
If you just want to make the server aware that the initial connection protocol was TLS, you would configure the server to use the X-Forwarded-Proto header passed by the load balancer to determine whether the connection is secure.
We are creating infrastructure on GCP for our application, which uses an SSL Proxy Load Balancer on GCP. We use Terraform for our deployments and are struggling to create the SSL Proxy Load Balancer via Terraform.
If anyone could point me to sample code or to resources on creating this load balancer, that would be much appreciated.
You can try the following example:
resource "google_compute_target_ssl_proxy" "default" {
name = "test-proxy"
backend_service = google_compute_backend_service.default.id
ssl_certificates = [google_compute_ssl_certificate.default.id]
}
resource "google_compute_ssl_certificate" "default" {
name = "default-cert"
private_key = file("path/to/private.key")
certificate = file("path/to/certificate.crt")
}
resource "google_compute_backend_service" "default" {
name = "backend-service"
protocol = "SSL"
health_checks = [google_compute_health_check.default.id]
}
resource "google_compute_health_check" "default" {
name = "health-check"
check_interval_sec = 1
timeout_sec = 1
tcp_health_check {
port = "443"
}
}
Take into consideration that the health check points to port 443/TCP; if you want a different port, please change it there.
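One thing the snippet above does not include is the frontend: to actually expose the SSL proxy you would normally also reserve a global address and create a global forwarding rule pointing at the target proxy. A hedged sketch, with illustrative names:

# Sketch only: frontend for the SSL proxy defined above.
resource "google_compute_global_address" "default" {
  name = "ssl-proxy-ip"
}

resource "google_compute_global_forwarding_rule" "default" {
  name       = "ssl-proxy-forwarding-rule"
  target     = google_compute_target_ssl_proxy.default.id
  ip_address = google_compute_global_address.default.address
  port_range = "443"
}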
I have 2 services running in ECS Fargate.
I have set up service discovery with a private DNS namespace, as all my services are within a private subnet.
When I try to hit my config container from another one, I get the following error:
http://config.qcap-prod:50050/config: Get
"http://config.qcap-prod:50050/config": dial tcp: lookup
config.qcap-prod on 10.0.0.2:53: no such host
Below is my Terraform:
resource "aws_service_discovery_service" "config" {
name = "config"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.qcap_prod_sd.id
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 1
}
}
Is there another step I need to do to allow me to hit one container from another within ECS using Fargate?
My Terraform code for the namespace is:
resource "aws_service_discovery_private_dns_namespace" "qcap_prod_sd" {
name = "qcap.prod"
description = "Qcap prod service discovery"
vpc = module.vpc.vpc_id
}
The fix for this was to add the following to the vpc module block:
module "vpc" {
enable_dns_support = true
enable_dns_hostnames = true
}
This allows DNS hostnames to be resolved within my VPC.
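For comparison, if the VPC were managed directly rather than through a module, the equivalent flags live on the aws_vpc resource itself (a sketch, with an assumed CIDR):

# Sketch only: the same DNS settings on a directly managed VPC.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"  # assumed CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true
}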