external-dns configuration for multiple environments - amazon-web-services

How can I set up a Terraform external-dns configuration for multiple environments (dev/staging/pre-prod)?
module "eks-external-dns" {
source = "lablabs/eks-external-dns/aws"
version = "1.0.0"
namespace = "kube-system"
cluster_identity_oidc_issuer = module.eks.cluster_oidc_issuer_url
cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
settings = {
"policy" = "sync"
"source"= "service"
"source"= "ingress"
"log-level"= "verbose"
"log-format"= "text"
"interval"= "1m"
"provider" = "aws"
"aws-zone-type" = "public"
"registry" = "txt"
"txt-owner-id" = "XXXXXXXXXXXXXX"
}
}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "3"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xxx:certificate/aaaa-bbb-ccc-dd-ffff
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/tags: createdBy=aws-controller
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
    external-dns.alpha.kubernetes.io/hostname: keycloak-ingress-controller
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/load-balancer-name: acme-lb
    alb.ingress.kubernetes.io/group.name: acme-group
  name: keycloak-ingress-controller
spec:
  rules:
    - host: dev.keycloak.acme.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
In my current situation, only my x.domain record is processed by external-dns.
I want it to also handle URLs like dev.myapp.example.com, staging.myapp.example.com, and so on.
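For reference, external-dns (with the ingress source) builds records from the host values under spec.rules and from the external-dns.alpha.kubernetes.io/hostname annotation, and it only acts on names that fall inside its configured domain filter. A minimal, hypothetical sketch of the annotation carrying a fully-qualified name (dev.keycloak.acme.com is taken from the rule above; this snippet is not part of the original manifest):

metadata:
  annotations:
    # hypothetical: a fully-qualified name inside the hosted zone, instead of a bare string
    external-dns.alpha.kubernetes.io/hostname: dev.keycloak.acme.com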

I resolved it by passing Helm values directly instead of using settings.
module "eks-external-dns" {
source = "lablabs/eks-external-dns/aws"
version = "1.0.0"
# insert the 2 required variables here
namespace = "kube-system"
cluster_identity_oidc_issuer = module.eks.cluster_oidc_issuer_url
cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
values = yamlencode({
"sources" : ["service", "ingress"]
"logLevel" : "debug"
"provider" : "aws"
"registry" : "txt"
"txtOwnerId" : "xxxx"
"txtPrefix" : "external-dns"
"policy" : "sync"
"domainFilters" : [
"acme.com"
]
"publishInternalServices" : "true"
"triggerLoopOnEvent" : "true"
"interval" : "15s"
"podLabels" : {
"app" : "aws-external-dns-helm"
}
})
}
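To reuse the same module across dev/staging/pre-prod, the environment-specific pieces (TXT owner ID, domain filter) can be lifted into variables. A minimal sketch, assuming a hypothetical var.environment and var.domain that are not part of the original code:

variable "environment" {
  type = string # e.g. "dev", "staging", "pre-prod"
}

variable "domain" {
  type    = string
  default = "acme.com"
}

module "eks-external-dns" {
  source  = "lablabs/eks-external-dns/aws"
  version = "1.0.0"

  namespace                        = "kube-system"
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn

  values = yamlencode({
    "sources" : ["service", "ingress"]
    "provider" : "aws"
    "policy" : "sync"
    "registry" : "txt"
    # a distinct owner ID per environment keeps the TXT registries from clashing
    "txtOwnerId" : "external-dns-${var.environment}"
    "txtPrefix" : "external-dns"
    # dev.myapp.acme.com, staging.myapp.acme.com, ... all fall under this filter
    "domainFilters" : [var.domain]
  })
}

Each environment can then supply its own environment value, for example through a tfvars file or workspace.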

Related

terraform data kubernetes_ingress is returning null values

I am trying to retrieve the hostname of the Application Load Balancer that I configured as an ingress.
The scenario is: I am deploying a Helm chart with Terraform and have configured an ALB as the ingress. The ALB and the Helm chart were deployed normally and are working; however, I need to retrieve the hostname of this ALB to create a Route53 record pointing to it. When I try to retrieve this information, it returns null values.
According to Terraform's own documentation, the correct way is as follows:
data "kubernetes_ingress" "example" {
metadata {
name = "terraform-example"
}
}
resource "aws_route53_record" "example" {
zone_id = data.aws_route53_zone.k8.zone_id
name = "example"
type = "CNAME"
ttl = "300"
records = [data.kubernetes_ingress.example.status.0.load_balancer.0.ingress.0.hostname]
}
I did exactly as in the documentation (even the provider version is the latest); here is an excerpt of my code:
# Helm release resource
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  version          = "4.9.7"
  create_namespace = true

  values = [
    templatefile("${path.module}/settings/helm/argocd/values.yaml", {
      certificate_arn = module.acm_certificate.arn
    })
  ]
}

# Kubernetes Ingress data source to retrieve the ingress hostname from the helm deployment (ALB hostname)
data "kubernetes_ingress" "argocd" {
  metadata {
    name      = "argocd-server"
    namespace = helm_release.argocd.namespace
  }

  depends_on = [
    helm_release.argocd
  ]
}
# Route53 record creation
resource "aws_route53_record" "argocd" {
  name    = "argocd"
  type    = "CNAME"
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id
  records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
}
When I run terraform apply I get the following error:
╷
│ Error: Attempt to index null value
│
│ on route53.tf line 67, in resource "aws_route53_record" "argocd":
│ 67: records = [data.kubernetes_ingress.argocd.status.0.load_balancer.0.ingress.0.hostname]
│ ├────────────────
│ │ data.kubernetes_ingress.argocd.status is null
│
│ This value is null, so it does not have any indices.
My ingress configuration (deployed by Helm Release):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  uid: 646f6ea0-7991-4a13-91d0-da236164ac3e
  resourceVersion: '4491'
  generation: 1
  creationTimestamp: '2022-08-08T13:29:16Z'
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-server
    app.kubernetes.io/part-of: argocd
    helm.sh/chart: argo-cd-4.9.7
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/certificate-arn: >-
      arn:aws:acm:us-east-1:124416843011:certificate/7b79fa2c-d446-423d-b893-c8ff3d92a5e1
    alb.ingress.kubernetes.io/group.name: altb-devops-eks-support-alb
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/load-balancer-name: altb-devops-eks-support-alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/tags: >-
      Name=altb-devops-eks-support-alb,Stage=Support,CostCenter=Infrastructure,Project=Shared
      Infrastructure,Team=DevOps
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
  finalizers:
    - group.ingress.k8s.aws/altb-devops-eks-support-alb
  managedFields:
    - manager: controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"group.ingress.k8s.aws/altb-devops-eks-support-alb": {}
    - manager: terraform-provider-helm_v2.6.0_x5
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:16Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:alb.ingress.kubernetes.io/backend-protocol: {}
            f:alb.ingress.kubernetes.io/certificate-arn: {}
            f:alb.ingress.kubernetes.io/group.name: {}
            f:alb.ingress.kubernetes.io/listen-ports: {}
            f:alb.ingress.kubernetes.io/load-balancer-name: {}
            f:alb.ingress.kubernetes.io/scheme: {}
            f:alb.ingress.kubernetes.io/tags: {}
            f:alb.ingress.kubernetes.io/target-type: {}
            f:kubernetes.io/ingress.class: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:helm.sh/chart: {}
        f:spec:
          f:rules: {}
    - manager: controller
      operation: Update
      apiVersion: networking.k8s.io/v1
      time: '2022-08-08T13:29:20Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:loadBalancer:
            f:ingress: {}
      subresource: status
  selfLink: /apis/networking.k8s.io/v1/namespaces/argocd/ingresses/argocd-server
status:
  loadBalancer:
    ingress:
      - hostname: >-
          internal-altb-devops-eks122-support-alb-1845221539.us-east-1.elb.amazonaws.com
spec:
  rules:
    - host: argocd.altb.co
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
The correct Terraform data source for Ingress is kubernetes_ingress_v1:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/ingress_v1
data "kubernetes_ingress_v1" "argocd" {
metadata {
name = "argocd-server"
namespace = helm_release.argocd.namespace
}
depends_on = [
helm_release.argocd
]
}
This should work.
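For completeness, a minimal sketch of the Route53 record from the question pointing at the v1 data source (same attribute path as in the question, only the data source type changes):

resource "aws_route53_record" "argocd" {
  name    = "argocd"
  type    = "CNAME"
  ttl     = 600
  zone_id = aws_route53_zone.r53_zone.id
  # the v1 data source populates status once the ALB hostname exists
  records = [data.kubernetes_ingress_v1.argocd.status.0.load_balancer.0.ingress.0.hostname]
}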

Istio EnvoyFilter HTTP_ROUTE example

I am trying to write an EnvoyFilter for the istio-ingressgateway routes:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_ROUTE
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: '*.example.net:8000'
            route:
              name: 'cfs'
      patch:
        operation: MERGE
        value:
          typed_config:
            '#type': type.googleapis.com/envoy.config.route.v3.Route
            route:
              cluster_not_found_response_code: NOT_FOUND
This filter is not working. Where did I make a mistake?
Istio v1.9.3.
I expect cluster_not_found_response_code: NOT_FOUND to appear in this configuration:
$ istioctl proxy-config route istio-ingressgateway-5abc45c5cb-44l47.istio-system -o json
[
  {
    "name": "http.8000",
    "virtualHosts": [
      {
        "name": "*.example.net:8000",
        "domains": [
          "*.example.net",
          "*.example.net:8000"
        ],
        "routes": [
          {
            "name": "cfs",
            "match": {
              "prefix": "/upload",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|8000||cfs.default.svc.cluster.local",
              "timeout": "0s",
              "retryPolicy": {
                "retryOn": "retriable-status-codes,connect-failure,reset",
                "numRetries": 4,
                "retryPriority": {
                  "name": "envoy.retry_priorities.previous_priorities",
                  "typedConfig": {
                    "#type": "type.googleapis.com/envoy.config.retry.previous_priorities.PreviousPrioritiesConfig",
                    "updateFrequency": 2
                  }
                },
                "retryHostPredicate": [
                  {
                    "name": "envoy.retry_host_predicates.previous_hosts"
                  }
                ],
                "hostSelectionRetryMaxAttempts": "5",
                "retriableStatusCodes": [
                  404
                ]
              },
              "cors": {
                "allowOriginStringMatch": [
                  {
                    "exact": "*"
                  }
                ],
                "allowMethods": "GET,POST,DELETE,OPTIONS",
                "allowHeaders": "Content-Type,Content-Disposition,Origin,Accept",
                "maxAge": "86400",
                "allowCredentials": false,
                "filterEnabled": {
                  "defaultValue": {
                    "numerator": 100
                  }
                }
              },
              "maxStreamDuration": {
                "maxStreamDuration": "0s"
              }
            },
            "metadata": {
              "filterMetadata": {
                "istio": {
                  "config": "/apis/networking.istio.io/v1alpha3/namespaces/default/virtual-service/cara"
                }
              }
            },
            "decorator": {
              "operation": "cfs.default.svc.cluster.local:8000/upload*"
            },
            "responseHeadersToRemove": [
              "x-envoy-upstream-service-time"
            ]
          },
          ...
        ],
        "includeRequestAttemptCount": true
      },
      ...
    ],
    "validateClusters": false
  },
  ...
]
I am unable to change any route configuration value; cluster_not_found_response_code is just an example.
On my environment, the given EnvoyFilter definition does not pass schema validation at the CRD level:
CRD validation error while creating EnvoyFilter resource:
Warning: Envoy filter: unknown field "typed_config" in envoy.config.route.v3.Route
envoyfilter.networking.istio.io/retry-faulty created
Judging by the warning, the v3 Route type does not accept a typed_config field at that level.
Workaround:
You may try to specify a direct response at the VirtualService level, as described in this GitHub issue.
This works fine for me:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: retry-faulty
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_ROUTE
      match:
        context: GATEWAY
        routeConfiguration:
          vhost:
            name: 'productpage.com:80'
            route:
              name: 'http.80'
      patch:
        operation: MERGE
        value:
          typed_config:
            "#type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
            route:
              cluster_not_found_response_code: NOT_FOUND
Response headers from istio:
HTTP/1.1 404 Not Found
content-length: 9
content-type: text/plain
date: Fri, 11 Jun 2021 12:46:01 GMT
server: istio-envoy

redis fault injection using istio and envoy filter

I am trying to inject a 2s delay into a Redis instance (which is not in the cluster) using Istio.
So, first I create an ExternalName Kubernetes Service in order to reach the external Redis, so that Istio knows about this service. This works. However, when I create the EnvoyFilter to add the fault, I don't see a redis_proxy filter in istioctl pc listeners <pod-name> -o json for a pod in the same namespace (and the delay is not introduced).
apiVersion: v1
kind: Namespace
metadata:
  name: chaos
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
    - name: tcp
      protocol: TCP
      port: 6379
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: redis-proxy-filter
  namespace: chaos
spec:
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        listener:
          portNumber: 6379
          filterChain:
            filter:
              name: "envoy.filters.network.redis_proxy"
      patch:
        operation: MERGE
        value:
          name: "envoy.filters.network.redis_proxy"
          typed_config:
            "#type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
            faults:
              - fault_type: DELAY
                fault_enabled:
                  default_value:
                    numerator: 100
                    denominator: HUNDRED
                delay: 2s
Can someone give an idea? Thanks.
I tried your YAML on my local Istio 1.8.2. Here are a few changes that might help you (the first one is sketched right after this list):
- Set PILOT_ENABLE_REDIS_FILTER in the istiod env vars; otherwise the filter name will be "name": "envoy.filters.network.tcp_proxy".
- Add a match context:
  match:
    context: SIDECAR_OUTBOUND
- Use the redis protocol on the Service port:
  ports:
    - name: redis-proxy
      port: 6379
      appProtocol: redis
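A minimal sketch of turning on that pilot feature flag through an IstioOperator overlay, assuming an istioctl/operator based install (this overlay is not part of the original answer):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
          # feature flag that makes istiod emit the redis_proxy network filter
          - name: PILOT_ENABLE_REDIS_FILTER
            value: "true"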
With those changes I can see the following:
% istioctl pc listener nginx.chaos --port 6379 -o json
[
  {
    "name": "0.0.0.0_6379",
    "address": {
      "socketAddress": {
        "address": "0.0.0.0",
        "portValue": 6379
      }
    },
    "filterChains": [
      {
        "filters": [
          {
            "name": "envoy.filters.network.redis_proxy",
            "typedConfig": {
              "#type": "type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy",
              "statPrefix": "outbound|6379||redis-proxy.chaos.svc.cluster.local",
              "settings": {
                "opTimeout": "5s"
              },
              "latencyInMicros": true,
              "prefixRoutes": {
                "catchAllRoute": {
                  "cluster": "outbound|6379||redis-proxy.chaos.svc.cluster.local"
                }
              },
              "faults": [
                {
                  "faultEnabled": {
                    "defaultValue": {
                      "numerator": 100
                    }
                  },
                  "delay": "2s"
                }
              ]
            }
          }
        ]
      }
    ],
    "deprecatedV1": {
      "bindToPort": false
    },
    "trafficDirection": "OUTBOUND"
  }
]

Is there a k8s annotation for setting the name of an auto-created LB in AWS

For AWS cloud, I can create a Kubernetes Ingress YAML containing
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":
      { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: <<<my-cert-arn>>>
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/shield-advanced-protection: "true"
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    alb.ingress.kubernetes.io/tags: environment=prod,client=templar-order,name=templar-prod-app
    alb.ingress.kubernetes.io/target-type: ip
and the tags come through in the AWS console, but the load balancer name is not set.
I've read the docs. What annotation can I use to set the load balancer name?
Unfortunately this feature is not yet supported, so you can't change the LB name using an annotation.
The name is being generated here:
func (gen *NameGenerator) NameLB(namespace string, ingressName string) string {
    .....
}
However, there is a feature request on GitHub that looks promising. You might want to follow that issue for updates.
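As a side note, the Ingress examples earlier on this page use an alb.ingress.kubernetes.io/load-balancer-name annotation; to the best of my knowledge that annotation is honoured by newer releases of the AWS Load Balancer Controller (v2.2+), so on a recent controller the name can be set directly (the value below reuses the name tag from the question):

metadata:
  annotations:
    # supported by AWS Load Balancer Controller v2.2+, not by the older aws-alb-ingress-controller
    alb.ingress.kubernetes.io/load-balancer-name: templar-prod-app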

gcp ingress fails to be created - Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found

I'm trying a simple ingress in GKE, following the example from https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress.
The pods are up and running and the services are active. When I create the ingress I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 48m loadbalancer-controller default/my-ingress
Warning Sync 2m32s (x25 over 48m) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
I can't find the source of the problem. Any suggestion of where to look?
I have checked the cluster add-ons and permissions:
httpLoadBalancing is enabled
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/trace.append
NAME READY STATUS RESTARTS AGE
hello-kubernetes-deployment-f6cb6cf4f-kszd9 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-lw49t 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-qqgxs 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-4c2bm 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-dmcqf 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-rnpcc 1/1 Running 0 1h
Name: hello-world
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-world","namespace":"default"},"spec":{"ports":[{"port":6000...
Selector: department=world,greeting=hello
Type: NodePort
IP: 10.59.254.88
Port: <unset> 60000/TCP
TargetPort: 50000/TCP
NodePort: <unset> 30418/TCP
Endpoints: 10.56.2.7:50000,10.56.3.6:50000,10.56.6.4:50000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: hello-kubernetes
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-kubernetes","namespace":"default"},"spec":{"ports":[{"port"...
Selector: department=kubernetes,greeting=hello
Type: NodePort
IP: 10.59.251.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32464/TCP
Endpoints: 10.56.2.6:8080,10.56.6.3:8080,10.56.8.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: my-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.56.0.9:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* hello-world:60000 (<none>)
/kube hello-kubernetes:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/*"},{"backend":{"serviceName":"hello-kubernetes","servicePort":80},"path":"/kube"}]}}]}}
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 107s loadbalancer-controller default/my-ingress
Warning Sync 66s (x15 over 107s) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
Pulumi Cluster Config
{
  "name": "test-cluster",
  "region": "europe-west4",
  "addonsConfig": {
    "httpLoadBalancing": {
      "disabled": false
    },
    "kubernetesDashboard": {
      "disabled": false
    }
  },
  "ipAllocationPolicy": {},
  "pools": [
    {
      "name": "default-pool",
      "initialNodeCount": 1,
      "nodeConfig": {
        "oauthScopes": [
          "https://www.googleapis.com/auth/compute",
          "https://www.googleapis.com/auth/devstorage.read_only",
          "https://www.googleapis.com/auth/service.management",
          "https://www.googleapis.com/auth/servicecontrol",
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring",
          "https://www.googleapis.com/auth/trace.append",
          "https://www.googleapis.com/auth/cloud-platform"
        ],
        "machineType": "n1-standard-1",
        "labels": {
          "pool": "api-zero"
        }
      },
      "management": {
        "autoUpgrade": false,
        "autoRepair": true
      },
      "autoscaling": {
        "minNodeCount": 1,
        "maxNodeCount": 20
      }
    },
    {
      "name": "outbound",
      "initialNodeCount": 2,
      "nodeConfig": {
        "machineType": "custom-1-1024",
        "oauthScopes": [
          "https://www.googleapis.com/auth/compute",
          "https://www.googleapis.com/auth/devstorage.read_only",
          "https://www.googleapis.com/auth/service.management",
          "https://www.googleapis.com/auth/servicecontrol",
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring",
          "https://www.googleapis.com/auth/trace.append",
          "https://www.googleapis.com/auth/cloud-platform"
        ],
        "labels": {
          "pool": "outbound"
        }
      },
      "management": {
        "autoUpgrade": false,
        "autoRepair": true
      }
    }
  ]
}
The author of this post eventually figured out that the issue occurs only when the cluster is bootstrapped with Pulumi.
It looks like you are missing a default backend (the L7 HTTP load balancer backend) for your default ingress controller. From what I observed, it is not deployed when you have the Istio add-on enabled in your GKE cluster (Istio has its own default ingress/egress gateways).
Please verify whether it is up and running in your cluster:
kubectl get pod -n kube-system | grep l7-default-backend
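If that grep returns nothing, a couple of follow-up checks may help; this is a hedged sketch assuming the usual GKE names (the l7-default-backend Deployment and the default-http-backend Service, the latter also shown as the default backend in the describe output above):

# default backend Deployment and its Service in kube-system
kubectl get deployment l7-default-backend -n kube-system
kubectl get svc default-http-backend -n kube-system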