LetsEncrypt not verifying via Kubernetes ingress and loadbalancer in AWS EKS
ClusterIssuer
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: my#email.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
  - hosts:
    - echo0.site.com
    secretName: echo-tls
  rules:
  - host: echo0.site.com
    http:
      paths:
      - backend:
          serviceName: echo0
          servicePort: 80
Events
12m Normal IssuerNotReady certificaterequest/echo-tls-3171246787 Referenced issuer does not have a Ready status condition
12m Normal GeneratedKey certificate/echo-tls Generated a new private key
12m Normal Requested certificate/echo-tls Created new CertificateRequest resource "echo-tls-3171246787"
4m29s Warning ErrVerifyACMEAccount clusterissuer/letsencrypt-staging Failed to verify ACME account: context deadline exceeded
4m29s Warning ErrInitIssuer clusterissuer/letsencrypt-staging Error initializing issuer: context deadline exceeded
kubectl describe certificate
Name:         echo-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1alpha3
Kind:         Certificate
Metadata:
  Creation Timestamp:  2020-04-04T23:57:22Z
  Generation:          1
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  echo-ingress
    UID:                   1018290f-d7bc-4f7c-9590-b8924b61c111
  Resource Version:  425968
  Self Link:         /apis/cert-manager.io/v1alpha3/namespaces/default/certificates/echo-tls
  UID:               0775f965-22dc-4053-a6c2-a87b46b3967c
Spec:
  Dns Names:
    echo0.site.com
  Issuer Ref:
    Group:       cert-manager.io
    Kind:        ClusterIssuer
    Name:        letsencrypt-staging
  Secret Name:   echo-tls
Status:
  Conditions:
    Last Transition Time:  2020-04-04T23:57:22Z
    Message:               Waiting for CertificateRequest "echo-tls-3171246787" to complete
    Reason:                InProgress
    Status:                False
    Type:                  Ready
Events:
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  GeneratedKey  18m   cert-manager  Generated a new private key
  Normal  Requested     18m   cert-manager  Created new CertificateRequest resource "echo-tls-3171246787"
I've been going at this for a few days now. I have tried different domains but end up with the same results. Am I missing any steps here? It is based on this tutorial here.
Any help would be appreciated.
Usually with Go applications the error "context deadline exceeded" means the connection timed out. That sounds like the cert-manager pod was not able to reach the ACME API, which can happen if your cluster has an outbound firewall and/or does not have a NAT or Internet Gateway attached to its subnets.
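A quick way to confirm that from inside the cluster is to run a throwaway pod and hit the ACME staging directory directly (a minimal check; the pod name and image here are arbitrary):

kubectl run acme-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv https://acme-staging-v02.api.letsencrypt.org/directory

If that also times out, the problem is cluster egress (NAT/Internet Gateway, security groups, or an egress firewall) rather than cert-manager itself.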
This might be worthwhile to look at; I was facing a similar issue.
Change the LoadBalancer in the ingress-nginx service.
Add/change externalTrafficPolicy: Cluster.
Reason being, the pod with the certificate issuer wound up on a different node than the load balancer, so it couldn't talk to itself through the ingress.
Below is the complete block taken from https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # CHANGE/ADD THIS
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
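If the ingress-nginx service is already deployed, you can also flip the field in place instead of re-applying the whole manifest (a sketch, assuming the service name and namespace shown above):

kubectl -n ingress-nginx patch svc ingress-nginx \
  -p '{"spec": {"externalTrafficPolicy": "Cluster"}}'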
Related
I am trying to launch an application load balancer (ALB) on AWS EKS. I have already installed the AWS Load Balancer Controller in my cluster successfully. The tutorial I am following tells me that after creating and applying the ingress, I should see an ALB created in AWS, which I don't. What could be the reason? Am I missing something?
I have already created and started apple-service and banana-service and their pods too.
Here's the ingress YAML. I can apply this ingress successfully, but the ALB doesn't launch.
I am using EKS Kubernetes version 1.22
kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 19m
coredns 2/2 2 2 38m
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-awesome-app-ingress <none> testingkarlo.ml 80 14m
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-awesome-app-ingress
  labels:
    app: my-awesome-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: testingkarlo.ml
    http:
      paths:
      - path: /apple
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
      - path: /banana
        pathType: Prefix
        backend:
          service:
            name: banana-service
            port:
              number: 5678
apple.yaml
kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678 # Default port for image
      targetPort: 5678
  type: LoadBalancer
banana.yaml is similar to the above.
After applying apple.yaml and banana.yaml, two Classic Load Balancers are launched in AWS.
Here is my setup.
I have 2 AWS accounts.
Applications account
Monitoring account
The Applications account has EKS + Istio + application-related microservices + Promtail agents.
The Monitoring account has a centralized logging system within EKS + Istio (Grafana, Prometheus, and Loki pods running).
From the Applications account, I want to push logs to Loki in the Monitoring account. I tried exposing the Loki service outside the Monitoring account, but I am facing issues setting the Loki URL as https://<DNS_URL>/loki. I tried this change using the suggestions here and here, but that is not working for me. I have installed loki-stack from this URL.
The question is: how can I access the Loki URL from the Applications account so that it can be configured in Promtail in the Applications account?
Please note both accounts run Loki and Promtail as pods within EKS, not standalone.
Thanks and regards.
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2021-10-25T14:59:20Z"
  labels:
    app: loki
    app.kubernetes.io/managed-by: Helm
    chart: loki-2.5.0
    heritage: Helm
    release: loki
  name: loki
  namespace: monitoring
  resourceVersion: "18279654"
  uid: 7eba14cb-41c9-445d-bedb-4b88647f1ebc
spec:
  clusterIP: 172.20.217.122
  clusterIPs:
  - 172.20.217.122
  ports:
  - name: metrics
    port: 80
    protocol: TCP
    targetPort: 3100
  selector:
    app: loki
    release: loki
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  generation: 14
  name: grafana-vs
  namespace: monitoring
  resourceVersion: "18256422"
  uid: e8969da7-062c-49d6-9152-af8362c08016
spec:
  gateways:
  - my-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /grafana/
    name: grafana-ui
    rewrite:
      uri: /
    route:
    - destination:
        host: prometheus-operator-grafana.monitoring.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /loki
    name: loki-ui
    rewrite:
      uri: /loki
    route:
    - destination:
        host: loki.monitoring.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"monitoring"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*"],"port":{"name":"http","number":80,"protocol":"HTTP"}}]}}
  creationTimestamp: "2021-10-18T12:28:05Z"
  generation: 1
  name: my-gateway
  namespace: monitoring
  resourceVersion: "16618724"
  uid: 9b254a22-958c-4cc4-b426-4e7447c03b87
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internal","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"ingress-alb","namespace":"istio-system"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"istio-ingressgateway","servicePort":80},"path":"/*"}]}}]}}
      kubernetes.io/ingress.class: alb
    finalizers:
    - ingress.k8s.aws/resources
    generation: 1
    name: ingress-alb
    namespace: istio-system
    resourceVersion: "4447931"
    uid: 74b31fba-0f03-41c6-a63f-6a10dee8780c
  spec:
    rules:
    - http:
        paths:
        - backend:
            service:
              name: istio-ingressgateway
              port:
                number: 80
          path: /*
          pathType: ImplementationSpecific
  status:
    loadBalancer:
      ingress:
      - hostname: internal-k8s-istiosys-ingressa-25a256ef4d-1368971909.us-east-1.elb.amazonaws.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
The ingress is associated with an AWS ALB.
I want to access Loki from the ALB URL, like http(s)://my-alb-url/loki
I hope I have provided the required details now.
Let me know. Thanks.
...how can I access loki URL from applications account so that it can be configured in promtail in applications a/c?
You didn't describe what issue you hit when using the external LB above, which should work. In any case, since that method goes through the Internet, the security risk is higher, and there is egress cost to consider given the volume of logging. You can use PrivateLink in this case; see page 16, Shared Services. Your Promtail will use the ENI DNS name as the Loki target.
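On the Promtail side the only change is the client URL; in the promtail config it would look roughly like this (a sketch; the endpoint DNS name is a placeholder for your PrivateLink/VPC endpoint, and /loki/api/v1/push is Loki's default push path):

clients:
  - url: http://<privatelink-endpoint-dns-name>/loki/api/v1/push

Since the loki Service above maps port 80 to container port 3100, pointing at port 80 on the endpoint should be enough.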
I want to create an internal Ingress for my GKE workloads. I want to know which annotation I can use to set a static internal IP address/name in my Ingress.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: "gce-internal"
    ingress.gcp.kubernetes.io/pre-shared-cert: my-cert
    helm.sh/chart: {{ include "devtools.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  backend:
    serviceName: istio-ingressgateway-backend
    servicePort: 443
I understand that it will create an Ingress with an internal IP, but I want to set a static IP that I have already created in a region/subnet. Is it possible to do so? If yes, is there an annotation for it?
EDIT
Now you can create an Ingress resource with an internal IP on GKE by following this documentation:
Cloud.google.com: Kubernetes Engine: Docs: How to: Internal load balance ingress
I'm leaving the part below as an nginx-ingress solution with a Service of type LoadBalancer that has an internal IP address.
There is a workaround for it, which entails using the nginx-ingress controller with an internal LoadBalancer service.
Please take a look at the official documentation:
Cloud.google.com: Kubernetes Engine: Internal Load Balancing - documentation used for workaround
Kubernetes.github.io: Ingress-nginx: Deploy - documentation used for workaround
Below I've included an example of the workaround with an explanation of the steps taken.
Explanation:
It's possible to create an internal LoadBalancer with a static IP
Nginx-ingress uses a LoadBalancer type of Service as an entrypoint
You can create an nginx-ingress with an internal LoadBalancer as described in the above bullet points
Steps:
Download and modify nginx-ingress definition
Run and check if nginx-ingress-controller service has desired static IP address
Deploy example app and test
Download and modify nginx-ingress definition
By default, the nginx-ingress definition from the official site has a Service of type LoadBalancer configured as the entrypoint, and it will get an external IP address. You can edit the Service definition to get an internal one.
Please download this YAML and edit the part responsible for service definition below:
A tip!
nginx-ingress can also be deployed with Helm.
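For completeness, a rough sketch of what the Helm route could look like (the chart values below are my assumption for the ingress-nginx chart of that era; check the chart's values.yaml for your version before relying on them):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --values internal-values.yaml

# internal-values.yaml
controller:
  service:
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
    loadBalancerIP: 10.156.0.99

The rest of the walkthrough below applies the same Service changes by editing the YAML directly.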
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations: # ADD THIS LINE
    cloud.google.com/load-balancer-type: "Internal" # ADD THIS LINE
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 10.156.0.99 # ADD THIS LINE
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
Please take a specific look at this part of the metadata section:
annotations: # ADD THIS LINE
  cloud.google.com/load-balancer-type: "Internal" # ADD THIS LINE
as this part will instruct GCP to provision an internal IP address.
Also please take a look at:
loadBalancerIP: 10.156.0.99 # ADD THIS LINE
as this line will tell GCP to allocate the IP address provided.
Please keep in mind that this address should be compatible with the VPC network that you created your cluster in.
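If you want the internal address itself reserved up front (so nothing else in the subnet grabs it), you can reserve it before deploying, for example (a sketch; the name, region and subnet here are placeholders for your own values):

gcloud compute addresses create nginx-internal-ip \
    --region europe-west3 \
    --subnet default \
    --addresses 10.156.0.99

The reserved address can then be used as the loadBalancerIP value shown above.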
Run and check if nginx-ingress-controller service has desired static IP address
After applying the whole nginx-ingress definition, you should be able to run:
kubectl get svc ingress-nginx-controller -n ingress-nginx
Output of above command:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.60.6.97 10.156.0.99 80:31359/TCP,443:32413/TCP 2m59s
As you can see the EXTERNAL-IP is in fact internal and set to 10.156.0.99.
You should be able to curl this address and get the default-backend of nginx-ingress-controller.
Deploy example app and test
These steps are optional and only show the process of exposing an example app with the mentioned nginx-ingress.
YAML definition of Deployment, Service and Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - name: hello-port
    port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
After applying these resources you should be able to run:
$ curl 10.156.0.99
and be greeted with:
Hello, world!
Version: 2.0.0
Hostname: hello-app-7f46745f74-27gzh
You can use the annotation
kubernetes.io/ingress.regional-static-ip-name: <STATIC_IP_NAME>
https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#static_ip_addressing
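Put together with the gce-internal class from the question above, the relevant metadata would look roughly like this (a sketch; my-internal-ip is a placeholder for a regional internal address you have already reserved):

metadata:
  name: ingress-https
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-ip"

The named address has to be a regional internal address in the same region and VPC as the cluster.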
Goal
I'm trying to set up a
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster two days ago and ran into this problem. Maybe some version change?
This is my ingress manifest file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "my-dev-ingress"
  namespace: "istio-system"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "istio-ingressgateway"
    servicePort: 80
Problem
The health check issued by the Cloud LB failed. The backend service created by the Ingress creates a default /:80 health check.
What I have tried
1) I tried to set the health check generated by the GKE ingress to point to the istio-gateway status port 15020 in the backend config console. The health check then passed for a bit, until the backend config reverted itself to use the original /:80 health check that it created. I even tried to delete the health check it created, and it just creates another one.
2) I also tried using an Istio virtual service to route the health check to port 15020 as shown here, without much success.
3) I also tried just routing everything in the virtual service to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
  - method:
      exact: GET
    uri:
      exact: /
  route:
  - destination:
      host: istio-ingress.gke-system.svc.cluster.local
      port:
        number: 15020
4) Most of the search results I found say that setting a readinessProbe in the deployment should tell the ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks.
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is non-existent, or I have not found it. You have to add an annotation to the istio-ingressgateway service to use a backend-config when using the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the backend-config with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-ingressgateway-config
  namespace: istio-system
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 15021
    type: HTTP
    requestPath: /healthz/ready
With this, the Ingress should automatically change the load balancer health check configuration while pointing to Istio port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web
    networking.gke.io/managed-certificates: "web"
spec:
  rules:
  - host: test.example.com
    http:
      paths:
      - path: "/*"
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
  - test.example.com
  gateways:
  - web
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 8080 # internal service port
        host: "internal-service.service-namespace.svc.cluster.local"
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - test.example.com
You could also set hosts to "*" in the VirtualService and Gateway.
Updated
So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I get to the part where I deploy my actual app, I get no endpoints and the ingress controller has no address associated with it. As seen here:
NAME HOSTS ADDRESS PORTS AGE
testapp-ingress * 80 129m
So, I can't hit it externally. But the test app (the 2048 game) had an address from the ELB associated with the ingress. I thought it might be the subnet tags as suggested here; my subnets weren't tagged the right way, so I tagged them the way suggested in that article. Still no luck.
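For reference, the subnet tags that article (and the ALB ingress docs) generally expect look like this; this is my recollection of the standard tags, so double-check against the current AWS documentation (replace <cluster-name> with your cluster's name):

kubernetes.io/cluster/<cluster-name> = shared
kubernetes.io/role/elb = 1              # on public subnets
kubernetes.io/role/internal-elb = 1     # on private subnets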
This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the alb: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps
This is the alb article I've followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I followed the steps to deploy the sample app 2048 and that works just fine. I've made my configs very similar and it should work. I've followed all of the steps. Here are my old configs, new config below:
deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp-deployment"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
  name: "testapp-service"
  namespace: "testapp-qa"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  type: NodePort
  selector:
    app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: testapp-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "testapp-service"
          servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
Here are some of the logs from the ingress controller>>
E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my service for the ALB ingress.
Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:
E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
Update
I've updated my config. I don't have any more errors, but I'm still unable to hit my endpoints to test if my app is accepting traffic. It might have something to do with Fargate, or something on the AWS side I'm not seeing. Here's my updated config:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
      - image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
        imagePullPolicy: Always
        name: "testapp"
        ports:
        - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
  - port: 80
    targetPort: 9898
    protocol: TCP
    name: http
  type: NodePort
  selector:
    app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
  labels:
    app: testapp
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "testapp"
          servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
In your service, try adding the following annotation:
annotations:
  alb.ingress.kubernetes.io/target-type: ip
And also you'd need to explicitly tell the Ingress resource via the alb.ingress.kubernetes.io/healthcheck-path annotation where/how to perform the health checks for the target group. See the ALB Ingress controller docs for the annotation semantics.
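For example, the Ingress metadata from the updated config above would carry something like this (the /healthcheck path is just the one your app exposes per the question):

annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/healthcheck-path: /healthcheck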