How to do DNS pointing to aws eks - amazon-web-services

I am running AWS EKS. I am trying to install the sample Nginx app and point a subdomain to it. I have hooked AWS EKS to an existing Rancher portal, and I am able to install my Nginx app using Rancher. It has a service file and an ingress file. Here is my Nginx Helm chart: https://github.com/clayrisser/charts/tree/master/alpha/nginx
Going through the many docs online, I have seen that AWS EKS requires the AWS Load Balancer Controller, which automatically creates a load balancer of the type we specify through our ingress and service files, and that we then need to alias-point our domain to it. How can we alias-point if our domain is a root domain?
How can we eliminate creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, so all apps can use this LB?
Is there a way to have a fixed IP for the ELB instead of the generated one?
Is there a better way of doing this?

How can we eliminate creating LBs for each app? Is there a way to create and use only one LB for the whole cluster, so all apps can use this LB?
Yes, you can install an ingress controller in your cluster with a Service of type LoadBalancer. This will create a single load balancer in your account. Some popular ones are Nginx, Traefik, Contour, etc.
The Ingress resources you create can then use this ingress controller via the kubernetes.io/ingress.class annotation. Make sure your app's Service type is not LoadBalancer, as that would create an additional LB per app.
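For illustration, a minimal sketch of what that looks like (service name, host, and ports are placeholders, and it assumes the nginx ingress controller is the one installed):

# Hypothetical app Service: type ClusterIP (the default), not LoadBalancer,
# so no extra ELB is created for the app itself.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: ClusterIP
  selector:
    app: my-nginx
  ports:
    - port: 80
      targetPort: 80
---
# Ingress routed through the shared ingress controller via the
# kubernetes.io/ingress.class annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-nginx
                port:
                  number: 80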
Is there a way to have a fixed IP for the ELB instead of the generated one?
Yes, some cloud providers allow you to specify loadBalancerIP; in those cases the load balancer is created with the user-specified IP (note that on AWS this field is generally not honored by the in-tree provider, and a truly static IP usually means attaching Elastic IPs to an NLB via annotations instead). The Service would look something like this:
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx
  ...
But as you're looking for a single LB, you should probably use the loadBalancerIP option on the ingress controller's Service. For example, the nginx ingress controller Helm chart exposes this option. The configuration would look something like:
values:
  controller:
    service:
      loadBalancerIP: "12.234.162.41"
https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml#L271

Related

Kubernetes deployment deletion logs

I have my application running in an EKS cluster. I have exposed the application using an Ingress with the ALB Load Balancer Controller. The ALB created by the controller was deleted recently; how can I find out when it got deleted?
If you have configured the ALB ingress controller to dump access logs to S3, that's a place to start, and the ALB access-logging guide is a good starting point for understanding how it can be configured.
Here is a pattern of an annotation for the ALB ingress controller that you could use for searching:
alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
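For context, that annotation goes on the Ingress object itself; a rough sketch of where it sits (bucket, prefix, and service names are placeholders) could look like:

# Hypothetical Ingress for the ALB ingress controller with S3 access logging
# enabled via load-balancer-attributes.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

With those logs in S3, the timestamp of the last log object gives a rough upper bound on when the ALB stopped serving traffic.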

k8s service annotations not working on AWS LB

I am running a cluster in EKS with Kubernetes 1.21.5.
I know that by default Kubernetes has a cloud controller manager which can be used to create load balancers, and that by default it will create a classic LB in AWS.
I realize CLBs are going away and I should use an NLB or ALB and install the AWS Load Balancer Controller instead, but I want to work out why my annotations don't work.
What I am trying to do is set up a TLS listener using an ACM certificate, because by default everything is set up as plain TCP.
Here are my annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account>:certificate/<id>
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: <port>
I have followed the Kubernetes docs here, which specify which annotations to use: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
And I have checked in the Kubernetes code that these annotations are present:
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L162
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L167
https://github.com/kubernetes/kubernetes/blob/v1.21.5/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L181
When I create my Service with these annotations, the Service in Kubernetes stays pending.
Can anyone tell me why it won't work, or give me any insight?
What I have been doing is manually configuring the LB after it's created, but I want to get away from doing that.
@congbaoguier:
Thanks for your advice to look at the logs; I was being a complete dummy. After enabling logging on the control plane I was able to see that there was an issue with my ACM ARN. Weirdly, I have no idea where I got that ARN from; when I checked it in ACM it was wrong.
After updating the ARN it now works, so thanks for the push to use my brain again :P
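For reference, a minimal sketch of a Service carrying those annotations with the legacy in-tree AWS provider (name, selector, and ports are placeholders; as the resolution above shows, the ARN must point at a real certificate in ACM in the cluster's region):

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Placeholder ARN: must reference an existing ACM certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<region>:<account>:certificate/<id>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
    # Assumed HTTPS port; use whichever port should terminate TLS on the LB.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: https
      port: 443
      targetPort: 8443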

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file and applied as a custom resource. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
On the question of the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, so the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running and the other pod tries to connect to the DB, no connection can be established. An example of the same failure mode: an application running in a Kubernetes cron job does not connect to a database in the same Kubernetes cluster.
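If that race is indeed the cause, one mitigation worth checking (my assumption, not part of the original answer, and it depends on your Istio version) is Istio's holdApplicationUntilProxyStarts setting, applied per workload:

# Hypothetical Deployment snippet: holdApplicationUntilProxyStarts delays the
# application container until the Envoy sidecar is ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-gateway
spec:
  selector:
    matchLabels:
      app: tyk-gateway
  template:
    metadata:
      labels:
        app: tyk-gateway
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
    spec:
      containers:
        - name: gateway
          image: tykio/tyk-gateway:latest  # placeholder image/tag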
On the question about virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific, real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the hostname, refer to the Istio virtual service documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
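As a rough sketch of how that maps to the Tyk example (the namespace, port, and exact Service name are assumptions taken from the labels shown in the question; the host is the Kubernetes Service's DNS name, not the pod's "app" label as such):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk          # assumed namespace
spec:
  hosts:
    - dashboard-svc-tyk-pro.tyk.svc.cluster.local
  http:
    - route:
        - destination:
            host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
            port:
              number: 3000   # assumed dashboard port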
For adding Istio on GKE to an existing cluster, please refer to the Istio on GKE documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, at least a 4-node cluster with the 2 vCPU machine type is suggested. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio, as mentioned in the Istio documentation.
You can uninstall the add-on following the documentation, which includes shifting traffic away from the Istio ingress gateway. Please take a look at that doc for more details on installing and uninstalling Istio on GKE.
Also see the document on installing Istio on GKE, which includes installing it on an existing cluster to quickly evaluate Istio.

Enabling CDN for a Kubernetes backend through BackendConfig doesn't allow custom host and path rules

Not able to add custom path rules to the Google CDN load balancer.
Despite some minor issues, like the address flapping between the custom ingress controller IP and the reserved CDN IP, we are implementing CDN for our GKE-hosted app following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/cdn-backendconfig
Almost everything is fine, but when trying to add some path rules, via the Kubernetes manifest or the Google load balancer UI, they have no effect at all; in fact, in the UI case, the rules disappear after 2 minutes...
Any thoughts?
Try using "kubectl replace" when dealing with ingress manifest. Google Cloud does not allow updates to ingress after it is created. So in Kubernetes it might look like you make changes but they will not get applied in Google Cloud.
Using kubectl describe, in the Events section, I found this warning:
Warning Translate 114s (x32 over 48m) loadbalancer-controller error while evaluating the ingress spec: service "xxx-staging/statics-bucket" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
So this is the problem; I will try to change this and post the resolution here.
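A minimal sketch of the change that warning implies (the Service and namespace names come from the event message; the BackendConfig name, selector, and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: statics-bucket
  namespace: xxx-staging
  annotations:
    # Hypothetical BackendConfig reference enabling CDN for this backend.
    cloud.google.com/backend-config: '{"default": "statics-bucket-backendconfig"}'
spec:
  type: NodePort          # GCE ingress requires NodePort or LoadBalancer
  selector:
    app: statics-bucket
  ports:
    - port: 80
      targetPort: 8080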

kubernetes on aws: Exposing multiple domain names (ingress vs ELB)

I am experimenting with a Kubernetes cluster on AWS.
At the end of the day, I want to expose 2 urls:
production.somesite.com
staging.somesite.com
When exposing one URL, things (at least in the cloud landscape) seem to be easy.
You make the Service of type LoadBalancer --> AWS provisions an ELB --> you assign an alias A record (e.g. whatever.somesite.com) to the ELB's DNS name, and boom, there is your service, publicly available via the hostname you like.
I assume one easy (and I guess not best-practice-wise) way of going about this is to expose 2 ELBs.
Is Ingress the (good) alternative?
If so, what is the Route53 record I should create?
For what it's worth (and in case this may be a dealbreaker for Ingress):
production.somesite.com will be publicly available
staging.somesite.com will have restricted access
Ingress is for sure one possible solution.
You need to deploy an ingress controller in your cluster (for instance https://github.com/kubernetes/ingress-nginx), then expose it with a Service of type LoadBalancer, as you did previously.
In Route 53, you need to point any domain names you want served by your ingress controller to the ELB's DNS name, exactly as you did previously.
The last thing you need to do is create an Ingress resource for every domain you want your ingress controller to be aware of (more on this here: https://kubernetes.io/docs/concepts/services-networking/ingress/).
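As a sketch (backend service names are placeholders, and the whitelist-source-range annotation is just one possible way to restrict access to staging), the two Ingress resources might look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: production.somesite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: production-app
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # Assumed CIDR: restricts staging to an internal range.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
spec:
  rules:
    - host: staging.somesite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-app
                port:
                  number: 80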
That being said, if you plan to have only 2 public URLs in your cluster, I'd use 2 ELBs. An ingress controller is another component to be maintained and monitored in your cluster, so take this into account when evaluating the tradeoffs.