I am creating an EKS cluster with Terraform and then deploying ArgoCD on it via a Helm chart. Now I want to access the ArgoCD server UI in my browser, but I am unable to. My EKS cluster is in a private subnet and I access it over a VPN.
If anyone knows how to access the ArgoCD UI in a browser, please reply.
Thanks
You could create an Ingress resource whose backend is your argocd-server Service; it will provision a load balancer, and you can access the UI via its hostname. You need the AWS Load Balancer Controller installed to reconcile this Ingress and provision an ALB. Check these out:
aws-alb-controller
argocd-k8s-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  namespace: argocd
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate arn>
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  defaultBackend:
    service:
      name: argocd-server
      port:
        number: 80
  rules:
    - host: hostname
      http:
        paths:
          - backend:
              service:
                name: argocd-server
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
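Since your cluster sits in a private subnet and you reach it over a VPN, a hedged variant: switching the scheme annotation to internal keeps the ALB reachable only from inside the VPC/VPN. This is a sketch; it assumes your private subnets carry the kubernetes.io/role/internal-elb tag that the controller uses for subnet discovery:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  namespace: argocd
  annotations:
    # internal: the ALB gets private IPs, so it is reachable over the VPN only
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80,"HTTPS": 443}]'
spec:
  defaultBackend:
    service:
      name: argocd-server
      port:
        number: 80
```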
I am trying to migrate my CLB to an ALB. I know there is a direct migration option in the AWS load balancer console, but I don't want to use that. I have a Service manifest that deploys a Classic Load Balancer on EKS via kubectl.
apiVersion: v1
kind: Service
metadata:
  annotations: {service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600',
    service.beta.kubernetes.io/aws-load-balancer-type: classic}
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: LoadBalancer
I want to convert it to an ALB. I tried the following approach, but it did not work.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=app
spec:
  rules:
  - host: "*.amazonaws.com"
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
No load balancer was created. When I ran kubectl get ingress, the Ingress was shown but it had no address. What am I doing wrong here?
Your Ingress file seems correct.
For an ALB to be provisioned automatically from an Ingress, you need to install the AWS Load Balancer Controller, which manages AWS Elastic Load Balancers for a Kubernetes cluster.
You can follow this and then verify that it is installed correctly:
kubectl get deployment -n kube-system aws-load-balancer-controller
Then apply:
kubectl apply -f service-ingress.yaml
and watch the controller logs to verify that your ALB, target groups, etc. are created:
kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
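With controller v2.x you can also select the controller through spec.ingressClassName instead of the deprecated kubernetes.io/ingress.class annotation; a sketch reusing the names from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb  # matches the IngressClass installed with the controller
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
```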
I want to deploy an Application Load Balancer for Traefik running in Kubernetes, so I tried the following:
kind: Service
apiVersion: v1
metadata:
  name: traefik-application-elb
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
    service.beta.kubernetes.io/aws-load-balancer-name: "eks-traefik-application-elb"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "App=traefik,Env=dev"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
But an internet-facing Classic Load Balancer was created instead.
I also tried service.beta.kubernetes.io/aws-load-balancer-type: "alb", but nothing changed.
What is wrong here?
From the docs: https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html
When you create a Kubernetes Service of type LoadBalancer, the AWS cloud provider load balancer controller creates AWS Classic Load Balancers by default, but can also create AWS Network Load Balancers.
ALB is not supported as a Kubernetes LoadBalancer Service type; you can specify an NLB if desired.
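As a sketch of the NLB route (assuming the cloud provider load balancer controller from the docs above is handling Services), the type annotation selects a Network Load Balancer instead of the default Classic ELB; the Service name here is hypothetical:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: traefik-application-nlb  # hypothetical name
  namespace: kube-system
  annotations:
    # "nlb" provisions a Network Load Balancer instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
```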
However, you can use an ALB as an ingress controller with a deployment like this - https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: search-app  # object names must be lowercase RFC 1123 labels
  annotations:
    # share a single ALB among all Ingresses in the search-app-ingress group
    alb.ingress.kubernetes.io/group.name: search-app-ingress
spec:
  defaultBackend:
    service:
      name: search-svc  # route traffic to the service search-svc
      port:
        number: 80
Problem: I am currently using ingress-nginx in my EKS cluster to route traffic to services that need public access.
My use case: I have services I want to deploy in the same cluster but don't want them to have public access. I only want the pods to communicate with other services within the cluster. Those pods should be private because they are backend services and only need pod-to-pod communication. How do I modify my Ingress resource for this purpose?
Cluster Architecture: All services are in the private subnets of the cluster while the load-balancer is in the public subnets
Additional note: I am using external-dns to dynamically create the subdomains for the hosted zones. The hosted zone is public
Thanks
Below are my service.yml and ingress.yml for the public services. I want to modify these files for private services.
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.dev.com
spec:
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: myapp
spec:
  tls:
    - hosts:
        - myapp.dev.com
      secretName: myapp-staging
  rules:
    - host: myapp.dev.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: 'myapp'
                port:
                  number: 80
From what you have, the Ingress should already work, and your Services are already private (in a public cloud cluster set up like this, everything except the Ingress itself is only reachable from inside the cluster). You can update the ConfigMap to use the PROXY protocol so that proxy information is passed to the Ingress Controller:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
And then: kubectl apply -f common/nginx-config.yaml
Now you can deploy any app that you want to keep private with the name specified (for example, the myapp Service in the yaml you provided).
If you are new to Kubernetes networking, this article or the official Kubernetes documentation would be useful for you.
Here you can find other ELB annotations that may be useful for you.
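For backend Services that only need pod-to-pod traffic, the simplest sketch is a plain ClusterIP Service with no Ingress and no external-dns annotation at all; it is resolvable and reachable only from inside the cluster. The names below are hypothetical, modeled on the myapp manifests in the question:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-backend   # hypothetical private backend service
  namespace: myapp
spec:
  type: ClusterIP       # the default; reachable only from inside the cluster
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: myapp-backend
```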
I'm trying to make sense of an example Kubernetes YAML config file that I am trying to customize:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/security-groups: my-sec-group
    app.kubernetes.io/name: my-alb-ingress-web-server
    app.kubernetes.io/component: my-alb-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-web-server
          servicePort: 8080
The documentation for this example claims it's for creating an "Ingress", a K8s object that manages inbound traffic to a service or pod.
This particular Ingress resource uses an AWS ALB (Application Load Balancer), and I need to adapt it to create an Ingress resource in GCP/GKE.
I've been Googling the Kubernetes documentation high and low, and although I found the kubernetes.io/ingress.class docs, I don't see where "alb" is defined as a valid value for this property. I'm asking because I need to find the correct kubernetes.io/ingress.class value for GCP/GKE, and I assume that if I can find the K8s/AWS Ingress documentation, I should be able to find the K8s/GCP Ingress documentation.
I'm assuming K8s builds clients for AWS, GCP, Azure, etc. into kubectl for connecting to these clouds/providers?
So I ask: how does the above configuration tell K8s that we are creating an AWS Ingress (as opposed to an Azure Ingress, GCP Ingress, etc.), and where is the documentation for this?
The documentation you're looking for is:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
An example of an ingress resource :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-front-api
  namespace: example
  annotations:
    networking.gke.io/managed-certificates: "front.example.com, api.example.com"
    kubernetes.io/ingress.global-static-ip-name: "prod-ingress-static-ip"
spec:
  rules:
    - host: front.example.com
      http:
        paths:
          - backend:
              service:
                name: front
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
    - host: api.example.com
      http:
        paths:
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
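To answer the ingress-class part of the question directly: on GKE the built-in controller is selected with the gce class (external HTTP(S) load balancer) or gce-internal (internal load balancer), analogous to alb on EKS. A minimal sketch reusing the names from the question:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    # "gce" = GKE external HTTP(S) load balancer;
    # use "gce-internal" for an internal one
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: my-web-server
      port:
        number: 8080
```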
I am new to Kubernetes, so apologies in advance for any silly questions and mistakes. I am trying to set up external access to ArgoCD through an Ingress. My setup is an AWS EKS cluster. I set up the ALB controller following the guide here, and the external-dns service as described here. I also followed the verification steps in that guide and confirmed that the DNS record got created and that I could access the foo service.
For argoCD I installed the manifests via
kubectl create namespace argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -n argocd
The ArgoCD docs mention adding a service to split up HTTP and gRPC, and an ingress setup here. I followed that and installed those as well:
apiVersion: v1
kind: Service
metadata:
  name: argogrpc
  namespace: argocd
  labels:
    app: argogrpc
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    external-dns.alpha.kubernetes.io/hostname: argocd.<mydomain.com>
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: ClusterIP
apiVersion: networking.k8s.io/v1  # use extensions/v1beta1 for Kubernetes 1.18 and older
kind: Ingress
metadata:
  name: argocd
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
spec:
  rules:
    - host: argocd.<mydomain.com>
      http:
        paths:
          - backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
          - backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.<mydomain.com>
The definitions apply successfully, but I don't see the DNS record created, nor any external IP listed. Am I missing any steps, or is something misconfigured here? Thanks in advance!
The Service type needs to be NodePort: the ALB created by the AWS Load Balancer Controller registers targets in instance mode by default, so every backend Service referenced by the Ingress must be of type NodePort, and your argogrpc Service is ClusterIP.
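A sketch of that fix applied to the argogrpc Service from the question (alternatively, keep ClusterIP and add the alb.ingress.kubernetes.io/target-type: ip annotation to the Ingress so the ALB targets pod IPs directly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: argogrpc
  namespace: argocd
  labels:
    app: argogrpc
spec:
  type: NodePort  # instance-mode ALB targets require NodePort Services
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
```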