GCP: Can I expose a Kubernetes cluster's ExternalIP without a LoadBalancer?

I would like to maintain a very low-cost Kubernetes cluster on GCP. I am using a single node pool of e1-small instances. The monthly cost of this instance is $4.91, which is fine. The problem is the Ingress I am using to expose my node ports on an external IP: it uses a Google load balancer, which costs around $18 per month. So most of what I pay is for a load balancer I don't really need. Is there a way to expose the IP addresses of those instances without the load balancer?

If you expose a NodePort externally, you expose a high port number (in the 30000–32767 range by default), not port 80 or 443 for a website. You need to proxy the connection, with a load balancer for example.
One solution is to run NGINX on Cloud Run as a reverse proxy. In that case you can also use a Serverless VPC Access connector and reach the service through its private IP in your VPC.
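A minimal sketch of that setup, assuming you have already built and pushed an NGINX proxy image (the connector name, region, IP range and image path below are placeholders):
# Create a Serverless VPC Access connector in the cluster's network
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=default \
  --range=10.8.0.0/28
# Deploy the NGINX reverse proxy on Cloud Run, attached to the connector
gcloud run deploy nginx-proxy \
  --image=gcr.io/$PROJECT_ID/nginx-proxy \
  --region=us-central1 \
  --vpc-connector=my-connector \
  --allow-unauthenticated
The proxy can then forward requests to the node's internal IP and NodePort inside the VPC, while Cloud Run gives you a public HTTPS endpoint with no fixed monthly charge.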

Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Use Helm to deploy an NGINX ingress controller
helm install $NGINX ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
That's your load balancer.
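Note that on GKE the chart's controller Service defaults to type LoadBalancer, which would still provision a Google load balancer. If the point is to avoid that cost entirely, a sketch (chart value names may differ between chart versions) is to run the controller on the host network and point DNS straight at the node's external IP:
helm install $NGINX ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--set controller.service.type=NodePort \
--set controller.hostNetwork=true \
--set controller.kind=DaemonSet
With hostNetwork the controller binds ports 80 and 443 directly on each node, so you also need a VPC firewall rule allowing that traffic to the nodes.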
Now get yourself a certificate manager...
Label the namespace to disable resource validation
kubectl label namespace $NAMESPACE cert-manager.io/disable-validation=true
Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace $NAMESPACE \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set webhook.nodeSelector."kubernetes\.io/os"=linux \
--set cainjector.nodeSelector."kubernetes\.io/os"=linux
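Before moving on, you can verify that the cert-manager pods (controller, cainjector and webhook) reach the Running state:
kubectl get pods -n $NAMESPACE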
Next you'll need to add a CA Cluster Issuer...
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
Apply CA Cluster Issuer
kubectl apply -f cluster-issuer.yaml
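You can check that the issuer has registered with the ACME server; its READY column should show True:
kubectl get clusterissuer letsencrypt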
You'll also need an ingress and service yaml.
Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: $INGRESS_NAME
  namespace: $NAMESPACE
  labels:
    app.kubernetes.io/part-of: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - $YOUR_DOMAIN
    secretName: tls-secret
  rules:
  - host: $YOUR_DOMAIN
    http:
      paths:
      - backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
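On Kubernetes 1.19+ the v1beta1 Ingress API has been removed, so as a sketch, the same Ingress written against networking.k8s.io/v1 would look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: $INGRESS_NAME
  namespace: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - $YOUR_DOMAIN
    secretName: tls-secret
  rules:
  - host: $YOUR_DOMAIN
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: $SERVICE_NAME
            port:
              number: $SERVICE_PORT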
Service
apiVersion: v1
kind: Service
metadata:
  name: $SERVICE_NAME
  namespace: $NAMESPACE
  labels:
    app.kubernetes.io/part-of: $NAMESPACE
    app.kubernetes.io/type: service
spec:
  type: ClusterIP
  ports:
  - name: fart
    port: $SERVICE_PORT
    targetPort: $SERVICE_PORT
  selector:
    app.kubernetes.io/name: $DEPLOYMENT_NAME
    app.kubernetes.io/part-of: $NAMESPACE
It shouldn't cost anything more to deploy these resources.
If you're really concerned about costs, you can also run a fully functioning, publicly reachable cluster locally with minikube or MicroK8s.

Related

EKS Service deployment not updating?

When I apply a new Service YAML in AWS EKS, it does not delete the old load balancer from the previous build/deploy:
apiVersion: v1
kind: Service
metadata:
  # The name must be equal to KubernetesConnectionConfiguration.serviceName
  name: ignite-service
  # The name must be equal to KubernetesConnectionConfiguration.namespace
  namespace: ignite
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: ignite
spec:
  type: LoadBalancer
  ports:
  - name: rest
    port: 8080
    targetPort: 8080
  - name: thinclients
    port: 10800
    targetPort: 10800
  # Optional - remove 'sessionAffinity' property if the cluster
  # and applications are deployed within Kubernetes
  # sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for pods.
    app: ignite
status:
  loadBalancer: {}
I had previously deployed an ELB; this time it is an NLB, but applying the manifest does not destroy the previous ELB.
Is there a way to have the old load balancer on AWS deleted when applying the k8s manifest?
Not sure what you are using to create the resources (Helm, raw manifests or kubectl), but assuming kubectl for now, you can use:
kubectl replace --force -f <service-filename>.yaml
This deletes the Service and recreates it, so the cloud controller tears down the old load balancer and provisions a new one matching the current annotations.
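An equivalent manual approach (a sketch, assuming the Service manifest above) is to delete the old Service explicitly and re-apply it:
kubectl delete service ignite-service -n ignite
kubectl apply -f <service-filename>.yaml
Either way, the old ELB is removed before the NLB is created.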

How to access ArgoCD server pod running on EKS?

I am creating an EKS cluster using Terraform and then deploying ArgoCD pods on it via Helm charts. Now I want to access the ArgoCD server UI in my browser, but I am unable to reach it. My EKS cluster is in a private subnet and I access it using a VPN.
If anyone knows the process to access ArgoCD in the browser, please reply.
Thanks
You could create an Ingress resource whose backend is your argocd-server Service; it would create a load balancer and you could access the UI via its hostname. You need an alb-ingress-controller (AWS Load Balancer Controller) in the cluster to reconcile this Ingress resource and provision an ALB. Check these out:
aws-alb-controller
argocd-k8s-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate arn>
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
  generation: 1
  name: ingress-name
  namespace: argocd
spec:
  defaultBackend:
    service:
      name: argocd-server
      port:
        number: 80
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          service:
            name: argocd-server
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
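Once the controller has reconciled the Ingress, the ALB's DNS name appears in the ADDRESS column (assuming the manifest above):
kubectl get ingress ingress-name -n argocd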

How to use Ingress Nginx Controller to route traffic to private pods Internally

Problem: I am currently using ingress-nginx in my EKS cluster to route traffic to services that need public access.
My use case: I have services I want to deploy in the same cluster but don't want them to have public access. I only want the pods to communicate with all other services within the cluster. Those pods are meant to be private because they're backend services and only need pod-to-pod communication. How do I modify my Ingress resource for this purpose?
Cluster Architecture: All services are in the private subnets of the cluster while the load-balancer is in the public subnets
Additional note: I am using external-dns to dynamically create the subdomains for the hosted zones. The hosted zone is public
Thanks
Below are my service.yml and ingress.yml for public services. I want to modify these files for private services
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.dev.com
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: myapp
spec:
  tls:
  - hosts:
    - myapp.dev.com
    secretName: myapp-staging
  rules:
  - host: myapp.dev.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: 'myapp'
            port:
              number: 80
With what you already have, the Ingress should work and your Services are effectively private (they are only reachable inside the cluster); only the Ingress itself is exposed. You can update the ConfigMap to enable the PROXY protocol so that client connection information is passed through to the Ingress Controller:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
And then: kubectl apply -f common/nginx-config.yaml
Now you can deploy any app you want to keep private under the name specified (for example, the myapp Service in the YAML you provided); a minimal sketch follows below.
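For a backend that should only be reachable from inside the cluster, a minimal sketch (assuming a hypothetical myapp-internal Deployment) is a plain ClusterIP Service with no Ingress and no external-dns annotation; other pods reach it at myapp-internal.myapp.svc.cluster.local:
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal   # hypothetical backend name
  namespace: myapp
spec:
  type: ClusterIP        # cluster-internal only: no load balancer, no public DNS record
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: myapp-internal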
If you are new to Kubernetes networking, this article, or the official Kubernetes documentation, would be useful for you.
Here you can find other ELB annotations that may be useful for you.

Create Istio Ingress-gateway POD without creating istiod

I am a bit new to Istio and still learning. I have a use case in which Istio is already deployed in the istio-system namespace, but I need to deploy the Istio ingress-gateway Pod in the test-ns namespace using IstioOperator. I am using Istio 1.6.7.
From the Istio docs, it's mentioned to run this command:
istioctl manifest apply --set profile=default --filename=istio-ingress-values.yaml
but this will also create istiod Pods in istio-system, which I do not want since they are already created.
So I ran the command below to create just the ingress gateway Pod, but I can't see any Pods or Services created in test-ns. Kindly help if this is possible.
kubectl apply -f istio-ingress-values.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: test-ns
  name: testoperator
ingressGateways:
- enabled: true
  name: istio-ingressgateway
  namespace: test-ns
  k8s:
    env:
    - name: ISTIO_META_ROUTER_MODE
      value: sni-dnat
    hpaSpec:
      maxReplicas: 5
      metrics:
      - resource:
          name: cpu
          targetAverageUtilization: 80
        type: Resource
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway
    resources: {}
    service:
      ports:
      - name: http2
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443
In Istio it is possible to tune configuration profiles.
As I can see, you are using the default profile, so I will describe how you can tune this configuration profile to create istio-ingressgateway in the test-ns namespace.
We can display the default profile settings by running the istioctl profile dump default command.
First, I saved these default settings in the default_profile_dump.yml file:
# istioctl profile dump default > default_profile_dump.yml
And then I modified this file:
NOTE: I only added one line: namespace: test-ns.
...
ingressGateways:
- enabled: true
  name: istio-ingressgateway
  namespace: test-ns
...
After modifying default settings of the ingressGateways, I applied these new settings:
# istioctl manifest apply -f default_profile_dump.yml
This will install the Istio 1.9.1 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
- Pruning removed resources
Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
Removed Deployment:istio-system:istio-ingressgateway.
Removed Service:istio-system:istio-ingressgateway.
Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
Removed RoleBinding:istio-system:istio-ingressgateway-sds.
Removed Role:istio-system:istio-ingressgateway-sds.
✔ Installation complete
Finally, we can check where istio-ingressgateway was deployed:
# kubectl get pod -A | grep ingressgateway
test-ns istio-ingressgateway-7fc7c7c-r92tw 1/1 Running 0 33s
The istiod Deployment remained intact in the istio-system namespace:
# kubectl get deploy,pods -n istio-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod   1/1     1            1           51m

NAME                          READY   STATUS    RESTARTS   AGE
pod/istiod-64675984c5-xl97n   1/1     Running   0          51m

Enabling SSL on GKE endpoints not working correctly

I created an API on GKE using Cloud Endpoints. It is working fine without HTTPS; you can try it here: API without HTTPS.
I followed the instructions mentioned here: Enabling SSL for Cloud Endpoints. After setting up everything mentioned on that page, I'm able to access my endpoints with HTTPS, but with a warning:
Your connection is not private - Back to Safety (Chrome)
Check it here: API with HTTPS.
Can you please let me know what I'm missing?
Update
I'm using Google-managed SSL certificates for cloud endpoints in GKE.
I followed the steps mentioned in this doc but was not able to successfully add the SSL certificate.
When I go to my Cloud Console I see:
Some backend services are in UNKNOWN state
Here are my deployment YAMLs:
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: quran-grpc
spec:
  ports:
  - port: 81
    targetPort: 9000
    protocol: TCP
    name: rpc
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: quran-grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quran-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quran-grpc
  template:
    metadata:
      labels:
        app: quran-grpc
    spec:
      volumes:
      - name: nginx-ssl
        secret:
          secretName: nginx-ssl
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8080",
          "--ssl_port=443",
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=quran.endpoints.utopian-button-227405.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
        - containerPort: 8080
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: nginx-ssl
          readOnly: true
      - name: python-grpc-quran
        image: gcr.io/utopian-button-227405/python-grpc-quran:5.0
        ports:
        - containerPort: 50051
ssl-cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: quran-ssl
spec:
  domains:
  - quran.endpoints.utopian-button-227405.cloud.goog
---
apiVersion: v1
kind: Service
metadata:
  name: quran-ingress-svc
spec:
  selector:
    name: quran-ingress-svc
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quran-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 34.71.56.199
    networking.gke.io/managed-certificates: quran-ssl
spec:
  backend:
    serviceName: quran-ingress-svc
    servicePort: 80
Can you please let me know what I'm doing wrong?
Your SSL configuration is working fine, and the reason you are receiving this error is because you are using a self-signed certificate.
A self-signed certificate is a certificate that is not signed by a certificate authority (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide HTTPS services, people who visit that website will see a warning in their browser.
To solve this issue you should buy a valid certificate from a trusted CA, or use Let's Encrypt, which will give you a certificate valid for 90 days that you can renew afterwards.
If you decide to buy an SSL certificate, you can follow the document you described to create a Kubernetes Secret and use it in your Ingress, simple as that.
But if you don't want to buy a certificate, you could install cert-manager in your cluster; it will help you generate valid certificates using Let's Encrypt.
Here is an example of how to use cert-manager + Let's Encrypt solution to generate valid SSL certificates:
Using cert-manager with Let's Encrypt
cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.
I'm assuming you already have the NGINX ingress controller installed and working.
Pre-requisites:
- NGINX Ingress installed and working
- HELM 3.0 installed and working
cert-manager install
Note: When running on GKE (Google Kubernetes Engine), you may encounter a 'permission denied' error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, so you should 'elevate' your own privileges to those of a 'cluster-admin' before running the install commands. If you have already run them, run them again after elevating your permissions.
Follow the official docs to install, or just use Helm 3.0 with the following commands:
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml
Create a ClusterIssuer for Let's Encrypt: save the content below in a new file called letsencrypt-production.yaml:
Note: Replace <EMAIL-ADDRESS> with your valid email.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  labels:
    name: letsencrypt-prod
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL-ADDRESS>
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: 'https://acme-v02.api.letsencrypt.org/directory'
Apply the configuration with:
kubectl apply -f letsencrypt-production.yaml
Install cert-manager with Let's Encrypt as a default CA:
helm install cert-manager \
--namespace cert-manager --version v0.8.1 jetstack/cert-manager \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer
Verify the installation:
$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
Using cert-manager
Apply this annotation in your Ingress spec:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
After applying it, cert-manager will generate the TLS certificate for the domain configured under host, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
  - host: myapp.domain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: my-app
          servicePort: 80
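To confirm issuance afterwards, you can inspect the Certificate resources cert-manager creates (a generic check; the exact resource name depends on your Ingress and TLS secret). The READY column should turn True once the ACME challenge completes:
kubectl get certificate -A
kubectl describe certificate <certificate-name> -n <namespace>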