Helm Prometheus custom LoadBalancer configuration on AWS

Hello and thank you in advance!
I have the following issue:
I'm trying to install Prometheus on AWS EKS using Helm, but I want to be able to configure the AWS ELB to be internal and reachable only from inside my VPC (by default it is created as a public LoadBalancer with an FQDN).
When I execute the following:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations{0}="service.beta.kubernetes.io/aws-load-balancer-internal":"0.0.0.0/0"
It creates a standard LoadBalancer service with no annotations included:
$ kubectl describe service/prometheus-server -n=prometheus
Name:                     prometheus-server
Namespace:                prometheus
Labels:                   app=prometheus
                          chart=prometheus-11.7.0
                          component=server
                          heritage=Tiller
                          release=prometheus
Annotations:              <none>
Selector:                 app=prometheus,component=server,release=prometheus
Type:                     LoadBalancer
IP:                       10.100.255.81
I have played around with quotes and other possible syntax variations, but with no luck. Please advise on the proper annotation syntax.

It's kind of tricky, but you can do it like this:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="0.0.0.0/0"
You can see the format and limitations of --set in the Helm docs. For example,
--set nodeSelector."kubernetes\.io/role"=master
becomes:
nodeSelector:
  kubernetes.io/role: master
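If the escaping gets unwieldy, an alternative is to keep the annotation in a values file and pass it with -f instead of --set. A minimal sketch, assuming a file named prometheus-values.yaml (any name works) and the same stable/prometheus value layout used above:
# prometheus-values.yaml
server:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
-f prometheus-values.yaml
This way you don't have to escape the dots in the annotation key on the command line at all.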
✌️

Related

How to set resources config for AWS EKS ALB

I am using Helm to install and configure the ALB ingress controller for EKS Fargate. Everything works as expected, but I want to raise the default CPU and memory resources; the defaults are 0.25 vCPU and 0.5 GB.
I am using this command:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=CLUSTERNAME \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set vpcId=vpc-XXXXXXXX \
--set replicaCount=1
How can I set the resources config using --set resources=??
https://artifacthub.io/packages/helm/aws/aws-load-balancer-controller
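For what it's worth, the chart linked above exposes a standard Kubernetes resources block in its values, so a hedged sketch would be to append the requests/limits via --set (key names assumed from the chart's published values; double-check them on Artifact Hub):
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=CLUSTERNAME \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set vpcId=vpc-XXXXXXXX \
--set replicaCount=1 \
--set resources.requests.cpu=500m \
--set resources.requests.memory=1Gi \
--set resources.limits.cpu=1 \
--set resources.limits.memory=1Gi
On Fargate, the pod size is derived from the sum of the containers' requests (rounded up to a supported CPU/memory combination), so raising the requests is what actually gives the controller more capacity.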

Can't deploy Ingress object on EKS: failed calling webhook vingress.elbv2.k8s.aws: the server could not find the requested resource

I am following this AWS guide: https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/ to set up my Kubernetes cluster behind an ALB.
After installing the AWS ALB controller on my EKS cluster with the steps below:
helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=YOUR_CLUSTER_NAME \
--set serviceAccount.create=false \
--set region=YOUR_REGION_CODE \
--set vpcId=<VPC_ID> \
--set serviceAccount.name=aws-load-balancer-controller \
-n kube-system
I want to deploy my ingress configurations:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,302
    alb.ingress.kubernetes.io/target-type: instance
    kubernetes.io/ingress.class: alb
  name: staging-ingress
  namespace: staging
  finalizers:
    - ingress.k8s.aws/resources
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: 80
            path: /api/v1/price
Everything looks fine. However, when I run the below command to deploy my ingress:
kubectl apply -f ingress.staging.yaml -n staging
I am having below error:
Error from server (InternalError): error when creating "ingress.staging.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": the server could not find the requested resource
There are very few similar issues on Google and none of them helped me. Any idea what the problem is?
K8s version: 1.18
Adding the following security group rule (Terraform) solved it for me:
node_security_group_additional_rules = {
  ingress_allow_access_from_control_plane = {
    type                          = "ingress"
    protocol                      = "tcp"
    from_port                     = 9443
    to_port                       = 9443
    source_cluster_security_group = true
    description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
  }
}
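If you are not using that Terraform module, a rough AWS CLI equivalent is to open port 9443 from the cluster security group to the node security group (the group IDs below are placeholders you would look up for your own cluster):
# allow the EKS control plane to reach the controller's webhook port (9443) on the nodes
aws ec2 authorize-security-group-ingress \
--group-id <node-security-group-id> \
--protocol tcp \
--port 9443 \
--source-group <cluster-security-group-id>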
I would suggest taking a look at the ALB controller logs. The CRDs you are using are for the v1beta1 API group, while the latest chart (aws-load-balancer-controller v2.4.0) registers a webhook for the v1 API group.
If you look at the ALB controller startup logs you should see a line similar to one of the messages below:
v1beta1
{"level":"info","ts":164178.5920634,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1beta1-ingress"}
v1
{"level":"info","ts":164683.0114837,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1-ingress"}
If that is the case, you can fix the problem by either using an earlier version of the controller or installing the newer version of the CRDs.
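To confirm you are in that situation, you can compare what the controller registered against what your installed CRDs serve; a hedged sketch, assuming the default deployment name from the chart:
# which webhook paths the controller registered at startup
kubectl logs -n kube-system deployment/aws-load-balancer-controller | grep "registering webhook"
# which API versions the installed TargetGroupBinding CRD serves
kubectl get crd targetgroupbindings.elbv2.k8s.aws -o jsonpath='{.spec.versions[*].name}'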

GCP Can I Expose Kubernetes Cluster ExternalIP without LoadBalancer?

I would like to maintain a very low-cost Kubernetes cluster on GCP. I am using a single node pool of e1-small instances. The monthly cost of this instance is $4.91, which is fine. The problem is the Ingress I am using to expose my node ports on an external IP: it uses a Google load balancer, which costs around $18. I am therefore mostly paying for a load balancer I really don't need. Is there a way to expose the IP addresses of those instances without the load balancer?
If you expose a NodePort externally, you expose a high port number (30000-32767 by default), not port 80 or 443 as you would want for a website. You need something to proxy the connection, for example a load balancer.
One solution is to use Cloud Run with NGINX as a reverse proxy, for example. In that case you can also use a Serverless VPC Access connector and reach the service through its private IP in your VPC.
Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Use Helm to deploy an NGINX ingress controller
helm install $NGINX ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
That's your load balancer.
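Once the controller is running, you can confirm the Service it created and the address it is reachable on (it is the controller's Service that holds the external IP):
kubectl get pods -n $NAMESPACE
kubectl get svc -n $NAMESPACE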
Now get yourself a certificate manager...
Label the namespace to disable resource validation
kubectl label namespace $NAMESPACE cert-manager.io/disable-validation=true
Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
--namespace $NAMESPACE \
--set installCRDs=true \
--set nodeSelector."kubernetes\.io/os"=linux \
--set webhook.nodeSelector."kubernetes\.io/os"=linux \
--set cainjector.nodeSelector."kubernetes\.io/os"=linux
Next you'll need to add a CA Cluster Issuer...
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: $EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
            podTemplate:
              spec:
                nodeSelector:
                  "kubernetes.io/os": linux
Apply CA Cluster Issuer
kubectl apply -f cluster-issuer.yaml
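Before relying on it, it's worth checking that the issuer has registered with the ACME server (READY should report True):
kubectl get clusterissuer letsencrypt
kubectl describe clusterissuer letsencrypt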
You'll also need an ingress and service yaml.
Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: $INGRESS_NAME
  namespace: $NAMESPACE
  labels:
    app.kubernetes.io/part-of: $NAMESPACE
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - $YOUR_DOMAIN
      secretName: tls-secret
  rules:
    - host: $YOUR_DOMAIN
      http:
        paths:
          - backend:
              serviceName: $SERVICE_NAME
              servicePort: $SERVICE_PORT
Service
apiVersion: v1
kind: Service
metadata:
  name: $SERVICE_NAME
  namespace: $NAMESPACE
  labels:
    app.kubernetes.io/part-of: $NAMESPACE
    app.kubernetes.io/type: service
spec:
  type: ClusterIP
  ports:
    - name: fart
      port: $SERVICE_PORT
      targetPort: $SERVICE_PORT
  selector:
    app.kubernetes.io/name: $DEPLOYMENT_NAME
    app.kubernetes.io/part-of: $NAMESPACE
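After filling in the placeholders, apply both manifests and check that cert-manager has issued the certificate behind tls-secret (the file names here are just examples):
kubectl apply -f ingress.yaml -f service.yaml -n $NAMESPACE
kubectl get certificate -n $NAMESPACE
kubectl get ingress -n $NAMESPACE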
It shouldn't cost anything more to deploy these resources.
If you're that concerned about costs, you can deploy a fully functioning and publicly reachable cluster locally with minikube or MicroK8s.

Assign a Static Public IP to Istio ingress-gateway Loadbalancer service

As you know, installing Istio creates a Kubernetes LoadBalancer with a public IP and uses that public IP as the external IP of the istio-ingress-gateway LoadBalancer service. As the IP is not static, I have created a static public IP in Azure in the same resource group as AKS; I found the resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the installation file with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
I tried to re-install Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However it didn't work; I still got a dynamic IP. So I tried to set my static public IP in the files istio-demo.yaml and istio-demo-auth.yaml by adding the load balancer IP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
And also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
I then re-installed Istio using the Helm command mentioned above. This time it added mystaticPublicIP as one of the external IPs of the istio-ingress-gateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
Does anyone know how to make this work?
I can successfully assign the static public IP to the Istio gateway service with the following command:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
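To confirm the assignment, check that the gateway service's EXTERNAL-IP now shows the static address (the service name below assumes a default Istio install):
kubectl get svc istio-ingressgateway -n istio-system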

How to enable Istio SDS on existing GKE cluster

I have an existing GKE cluster with the Istio addon installed, e.g.:
gcloud beta container clusters create istio-demo \
--addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
I am following this guide to install cert-manager in order to automatically provision TLS certificates from Let's Encrypt. According to the guide, Istio needs SDS enabled, which can be done at installation time:
helm install istio.io/istio \
--name istio \
--namespace istio-system \
--set gateways.istio-ingressgateway.sds.enabled=true
As I already have Istio installed via GKE, how can I enable SDS on the existing cluster? Alternatively, is it possible to use the gcloud CLI to enable SDS at the point of cluster creation?
Managed Istio will, by design, revert any custom configuration and disable SDS again. So, IMHO, it is not a useful scenario. You can enable SDS manually by following this guide, but keep in mind that the configuration will remain active for only 2-3 minutes.
Currently GKE doesn't support enabling SDS when creating a cluster from scratch. For GKE managed Istio, Google is looking to add the ability to enable SDS on GKE clusters, but there is no ETA yet for that release.
However, if you use non-managed (open-source) Istio, the SDS feature is on the Istio roadmap; I think it should be available in version 1.2, but that is not guaranteed.
Even though the default ingress gateway created by Istio on GKE currently doesn't support SDS, you can manually add your own extra ingress gateway.
You could take the manifest of the default istio-ingressgateway deployment and service in your istio-system namespace, modify it to add the SDS container and change the name, and then apply it to your cluster. But that is a little tedious; there is a simpler way:
First download the open-source Helm chart of Istio (choose a version that works with your Istio on GKE version; in my case my Istio on GKE is 1.1.3, I downloaded open-source Istio 1.1.17, and it works):
curl -O https://storage.googleapis.com/istio-release/releases/1.1.17/charts/istio-1.1.17.tgz
# extract under current working directory
tar xzvf istio-1.1.17.tgz
Then render the helm template for only the ingressgateway component:
helm template istio/ --name istio \
--namespace istio-system \
-x charts/gateways/templates/deployment.yaml \
-x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.sds.enabled=true > istio-ingressgateway.yaml
Then manually modify the rendered istio-ingressgateway.yaml file with the following changes (an abridged sketch of the result follows this list):
Change the metadata.name for both the deployment and the service to something else, like istio-ingressgateway-sds
Change the metadata.labels.istio for both the deployment and the service to something else, like ingressgateway-sds
Change the spec.template.metadata.labels for the deployment similarly to ingressgateway-sds
Change the spec.selector.istio for the service to the same value, ingressgateway-sds
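An abridged sketch of what the renamed pieces would look like (not a complete manifest, only the fields touched by the list above, using the example names):
# deployment (abridged)
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  template:
    metadata:
      labels:
        istio: ingressgateway-sds
---
# service (abridged)
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  selector:
    istio: ingressgateway-sds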
Then apply the yaml file to your GKE cluster:
kubectl apply -f istio-ingressgateway.yaml
Holla! You now have your own Istio ingress gateway with SDS enabled, and you can get its load balancer IP with:
kubectl -n istio-system get svc istio-ingressgateway-sds
To let your Gateway use the correct SDS-enabled ingress gateway, you need to set spec.selector.istio to match the label you set. Below is an example of a Gateway resource using a Kubernetes secret as the TLS cert:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-test
spec:
  selector:
    istio: ingressgateway-sds
  servers:
    - hosts:
        - '*.example.com'
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
        - '*.example.com'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: example-com-cert
        mode: SIMPLE
        privateKey: sds
        serverCertificate: sds
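For credentialName to resolve, the secret has to live in istio-system next to the gateway. As far as I recall from the Istio 1.1 SDS docs, the gateway agent expects a generic secret with key and cert entries; a rough sketch (the cert/key file paths are placeholders):
kubectl create -n istio-system secret generic example-com-cert \
--from-file=key=example.com.key \
--from-file=cert=example.com.crt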
Per Carlos' answer, I decided not to use the Istio on GKE addon as there is very limited customization available when using Istio as a managed service.
I created a standard GKE cluster...
gcloud beta container clusters create istio-demo \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
And then manually installed Istio...
Create the namespace:
kubectl create namespace istio-system
Install the Istio CRDs:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
Install Istio using the default configuration profile with my necessary customizations:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
Enable Istio sidecar injection on default namespace
kubectl label namespace default istio-injection=enabled
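To verify the setup, check that the Istio components are running and that the injection label is in place; new pods created in the default namespace should then come up with the istio-proxy sidecar (2/2 containers ready):
kubectl get pods -n istio-system
kubectl get namespace default -L istio-injection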