Assign a static public IP to the Istio ingress-gateway LoadBalancer service

As you know, installing Istio creates a Kubernetes LoadBalancer service for the ingress gateway, and the cloud assigns it a dynamic public IP as the external IP of istio-ingressgateway. Since that IP is not static, I created a static public IP in Azure in the same resource group as the AKS nodes; I found the node resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
I followed this guide: https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the Istio release with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
Then I tried to reinstall Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However it didn't work; the service still got a dynamic IP. So I tried to set my static public IP in the files istio-demo.yaml and istio-demo-auth.yaml by adding loadBalancerIP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
and also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
Then I reinstalled Istio using the helm command mentioned above. This time it added mystaticPublicIP as one of the external IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
Does anyone know how to make this work?

I can successfully assign the static public IP to the Istio ingress gateway service with the following command:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
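For completeness, a minimal sketch of the full flow on AKS (the IP name, resource group and placeholder value below are mine, not from the question): create the static IP in the cluster's node resource group, pass its address to the chart, and verify that the service picked it up.
# create a static public IP in the AKS node resource group (placeholder names)
az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name istio-ingress-ip \
  --allocation-method Static \
  --query publicIp.ipAddress -o tsv
# pass the resulting address to the chart (note: no space after '=')
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set gateways.istio-ingressgateway.loadBalancerIP=<the-static-ip> | kubectl apply -f -
# verify the external IP of the ingress gateway service
kubectl get svc istio-ingressgateway -n istio-system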

Helm prometheus custom loadbalancer configuration on AWS

Hello and thank you in advance!
I have the following issue:
I'm trying to install Prometheus on AWS EKS using Helm, but I want to be able to configure the AWS ELB to be internal and available only from inside my VPC (by default it is created as a public LoadBalancer with an FQDN).
When I execute following:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations{0}="service.beta.kubernetes.io/aws-load-balancer-internal":"0.0.0.0/0"
It creates a standard LoadBalancer service with no annotations included:
$ kubectl describe service/prometheus-server -n=prometheus
Name: prometheus-server
Namespace: prometheus
Labels: app=prometheus
chart=prometheus-11.7.0
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Selector: app=prometheus,component=server,release=prometheus
Type: LoadBalancer
IP: 10.100.255.81
I was playing around with quotes and other possible syntax variations but no luck. Please advise on the proper annotation usage.
It's kind of tricky, but you can do it like this:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="0.0.0.0/0"
You can see the format and limitations of --set in the Helm docs. For example,
--set nodeSelector."kubernetes\.io/role"=master
becomes:
nodeSelector:
  kubernetes.io/role: master
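Applied to the Prometheus case above, the escaped flag should render into the chart values roughly like this (a sketch assuming the stable/prometheus chart's server.service.annotations key; you could also put it in a values file and pass it with -f instead of using --set):
server:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"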
✌️

ALB Ingress Controller on AWS

I'm trying to set up an ALB Ingress Controller on AWS EKS, exactly as the following tutorial describes: ingress_controller_alb, but I cannot get an ingress address.
Indeed, if I run kubectl get ingress/2048-ingress -n 2048-game, even after 10 minutes I get no address. Any idea?
The problem may be the version of the ingress controller you are using: you are running the old 1.0.0 release, while the current one is 1.1.3.
I advise you to take a look at this documentation: ingress-controller-alb.
1. Download sample ALB ingress controller manifest
wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/alb-ingress-controller.yaml
2. Configure the ALB ingress controller manifest
At minimum, edit the following variables:
--cluster-name=devCluster: name of the cluster. AWS resources will be tagged with kubernetes.io/cluster/devCluster:owned
If ec2metadata is unavailable from the controller pod, edit the following variables:
--aws-vpc-id=vpc-xxxxxx: vpc ID of the cluster.
--aws-region=us-west-1: AWS region of the cluster.
3. Deploy the RBAC roles manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/rbac-role.yaml
4. Deploy the ALB ingress controller manifest
kubectl apply -f alb-ingress-controller.yaml
5. Verify the deployment was successful and the controller started
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
You should see output similar to the following:
-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release: 1.0.0
Build: git-7bc1850b
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
Then you can deploy the sample application.
Execute the following commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-service.yaml
Deploy an Ingress resource for the 2048 game:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-ingress.yaml
After a few seconds, verify that the Ingress resource was created:
kubectl get ingress/2048-ingress -n 2048-game
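If the controller is healthy, the ADDRESS column of that command should eventually show the ALB's DNS name. A quick way to keep an eye on it (generic kubectl usage, not part of the tutorial):
kubectl get ingress 2048-ingress -n 2048-game -w
# or pull just the hostname once it has been assigned
kubectl get ingress 2048-ingress -n 2048-game -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'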
I was struggling with the same issue, but finally got it working after following @MaggieO's steps above. A couple of things to consider:
Add public and private subnets to your EKS cluster. Make sure your public subnets are tagged with "kubernetes.io/role/elb":"1" (see the tagging sketch below). If creating a managed node group, select only private subnets for placement of your worker nodes.
Make sure the IAM role for your worker nodes has the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, and the custom policy defined here: https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json.
Examine your ingress controller logs; they are helpful:
kubectl logs -n kube-system [name of your ingress controller]
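For the subnet tagging mentioned above, a minimal sketch with the AWS CLI (the subnet ID is a placeholder, and the cluster tag assumes the devCluster name from the controller manifest):
# tag a public subnet so the ALB ingress controller can discover it
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/elb,Value=1 Key=kubernetes.io/cluster/devCluster,Value=shared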
Thank you for your replies!
I think the problem is in the cluster creation, which results in a cluster without EC2 instances. I create the cluster with eksctl create cluster -f cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: eu-central-1
  version: "1.14"
vpc:
  id: vpc-50b17738
  subnets:
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }
nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 5
    privateNetworking: true
I tried with node groups and with managed node groups, but I get the following timeout error:
...
[ℹ] nodegroup "ng-1-workers" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-1-workers"
Error: timed out (after 25m0s) waiting for at least 2 nodes to join the cluster and become ready in "ng-1-workers"
If you succeed in creating the controller, you will find it running:
$ kubectl get po -n kube-system | grep alb
alb-ingress-controller-669b958f64-p69fw 1/1 Running 0 3m7s
and its logs:
$ kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o alb-ingress[a-zA-Z0-9-]+)
-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release: v1.1.8
Build: git-ec387ad1
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
W0720 13:31:21.242868 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.

JupyterHub proxy-public svc has no external IP (stuck in <pending>)

I am using Helm to deploy JupyterHub (version 0.8.2) to Kubernetes (AWS managed Kubernetes, "EKS"). I have a Helm config that describes the proxy-public service with an AWS Elastic Load Balancer:
proxy:
  secretToken: ""
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '1801'
Problem: When I deploy JupyterHub to EKS via helm:
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml
The proxy-public svc never gets an external IP. It is stuck in the pending state:
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 172.20.241.23 <none> 8081/TCP 15m
proxy-api ClusterIP 172.20.170.189 <none> 8001/TCP 15m
proxy-public LoadBalancer 172.20.72.196 <pending> 80:31958/TCP,443:30470/TCP 15m
I did kubectl describe svc proxy-public and kubectl get events and there does not appear to be anything out of the ordinary. No errors.
The problem turned out to be the fact that I had mistakenly put the kubernetes cluster (and control plane) in private subnets only, thus making it impossible for the ELB to get an external IP.
You will need another annotation like this one in order to use an internal AWS classic load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
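In the config.yaml from the question, that would sit alongside the existing annotations, roughly like this (a sketch; only the internal annotation is new):
proxy:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      # ...existing ssl-cert / backend-protocol / ssl-ports / idle-timeout annotations...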
Deploying JupyterHub on Kubernetes can sometimes be overkill if all you want is a JupyterHub that is accessible over the internet for you or your team. Instead of the complicated Kubernetes setup, you can set up a VM in AWS or any other cloud and have JupyterHub installed and running as a service.
In fact, there are prebuilt VM images available on AWS, GCP and Azure that can be used to spin up a JupyterHub VM that is accessible on a public IP and supports single-user or multi-user sessions in just a few clicks. Details are below if you want to try it out:
Setup on GCP
Setup on AWS
Setup on Azure

How to enable Istio SDS on existing GKE cluster

I have an existing GKE cluster with the Istio addon installed, e.g.:
gcloud beta container clusters create istio-demo \
--addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
I am following this guide to install cert-manager in order to automatically provision TLS certificates from Let's Encrypt. According to the guide, Istio needs SDS enabled which can be done at the point of installation:
helm install istio.io/istio \
--name istio \
--namespace istio-system \
--set gateways.istio-ingressgateway.sds.enabled=true
As I already have Istio installed via GKE, how can I enable SDS on the existing cluster? Alternatively, is it possible to use the gcloud CLI to enable SDS at the point of cluster creation?
Managed Istio will, by design, revert any custom configuration and disable SDS again, so IMHO it is not a useful scenario. You can enable SDS manually by following this guide, but keep in mind that the configuration will only remain active for 2-3 minutes.
Currently GKE doesn't support enabling SDS when creating a cluster from scratch. For GKE managed Istio, Google is looking to add the ability to enable SDS on GKE clusters, but there is no ETA yet for that release.
However, if you use non-managed (open-source) Istio, the SDS feature is on the Istio roadmap; I think it should be available in version 1.2, but that is not guaranteed.
Even though currently the default ingress gateway created by Istio on GKE doesn't support SDS, you can add your own extra ingress gateway manually.
You could get the manifests of the default istio-ingressgateway deployment and service in your istio-system namespace, modify them to add the SDS container and change the names, and then apply them to your cluster. But that's a little tedious; there's a simpler way to do it:
First, download the open-source Istio helm chart (choose a version that works with your Istio on GKE version; in my case Istio on GKE is 1.1.3 and open-source Istio 1.1.17 works):
curl -O https://storage.googleapis.com/istio-release/releases/1.1.17/charts/istio-1.1.17.tgz
# extract under current working directory
tar xzvf istio-1.1.17.tgz
Then render the helm template for only the ingressgateway component:
helm template istio/ --name istio \
--namespace istio-system \
-x charts/gateways/templates/deployment.yaml \
-x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.sds.enabled=true > istio-ingressgateway.yaml
Then manually modify the rendered istio-ingressgateway.yaml file as follows (a sketch of the result comes right after this list):
Change the metadata.name for both the deployment and service to something else like istio-ingressgateway-sds
Change the metadata.labels.istio for both the deployment and service to something else like ingressgateway-sds
Change the spec.template.metadata.labels.istio for the deployment similarly to ingressgateway-sds
Change the spec.selector.istio for the service to the same value, ingressgateway-sds
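A sketch of the renamed and relabelled fields (abbreviated; everything else stays exactly as rendered by helm template):
kind: Deployment
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  template:
    metadata:
      labels:
        istio: ingressgateway-sds
---
kind: Service
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  selector:
    istio: ingressgateway-sds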
Then apply the yaml file to your GKE cluster:
kubectl apply -f istio-ingressgateway.yaml
Holla! You now have your own Istio ingressgateway with SDS, and you can get its load balancer IP with:
kubectl -n istio-system get svc istio-ingressgateway-sds
To let your Gateway use the correct SDS-enabled ingressgateway, you need to set spec.selector.istio to match the label you set. Below is an example of a Gateway resource using a Kubernetes secret as the TLS cert:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-test
spec:
  selector:
    istio: ingressgateway-sds
  servers:
  - hosts:
    - '*.example.com'
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*.example.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: example-com-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
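The example-com-cert secret referenced by credentialName has to exist in the istio-system namespace. A sketch of creating it from a key/cert pair (file names are placeholders; as far as I recall the Istio 1.1 SDS gateway agent expects a generic secret with key and cert entries, so double-check against the docs for your version):
kubectl create -n istio-system secret generic example-com-cert \
  --from-file=key=example.com.key \
  --from-file=cert=example.com.crt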
Per Carlos' answer, I decided not to use the Istio on GKE addon as there is very limited customization available when using Istio as a managed service.
I created a standard GKE cluster...
gcloud beta container clusters create istio-demo \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
And then manually installed Istio...
Create the namespace:
kubectl create namespace istio-system
Install the Istio CRDs:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
Install Istio using the default configuration profile with my necessary customizations:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
Enable Istio sidecar injection on the default namespace:
kubectl label namespace default istio-injection=enabled
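To sanity-check the result, a couple of read-only commands (generic kubectl usage, nothing specific to this setup):
# confirm the injection label is set on the namespace
kubectl get namespace -L istio-injection
# confirm the Istio components, including the SDS-enabled ingress gateway, are running
kubectl get pods -n istio-system
kubectl get svc istio-ingressgateway -n istio-system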

skydns not able to find nginxsvc

I am following the example here: http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables. Although DNS seems to be enabled:
skwok-wpc-3:1.0 skwok$ kubectl get services kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP,53/TCP
and the service is up
$ kubectl get svc
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
nginxsvc app=nginx app=nginx 10.0.128.194 80/TCP
Following the example, I cannot use the curlpod to look up the service:
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.us-west-2.compute.internal
nslookup: can't resolve 'nginxsvc'
Did I miss anything? I am using AWS, and I start my cluster with export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash. Thank you.
Please see: http://kubernetes.io/v1.0/docs/user-guide/debugging-services.html, and make sure nginx is running and serving within your pod. I would also suggest something like:
$ kubectl get ep nginxsvc
$ kubectl exec -it curlpod /bin/sh
pod$ curl ip-from-kubectl-get-ep
pod$ traceroute ip-from-kubectl-get-ep
If that doesn't work, please reply or jump on the Kubernetes Slack channel.
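As a further DNS check (along the lines of the debugging-services guide linked above; generic kubectl usage, not from the original answer), verify that cluster DNS works at all before chasing the specific service:
# does DNS resolve the API server's well-known name from inside the pod?
kubectl exec curlpod -- nslookup kubernetes.default
# if that fails too, the problem is the DNS add-on itself, not nginxsvc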