How to enable Istio SDS on existing GKE cluster - google-cloud-platform

I have an existing GKE cluster with the Istio addon installed, e.g.:
gcloud beta container clusters create istio-demo \
--addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
I am following this guide to install cert-manager in order to automatically provision TLS certificates from Let's Encrypt. According to the guide, Istio needs SDS enabled, which can be done at install time:
helm install istio.io/istio \
--name istio \
--namespace istio-system \
--set gateways.istio-ingressgateway.sds.enabled=true
As I already have Istio installed via GKE, how can I enable SDS on the existing cluster? Alternatively, is it possible to use the gcloud CLI to enable SDS at the point of cluster creation?

Managed Istio will, by design, revert any custom configuration and disable SDS again, so IMHO it is not a workable scenario. You can enable SDS manually following this guide, but keep in mind that the configuration will only stay active for 2-3 minutes before it is reverted.
Currently GKE doesn't support enabling SDS when creating a cluster from scratch. For managed Istio on GKE, Google is looking into the ability to enable SDS, but there is no ETA yet for that release.
However, if you use non-managed (open-source) Istio, the SDS feature is on the Istio roadmap; I think it should be available in version 1.2, but that is not guaranteed.

Even though the default ingress gateway created by Istio on GKE currently doesn't support SDS, you can manually add your own extra ingress gateway.
You could grab the manifests of the default istio-ingressgateway deployment and service in your istio-system namespace, modify them to add the SDS container, change the names, and apply them to your cluster. But that is fairly tedious; there's a simpler way:
First, download the open-source Istio Helm chart (choose a version that works with your Istio on GKE version; in my case Istio on GKE was 1.1.3, I downloaded open-source Istio 1.1.17, and it works):
curl -O https://storage.googleapis.com/istio-release/releases/1.1.17/charts/istio-1.1.17.tgz
# extract under current working directory
tar xzvf istio-1.1.17.tgz
Then render the helm template for only the ingressgateway component:
helm template istio/ --name istio \
--namespace istio-system \
-x charts/gateways/templates/deployment.yaml \
-x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.sds.enabled=true > istio-ingressgateway.yaml
Then manually modify the rendered istio-ingressgateway.yaml file with the following changes (a sketch of just the changed fields follows this list):
Change metadata.name for both the deployment and the service to something else, like istio-ingressgateway-sds
Change metadata.labels.istio for both the deployment and the service to something else, like ingressgateway-sds
Change spec.template.metadata.labels.istio for the deployment to the same value, ingressgateway-sds
Change spec.selector.istio for the service to the same value, ingressgateway-sds
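For orientation, here is a minimal sketch of only the fields you change; everything else in the rendered manifests stays exactly as rendered, and ingressgateway-sds is just the example name used above:
# Deployment: fields to change (everything else stays as rendered)
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  template:
    metadata:
      labels:
        istio: ingressgateway-sds
---
# Service: fields to change (everything else stays as rendered)
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  selector:
    istio: ingressgateway-sds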
Then apply the yaml file to your GKE cluster:
kubectl apply -f istio-ingressgateway.yaml
Voilà! You now have your own Istio ingress gateway with SDS, and you can get its load balancer IP with:
kubectl -n istio-system get svc istio-ingressgateway-sds
To make your Gateway use the correct SDS-enabled ingress gateway, you need to set spec.selector.istio to match the label you chose above. Below is an example of a Gateway resource using a Kubernetes secret as the TLS cert:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-test
spec:
  selector:
    istio: ingressgateway-sds
  servers:
  - hosts:
    - '*.example.com'
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*.example.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: example-com-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
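If you just want to test the gateway before wiring up cert-manager, here is a sketch of creating the example-com-cert secret by hand; on Istio 1.1.x the SDS gateway agent expects a generic secret with key and cert data fields, and the file names below are placeholders:
kubectl create -n istio-system secret generic example-com-cert \
  --from-file=key=example.com.key \
  --from-file=cert=example.com.crt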

Per Carlos' answer, I decided not to use the Istio on GKE addon as there is very limited customization available when using Istio as a managed service.
I created a standard GKE cluster...
gcloud beta container clusters create istio-demo \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
And then manually installed Istio...
Create the namespace:
kubectl create namespace istio-system
Install the Istio CRDs:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
Install Istio using the default configuration profile with my necessary customizations:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
Enable Istio sidecar injection on the default namespace:
kubectl label namespace default istio-injection=enabled
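As a quick sanity check that the control plane and the SDS-enabled ingress gateway came up:
kubectl get pods -n istio-system
kubectl get svc istio-ingressgateway -n istio-system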

Related

How to integrate Custom CA (AWS PCA) using Kubernetes CSR in Istio

I am trying to set up Custom CA (AWS PCA) integration using Kubernetes CSR in Istio following this doc (Istio / Custom CA Integration using Kubernetes CSR). Steps followed:
i) Enable feature gate on cert-manager controller: --feature-gates=ExperimentalCertificateSigningRequestControllers=true
ii) AWS PCA and aws-privateca-issuer plugin is already in place.
iii) awspcaclusterissuers object in place with AWS PCA arn (arn:aws:acm-pca:us-west-2:<account_id>:certificate-authority/)
iv) Modified Istio operator with defaultConfig and caCertificates of AWS PCA issuer (awspcaclusterissuers.awspca.cert-manager.io/)
v) Modified istiod deployment and added env vars (as mentioned in the doc along with cluster role).
istiod pod is failing with this error:
Generating K8S-signed cert for [istiod.istio-system.svc istiod-remote.istio-system.svc istio-pilot.istio-system.svc] using signer awspcaclusterissuers.awspca.cert-manager.io/cert-manager-aws-root-ca
2023-01-04T07:25:26.942944Z error failed to create discovery service: failed generating key and cert by kubernetes: no certificate returned for the CSR: "csr-workload-lg6kct8nh6r9vx4ld4"
Error: failed to create discovery service: failed generating key and cert by kubernetes: no certificate returned for the CSR: "csr-workload-lg6kct8nh6r9vx4ld4"
K8s Version: 1.22
Istio Version: 1.13.5
Note: Our integration of cert-manager and AWS PCA works fine when we generate private certificates using cert-manager and PCA with the Certificate object. It's the Kubernetes CSR method integrated with Istio that is failing!
I would really appreciate it if anybody with knowledge of this could help me out, as there are nearly zero docs on this integration.
I haven't done this with Kubernetes CSR, but I have done it with Istio CSR. Here are the steps to accomplish it with this approach.
Create a private certificate authority in AWS Private CA and download its public root certificate either via the console or the AWS CLI (aws acm-pca get-certificate-authority-certificate --certificate-authority-arn <certificate-authority-arn> --region af-south-1 --output text > ca.pem).
Create a secret to store this root certificate. Cert manager will use this public root cert to communicate with the root CA (AWS PCA).
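For example (a sketch; the istio-root-ca secret name matches the Helm values used further below, and ca.pem is the file downloaded in the previous step):
kubectl create namespace cert-manager
kubectl create secret generic istio-root-ca \
  --from-file=ca.pem=ca.pem \
  -n cert-manager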
Install cert-manager. Cert manager will essentially function as the intermediate CA in place of istiod.
Install AWS PCA Issuer plugin.
Make sure you have the necessary permissions in place for the workload to communicate with AWS Private CA. The recommended approach would be to use OIDC with IRSA. The other approach is to grant permissions to the node role. The problem with this is that any pod running on your nodes essentially has access to AWS Private CA, which isn't a least privilege approach.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "awspcaissuer",
      "Action": [
        "acm-pca:DescribeCertificateAuthority",
        "acm-pca:GetCertificate",
        "acm-pca:IssueCertificate"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:acm-pca:<region>:<account_id>:certificate-authority/<resource_id>"
    }
  ]
}
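If you go the IRSA route, one way to wire that policy to the issuer's service account is with eksctl; the cluster name, policy ARN, namespace, and service account name below are placeholders, so adjust them to however you installed the AWS PCA issuer plugin:
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace aws-privateca-issuer \
  --name aws-privateca-issuer \
  --attach-policy-arn arn:aws:iam::<account_id>:policy/awspcaissuer \
  --approve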
Once the permissions are in place, you can create a cluster issuer or an issuer that will represent the root CA in the cluster.
apiVersion: awspca.cert-manager.io/v1beta1
kind: AWSPCAClusterIssuer
metadata:
  name: aws-pca-root-ca
spec:
  arn: <aws-pca-arn-goes-here>
  region: <region-where-ca-was-created-in-aws>
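Save that manifest to a file, apply it, and confirm the issuer object exists (the file name is arbitrary):
kubectl apply -f aws-pca-root-ca.yaml
kubectl get awspcaclusterissuers.awspca.cert-manager.io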
Create the istio-system namespace:
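kubectl create namespace istio-system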
Install Istio CSR and update the Helm values for the issuer so that cert-manager knows to communicate with the AWS PCA issuer.
helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \
--set "app.certmanager.issuer.group=awspca.cert-manager.io" \
--set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \
--set "app.certmanager.issuer.name=aws-pca-root-ca" \
--set "app.certmanager.preserveCertificateRequests=true" \
--set "app.server.maxCertificateDuration=48h" \
--set "app.tls.certificateDuration=24h" \
--set "app.tls.istiodCertificateDuration=24h" \
--set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \
--set "volumeMounts[0].name=root-ca" \
--set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \
--set "volumes[0].name=root-ca" \
--set "volumes[0].secret.secretName=istio-root-ca"
I would also recommend setting preserveCertificateRequests to true, at least the first time you set this up, so that you can actually see the CSRs and whether or not the certificates were successfully issued.
When you install Istio CSR, this will create a certificate called istiod as well as a corresponding secret that stores the cert. The secret is called istiod-tls. This is the cert for the intermediate CA (Cert manager with Istio CSR).
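To confirm that worked, a quick check (the CertificateRequests are only kept around if preserveCertificateRequests is true):
kubectl get certificate istiod -n istio-system
kubectl get secret istiod-tls -n istio-system
kubectl get certificaterequests.cert-manager.io -n istio-system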
Install Istio with the following custom configurations:
Update the CA address to Istio CSR (the new intermediate CA)
Disable istiod from functioning as the CA
Mount istiod with the cert-manager certificate details
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
  namespace: istio-system
spec:
  profile: "demo"
  hub: gcr.io/istio-release
  values:
    global:
      # Change certificate provider to cert-manager istio agent for istio agent
      caAddress: cert-manager-istio-csr.cert-manager.svc:443
  components:
    pilot:
      k8s:
        env:
          # Disable istiod CA Server functionality
          - name: ENABLE_CA_SERVER
            value: "false"
        overlays:
          - apiVersion: apps/v1
            kind: Deployment
            name: istiod
            patches:
              # Mount istiod serving and webhook certificate from Secret mount
              - path: spec.template.spec.containers.[name:discovery].args[7]
                value: "--tlsCertFile=/etc/cert-manager/tls/tls.crt"
              - path: spec.template.spec.containers.[name:discovery].args[8]
                value: "--tlsKeyFile=/etc/cert-manager/tls/tls.key"
              - path: spec.template.spec.containers.[name:discovery].args[9]
                value: "--caCertFile=/etc/cert-manager/ca/root-cert.pem"
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[6]
                value:
                  name: cert-manager
                  mountPath: "/etc/cert-manager/tls"
                  readOnly: true
              - path: spec.template.spec.containers.[name:discovery].volumeMounts[7]
                value:
                  name: ca-root-cert
                  mountPath: "/etc/cert-manager/ca"
                  readOnly: true
              - path: spec.template.spec.volumes[6]
                value:
                  name: cert-manager
                  secret:
                    secretName: istiod-tls
              - path: spec.template.spec.volumes[7]
                value:
                  name: ca-root-cert
                  configMap:
                    defaultMode: 420
                    name: istio-ca-root-cert
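You can then install Istio with this configuration via istioctl; the file name below is just a placeholder:
istioctl install -f istio-operator.yaml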
If you want to watch a detailed walk-through on how the different components communicate, you can watch this video:
https://youtu.be/jWOfRR4DK8k
In the video, I also show the CSRs and the certs being successfully issued, as well as test that mTLS is working as expected.
The video is long, but you can skip to 17:08 to verify that the solution works.
Here's a repo with these same steps, the relevant manifest files, and architecture diagrams describing the flow: https://github.com/LukeMwila/how-to-setup-external-ca-in-istio

Can't deploy Ingress object on EKS: failed calling webhook vingress.elbv2.k8s.aws: the server could not find the requested resource

I am following this AWS guide: https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/ to set up my Kubernetes cluster behind an ALB.
After installing the AWS ALB controller on my EKS cluster with the steps below:
helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=YOUR_CLUSTER_NAME \
--set serviceAccount.create=false \
--set region=YOUR_REGION_CODE \
--set vpcId=<VPC_ID> \
--set serviceAccount.name=aws-load-balancer-controller \
-n kube-system
I want to deploy my ingress configurations:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,302
    alb.ingress.kubernetes.io/target-type: instance
    kubernetes.io/ingress.class: alb
  name: staging-ingress
  namespace: staging
  finalizers:
    - ingress.k8s.aws/resources
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: 80
            path: /api/v1/price
Everything looks fine. However, when I run the command below to deploy my ingress:
kubectl apply -f ingress.staging.yaml -n staging
I get the following error:
Error from server (InternalError): error when creating "ingress.staging.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": the server could not find the requested resource
There are very few similar issues on Google and none of them helped me. Any ideas what the problem is?
K8s version: 1.18
This security group rule (added via the Terraform EKS module) solved it for me:
node_security_group_additional_rules = {
  ingress_allow_access_from_control_plane = {
    type                          = "ingress"
    protocol                      = "tcp"
    from_port                     = 9443
    to_port                       = 9443
    source_cluster_security_group = true
    description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
  }
}
I would suggest taking a look at the ALB controller logs: the CRDs you are using are for the v1beta1 API group, while the latest chart (aws-load-balancer-controller v2.4.0) registers a webhook for the v1 API group.
If you look at the ALB controller startup logs, you should see a line similar to one of the messages below:
v1beta1
{"level":"info","ts":164178.5920634,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1beta1-ingress"}
v1
{"level":"info","ts":164683.0114837,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1-ingress"}
If that is the case, you can fix the problem by using an earlier version of the controller or by installing the newer version of the CRDs.
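A quick way to check which controller version (and therefore which webhook API group) is actually running, assuming the release was installed into kube-system as above:
kubectl -n kube-system get deployment aws-load-balancer-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl -n kube-system logs deployment/aws-load-balancer-controller | grep "registering webhook"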

Kiali Bookinfo example is not showing traffic to rating and other microservices

I'm running the latest Istio version on Minishift. I can access the product page on http://192.168.178.102:31380/productpage.
Kiali shows the traffic from istio-ingressgateway to productpage (see the Kiali traffic screenshot).
I expect to see some traffic from productpage to the other microservices, but it doesn't show any.
Do you have any idea why?
These are my installation steps:
Minishift:
minishift config set hyperv-virtual-switch "External VM Switch"
minishift config set memory 8GB
minishift config set image-caching true
minishift config set cpus 4
minishift addon enable anyuid
minishift addon apply istio
minishift addon enable istio
minishift start
Book-info:
kubectl create namespace book-info
oc login -u system:admin
kubectl config set-context --current --namespace=book-info
kubectl label namespace book-info istio-injection=enabled
kubectl apply -f samples\bookinfo\platform\kube\bookinfo.yaml
kubectl get services
kubectl apply -f samples\bookinfo\networking\bookinfo-gateway.yaml
kubectl apply -f samples\bookinfo\networking\destination-rule-all.yaml
Thanks, any feedback is greatly appreciated.

Helm prometheus custom loadbalancer configuration on AWS

Hello and thank you in advance!
I have the following issue:
I'm trying to install Prometheus on AWS EKS using Helm, but I want to be able to configure the AWS ELB to be internal and available only from inside my VPC (by default it is created as a public LoadBalancer with an FQDN).
When I execute following:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations{0}="service.beta.kubernetes.io/aws-load-balancer-internal":"0.0.0.0/0"
It creates a standard LoadBalancer service with no annotations included:
$ kubectl describe service/prometheus-server -n=prometheus
Name:            prometheus-server
Namespace:       prometheus
Labels:          app=prometheus
                 chart=prometheus-11.7.0
                 component=server
                 heritage=Tiller
                 release=prometheus
Annotations:     <none>
Selector:        app=prometheus,component=server,release=prometheus
Type:            LoadBalancer
IP:              10.100.255.81
I was playing around with quotes and other possible syntax variations but no luck. Please advise on the proper annotation usage.
It's kind of tricky, but you can do it like this:
helm install stable/prometheus --name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2" \
--set server.persistentVolume.storageClass="gp2" \
--set server.service.type=LoadBalancer \
--set server.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="0.0.0.0/0"
You can see the format and limitations of --set in the Helm docs. For example,
--set nodeSelector."kubernetes\.io/role"=master
becomes:
nodeSelector:
  kubernetes.io/role: master
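To confirm the annotation actually landed on the service, you can re-run the describe command from the question:
kubectl describe service/prometheus-server -n prometheus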
✌️

Assign a Static Public IP to Istio ingress-gateway Loadbalancer service

As you know, installing Istio creates a Kubernetes load balancer with a public IP and uses that public IP as the External IP of the istio-ingressgateway LoadBalancer service. As the IP is not static, I created a static public IP in Azure in the same resource group as the AKS cluster; I found the resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the installation files with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
Then I tried to re-install Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However, it didn't work; I still got the dynamic IP. So I tried to set my static public IP in the files istio-demo.yaml and istio-demo-auth.yaml by adding the load balancer IP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
Also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
I then re-installed Istio using the helm command mentioned above. This time it added mystaticPublicIP as one of the External IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
Does anyone know how to make this work?
I can successfully assign the static public IP to the Istio gateway service with the following command:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -