Create Istio ingress-gateway Pod without creating istiod

I am a bit new to Istio and still learning. I have a use case where Istio is already deployed in the istio-system namespace, but I need to deploy an Istio ingress-gateway Pod in the test-ns namespace using an IstioOperator. I am using Istio 1.6.7.
The Istio docs say to run this command:
istioctl manifest apply --set profile=default --filename=istio-ingress-values.yaml
but this will also create istiod Pods in istio-system, which I do not want since istiod already exists there.
So I ran the command below to create only the ingress gateway Pod, but I cannot see any Pods or Services created in test-ns. Is this possible?
kubectl apply -f istio-ingress-values.yaml

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: test-ns
  name: testoperator
ingressGateways:
- enabled: true
  name: istio-ingressgateway
  namespace: test-ns
  k8s:
    env:
    - name: ISTIO_META_ROUTER_MODE
      value: sni-dnat
    hpaSpec:
      maxReplicas: 5
      metrics:
      - resource:
          name: cpu
          targetAverageUtilization: 80
        type: Resource
      minReplicas: 1
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway
    resources: {}
    service:
      ports:
      - name: http2
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443

In Istio it is possible to tune configuration profiles.
As I can see, you are using the default profile, so I will describe how you can tune this configuration profile to create istio-ingressgateway in the test-ns namespace.
We can display the default profile settings by running the istioctl profile dump default command.
First, I saved these default settings in the default_profile_dump.yml file:
# istioctl profile dump default > default_profile_dump.yml
And then I modified this file:
NOTE: I only added one line: namespace: test-ns.
...
    ingressGateways:
    - enabled: true
      name: istio-ingressgateway
      namespace: test-ns
...
After modifying default settings of the ingressGateways, I applied these new settings:
# istioctl manifest apply -f default_profile_dump.yml
This will install the Istio 1.9.1 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
- Pruning removed resources
    Removed HorizontalPodAutoscaler:istio-system:istio-ingressgateway.
    Removed PodDisruptionBudget:istio-system:istio-ingressgateway.
    Removed Deployment:istio-system:istio-ingressgateway.
    Removed Service:istio-system:istio-ingressgateway.
    Removed ServiceAccount:istio-system:istio-ingressgateway-service-account.
    Removed RoleBinding:istio-system:istio-ingressgateway-sds.
    Removed Role:istio-system:istio-ingressgateway-sds.
✔ Installation complete
Finally, we can check where istio-ingressgateway was deployed:
# kubectl get pod -A | grep ingressgateway
test-ns       istio-ingressgateway-7fc7c7c-r92tw   1/1     Running   0          33s
The istiod Deployment remained intact in the istio-system namespace:
# kubectl get deploy,pods -n istio-system
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istiod   1/1     1            1           51m

NAME                          READY   STATUS    RESTARTS   AGE
pod/istiod-64675984c5-xl97n   1/1     Running   0          51m
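If you would rather istioctl did not touch the existing control plane at all, another option worth trying is an IstioOperator that starts from the empty profile and enables only the gateway component. Two caveats, stated as assumptions rather than verified facts for 1.6.7: ingressGateways must sit under spec.components for it to be picked up, and applying an IstioOperator CR with plain kubectl only has an effect if the operator controller is running in the cluster (otherwise feed the file to istioctl). A sketch:

# gateway-only-operator.yaml -- illustrative sketch, not a verified manifest
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: test-ns
  name: testoperator
spec:
  profile: empty              # do not install istiod or the Istio core again
  components:
    ingressGateways:
    - name: istio-ingressgateway
      namespace: test-ns
      enabled: true
      k8s:
        service:
          ports:
          - name: http2
            port: 80
            targetPort: 80
          - name: https
            port: 443
            targetPort: 443

Applied with istioctl manifest apply -f gateway-only-operator.yaml, the gateway Pod should connect to the istiod already running in istio-system, assuming default discovery settings.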

Related

EKS Service deployment not updating?

When I apply a new Service deployment YAML in AWS EKS, it does not delete the old load balancer from the previous build/deploy:
apiVersion: v1
kind: Service
metadata:
  # The name must be equal to KubernetesConnectionConfiguration.serviceName
  name: ignite-service
  # The name must be equal to KubernetesConnectionConfiguration.namespace
  namespace: ignite
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: ignite
spec:
  type: LoadBalancer
  ports:
  - name: rest
    port: 8080
    targetPort: 8080
  - name: thinclients
    port: 10800
    targetPort: 10800
  # Optional - remove 'sessionAffinity' property if the cluster
  # and applications are deployed within Kubernetes
  # sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for pods.
    app: ignite
status:
  loadBalancer: {}
I had a situation where I had previously deployed a classic ELB and this time deployed an NLB, but applying the manifest did not destroy the previous ELB.
Is there a way to have the old load balancer on AWS deleted when applying the new Kubernetes manifest?
I'm not sure what you are using to create the resources (Helm, raw manifests, or kubectl), but assuming kubectl for now, you can use:
kubectl replace --force -f <service-filename>.yaml
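For completeness, a minimal sketch of the delete-and-recreate flow (the filename is illustrative; the Service name and namespace come from the manifest above): --force removes the existing object and recreates it from the manifest, so the AWS controller releases the old load balancer before provisioning the one described by the new spec.

# One-step: delete the existing Service (and its AWS load balancer),
# then recreate it from the updated manifest.
kubectl replace --force -f ignite-service.yaml

# Equivalent two-step form, if you prefer to watch the teardown happen:
kubectl delete service ignite-service -n ignite
kubectl apply -f ignite-service.yaml

Keep in mind that deleting the Service briefly removes the load balancer endpoint, so expect a short outage while the new one is provisioned.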

How can I change config of istiod deployment using istio-operator?

I am setting up the Istio control plane using the istio-operator on an EKS cluster with the Calico CNI. After installing Istio on the cluster, I found that new pods were not coming up; the reason I found after googling is described here:
Istio Installation successful but not able to deploy POD
Now I want to add hostNetwork: true under spec.template.spec of the istiod Deployment, using the istio-operator only.
I did some more googling on how to change or override the values of the istiod Deployment and found the following YAML files:
https://github.com/istio/istio/tree/ca541df418d0902ebeb9506c84d24c6bd9743801/operator/cmd/mesh/testdata/manifest-generate/input
But they are not working either. Below is the last configuration I applied:
kind: IstioOperator
metadata:
  namespace: istio-system
  name: zeta-zone-istiocontrolplane
spec:
  profile: minimal
  values:
    pilot:
      resources:
        requests:
          cpu: 222m
          memory: 333Mi
      hostNetwork: true
  unvalidatedValues:
    hostNetwork: true
Can anybody help me add hostNetwork: true under spec.template.spec of the istiod Deployment using the istio-operator only?
I was able to achieve that with the following YAML, after a lot of trial and error and checking the logs of istio-operator:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: minimal
  hub: docker.io/istio
  tag: 1.10.3
  meshConfig:
    rootNamespace: istio-system
  components:
    base:
      enabled: true
    pilot:
      enabled: true
      namespace: istio-system
      k8s:
        overlays:
        - kind: Deployment
          name: istiod
          patches:
          - path: spec.template.spec.hostNetwork
            value: true # OVERRIDDEN
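Once the operator reconciles, it is worth confirming that the overlay actually landed on the Deployment; a quick check, assuming the default istiod Deployment name:

kubectl -n istio-system get deployment istiod \
  -o jsonpath='{.spec.template.spec.hostNetwork}'
# Expected output: true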

How can I detect micro-service failure in Istio and automate recovery selection?

I'm new to Istio.
My question is: how can I detect failures in services that are already running in Istio?
And if there is a failure, how do I direct a particular percentage of traffic to a new version of a service?
Thanks.
I recommend using Kiali. Kiali helps you understand the structure and health of your service mesh by monitoring traffic flow and reporting on it.
Kiali is a management console for an Istio-based service mesh. It provides dashboards, observability, and lets you operate your mesh with robust configuration and validation capabilities. It shows the structure of your service mesh by inferring traffic topology and displays the health of your mesh. Kiali provides detailed metrics, powerful validation, Grafana access, and strong integration for distributed tracing with Jaeger.
Detailed documentation for installing Kiali can be found in the Installation Guide.
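For a quick start, one common way to get Kiali running is the sample addon manifest shipped with each Istio release (a sketch; pick the release branch in the URL that matches your Istio version):

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.10/samples/addons/kiali.yaml
istioctl dashboard kiali    # opens the Kiali UI in your browser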
I have created a simple example to demonstrate how useful Kiali is.
First, I created a db-app application with two available versions (v1 and v2) and exposed it using a single Service:
# cat db-app.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db-app
  name: db-app
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: db-app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db-app
    version: v1
  name: db-app-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-app
      version: v1
  template:
    metadata:
      labels:
        app: db-app
        version: v1
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db-app
    version: v2
  name: db-app-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-app
      version: v2
  template:
    metadata:
      labels:
        app: db-app
        version: v2
    spec:
      containers:
      - image: nginx
        name: nginx
# kubectl apply -f db-app.yml
service/db-app created
deployment.apps/db-app-v1 created
deployment.apps/db-app-v2 created
# kubectl get pod,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/db-app-v1-59c8fb999c-bs47s   2/2     Running   0          39s
pod/db-app-v2-56dbf4c8d6-q24vm   2/2     Running   0          39s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/db-app   ClusterIP   10.102.36.142   <none>        80/TCP    39s
Additionally, to illustrate how we can split the traffic, I generated some traffic to the db-app application:
# kubectl run test-traffic --image=nginx
pod/test-traffic created
# kubectl exec -it test-traffic -- bash
root@test-traffic:/# for i in $(seq 1 100000); do curl 10.102.36.142; done
...
Now, in the Kiali UI's Graph section, we can see the traffic flow.
In the Services section, we can easily split traffic between the v1 and v2 versions using the Traffic Shifting Wizard; a hand-written equivalent of what the wizard produces is sketched after the note below.
NOTE: A detailed tutorial can be found in the Kiali Traffic Shifting tutorial.
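For reference, the wizard generates standard Istio routing resources under the hood. A hand-written equivalent might look roughly like this (a sketch; the 80/20 weights are illustrative and the host/subset names assume the db-app Service and labels above):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: db-app
  namespace: default
spec:
  host: db-app
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: db-app
  namespace: default
spec:
  hosts:
  - db-app
  http:
  - route:
    - destination:
        host: db-app
        subset: v1
      weight: 80
    - destination:
        host: db-app
        subset: v2
      weight: 20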
We can also monitor the status of our application. As an example, I broke the v1 version:
# kubectl set image deployment/db-app-v1 nginx=nnnginx
deployment.apps/db-app-v1 image updated
In the Kiali UI we can now see errors in the v1 version.
I suggest you read the official Kiali tutorials to learn the full capabilities of Kiali.

Istio, no listener registered when ports are the same

I have an EKS cluster with 2 EC2 nodes. I want to use Istio with an ALB rather than the classic ELB, so I modified the gateway Service from the Istio Helm chart to use NodePort, like this:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
    istio.io/rev: default
    install.operator.istio.io/owning-resource: unknown
    operator.istio.io/component: "IngressGateways"
spec:
  type: NodePort
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  ports:
  - name: status-port
    port: 15021
    protocol: TCP
    nodePort: 32767
  - name: http2
    port: 80
    protocol: TCP
    nodePort: 31231
  - name: https
    port: 443
    protocol: TCP
    nodePort: 31312
Also, I added the Ingress for the gateway:
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  namespace: istio-system
  name: aws-load-balancer
spec:
  controller: ingress.k8s.aws/alb
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: istio-system
  name: ingress
  labels:
    app: ingress
  annotations:
    alb.ingress.kubernetes.io/healthcheck-port: "32767"
    alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/subnets: subnet-foo,subnet-bar
spec:
  ingressClassName: aws-load-balancer
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
The ALB and the TargetGroup are created as expected, the nodes are healthy according to the TargetGroup health check.
The sample bookinfo stack and gateway are installed into a labeled namespace:
% kubectl get ns bookinfo --show-labels
NAME       STATUS   AGE   LABELS
bookinfo   Active   18h   istio-injection=enabled
istioctl shows the proxy status:
% istioctl proxy-status
NAME CDS LDS EDS RDS ISTIOD VERSION
details-v1-79f774bdb9-2scfv.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
istio-ingressgateway-8579cc48f8-2d5sd.istio-system SYNCED SYNCED SYNCED NOT SENT istiod-75c795985d-pwx9j 1.10.0
productpage-v1-6b746f74dc-l795c.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
ratings-v1-b6994bb9-l2vcp.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
reviews-v1-545db77b95-shzkj.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
reviews-v2-7bf8c9648f-6k6mk.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
reviews-v3-84779c7bbc-6mw5f.bookinfo SYNCED SYNCED SYNCED SYNCED istiod-75c795985d-pwx9j 1.10.0
But when I try to reach it, it returns a 502:
% curl http://internal-k8s-istiosys-ingress-foo-bar.eu-west-1.elb.amazonaws.com/productpage
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
Istio version: 1.10
Kubernetes version: 1.19
EKS version: eks.5
Edit:
It turned out there are no listeners attached:
% istioctl proxy-config listeners -n istio-system istio-ingressgateway-8579cc48f8-2d5sd.istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
However, if I change the Gateway port from 80 to 9000, the listener is created, but it then needs to match the ingress-gateway Service port:
% istioctl proxy-config listeners -n istio-system istio-ingressgateway-8579cc48f8-qzn59
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9000 ALL Route: http.9000
0.0.0.0 15021 ALL Inline Route: /healthz/ready*
0.0.0.0 15090 ALL Inline Route: /stats/prometheus*
In case anybody faces the same issue: it turned out that the default Istio ingress gateway cannot bind to port 80 because it runs as an unprivileged pod. I updated the deployment specification accordingly, and it is now up and running.
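For anyone hitting this on a recent Istio: the stock ingress gateway image listens on 8080 and 8443 inside the pod precisely so it can run unprivileged, so an alternative to editing the Deployment is to point the Service at those container ports. A sketch of the relevant ports section of the NodePort Service above (the 8080/8443 target ports are an assumption based on the default gateway chart):

  ports:
  - name: http2
    port: 80
    targetPort: 8080   # gateway container listens here, not on 80
    protocol: TCP
    nodePort: 31231
  - name: https
    port: 443
    targetPort: 8443   # unprivileged TLS port
    protocol: TCP
    nodePort: 31312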

Enabling SSL on GKE endpoints not working correctly

I created an API on GKE using Cloud Endpoints. It is working fine without HTTPS; you can try it here: API without Https
I followed the instructions mentioned here: Enabling SSL for cloud endpoint. After setting up everything mentioned on that page, I am able to access my endpoints over HTTPS, but with a warning:
Your connection is not private - Back to Safety (Chrome)
Check it here: API with Https
Can you please let me know what I'm missing?
Update
I'm using Google-managed SSL certificates for Cloud Endpoints in GKE.
I followed the steps mentioned in this doc but was not able to successfully add the SSL certificate.
When I go to my Cloud console I see:
Some backend services are in UNKNOWN state
Here are my deployment YAMLs:
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: quran-grpc
spec:
  ports:
  - port: 81
    targetPort: 9000
    protocol: TCP
    name: rpc
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: quran-grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quran-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quran-grpc
  template:
    metadata:
      labels:
        app: quran-grpc
    spec:
      volumes:
      - name: nginx-ssl
        secret:
          secretName: nginx-ssl
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8080",
          "--ssl_port=443",
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=quran.endpoints.utopian-button-227405.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
        - containerPort: 8080
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: nginx-ssl
          readOnly: true
      - name: python-grpc-quran
        image: gcr.io/utopian-button-227405/python-grpc-quran:5.0
        ports:
        - containerPort: 50051
ssl-cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: quran-ssl
spec:
  domains:
  - quran.endpoints.utopian-button-227405.cloud.goog
---
apiVersion: v1
kind: Service
metadata:
  name: quran-ingress-svc
spec:
  selector:
    name: quran-ingress-svc
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quran-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 34.71.56.199
    networking.gke.io/managed-certificates: quran-ssl
spec:
  backend:
    serviceName: quran-ingress-svc
    servicePort: 80
Can you please let me know what I'm doing wrong?
Your SSL configuration is working fine, and the reason you are receiving this error is because you are using a self-signed certificate.
A self-signed certificate is a certificate that is not signed by a certificate authority (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide HTTPS services, people who visit that website will see a warning in their browser.
To solve this issue you should buy a valid certificate from a trusted CA, or use Let's Encrypt, which gives you a certificate valid for 90 days that you can renew at any point during that period.
If you decide to buy an SSL certificate, you can follow the document you mentioned to create a Kubernetes secret and use it in your Ingress, simple as that.
But if you don't want to buy a certificate, you can install cert-manager in your cluster; it will help you generate valid certificates using Let's Encrypt.
Here is an example of how to use cert-manager + Let's Encrypt solution to generate valid SSL certificates:
Using cert-manager with Let's Encrypt
cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.
I'm assuming you already have NGINX Ingress installed and working.
Pre-requisites:
- NGINX Ingress installed and working
- HELM 3.0 installed and working
cert-manager install
Note: When running on GKE (Google Kubernetes Engine), you may encounter a 'permission denied' error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, so you should 'elevate' your own privileges to those of a 'cluster-admin' before running the install commands below. If you have already run them, run them again after elevating your permissions.
Follow the official docs to install, or just use Helm 3.0 with the following commands:
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml
Create a ClusterIssuer for Let's Encrypt: save the content below in a new file called letsencrypt-production.yaml:
Note: Replace <EMAIL-ADDRESS> with your valid email.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  labels:
    name: letsencrypt-prod
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL-ADDRESS>
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: 'https://acme-v02.api.letsencrypt.org/directory'
Apply the configuration with:
kubectl apply -f letsencrypt-production.yaml
Install cert-manager with Let's Encrypt as a default CA:
helm install cert-manager \
  --namespace cert-manager --version v0.8.1 jetstack/cert-manager \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer
Verify the installation:
$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
Using cert-manager
Apply this annotation in your Ingress spec:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
After applying it, cert-manager will generate the TLS certificate for the domain configured under host:, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
  - host: myapp.domain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: my-app
          servicePort: 80
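One detail worth calling out (an assumption based on how cert-manager's ingress-shim typically behaves, not something stated in the original answer): the Ingress usually also needs a tls: section naming the Secret in which the issued certificate should be stored, roughly like this, with my-app-tls being an illustrative secret name:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - myapp.domain.com
    secretName: my-app-tls      # cert-manager stores the issued certificate here
  rules:
  - host: myapp.domain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: my-app
          servicePort: 80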