I'm running the latest Istio version on Minishift. I can access the product page at http://192.168.178.102:31380/productpage.
Kiali shows traffic from istio-ingressgateway to productpage:
(screenshot: Kiali traffic graph)
I expect to also see traffic from productpage to the other microservices, but Kiali does not show it.
Do you have any idea why?
These are my installation steps:
Minishift:
minishift config set hyperv-virtual-switch "External VM Switch"
minishift config set memory 8GB
minishift config set image-caching true
minishift config set cpus 4
minishift addon enable anyuid
minishift addon apply istio
minishift addon enable istio
minishift start
Book-info:
kubectl create namespace book-info
oc login -u system:admin
kubectl config set-context --current --namespace=book-info
kubectl label namespace book-info istio-injection=enabled
kubectl apply -f samples\bookinfo\platform\kube\bookinfo.yaml
kubectl get services
kubectl apply -f samples\bookinfo\networking\bookinfo-gateway.yaml
kubectl apply -f samples\bookinfo\networking\destination-rule-all.yaml
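For reference, a quick way to double-check that the sidecars were actually injected into the Bookinfo pods (Kiali can only draw edges between workloads that have an istio-proxy container) would be:
kubectl get pods -n book-info
(each pod should report READY 2/2)
kubectl get namespace book-info --show-labels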
Thanks, any feedback is greatly appreciated.
Related
Bug Description
I'm using EKS (1.23) and ALB. ALB is terminating TLS with certs provided by ACM.
Using Terraform, I installed the following Helm charts in the EKS cluster:
istio-base
istiod
gateway
all at version 1.15.0.
Other things configured on the cluster:
aws_security_group_rules, both ingress and egress, on EKS nodes for ports 15000-15090
required k8s namespaces
required k8s ingress configuring ALB via alb-controller
required ACM certificates for ALB
required Route53 DNS entries
All of those things are quite common, so I don't think there is anything weird there; I have the same setup configured in multiple places without Istio.
I also added an httpbin Service and Deployment plus the related Gateway and VirtualService.
In the Ingress I have 2 paths configured (besides the ssl-redirect directive for ALB), roughly as sketched below:
/healthz/ready points to status-port,
and / points to http2.
The ingress-gateway Service is of type NodePort, as required for this type of setup.
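For reference, the Ingress looks roughly like the following sketch (the names, namespace, and certificate ARN are placeholders, not the actual values from my setup; the ssl-redirect is expressed here as the v2 controller annotation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-gateway-alb                 # placeholder name
  namespace: istio-ingress                # placeholder; namespace of the gateway Service
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance           # NodePort-backed targets
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <acm-cert-arn> # placeholder ACM cert
    alb.ingress.kubernetes.io/ssl-redirect: '443'
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /healthz/ready
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway    # placeholder; the gateway chart's Service name
            port:
              name: status-port
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway    # placeholder
            port:
              name: http2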
(Important) There are 2 nodes in the cluster.
The AWS console Target Group details page shows that 2/2 targets are healthy.
Sooooooo ...
When I enter the address https://httpbin.somedomain.com, every second request gets a 504 Gateway Timeout. When I enter https://httpbin.somedomain.com/healthz/ready, I get 200 every time. When I increase the number of nodes in the cluster to 3, the 504 occurs for 2 out of 3 requests.
It's quite clear to me that it's related to the ALB round-robining over the machines ... but why? The status-port returns 200 every time.
Version
$ istioctl version
client version: 1.15.0
control plane version: 1.15.0
data plane version: 1.15.0 (3 proxies)
$ kubectl version --short
Client Version: v1.23.2
Server Version: v1.23.7-eks-4721010
$ helm version --short
v3.8.0+gd141386
Additional Information
$ istioctl bug-report
Target cluster context: v2-xxx
Running with the following config:
istio-namespace: istio-system
full-secrets: false
timeout (mins): 30
include: { }
exclude: { Namespaces: kube-node-lease,kube-public,kube-system,local-path-storage }
end-time: 2022-09-27 17:29:26.34498 +0200 CEST
Cluster endpoint: https://yyy.yl4.eu-west-1.eks.amazonaws.com
CLI version:
version.BuildInfo{Version:"1.15.0", GitRevision:"e3364ab424b70ca8ee1ca76cb0b3afb73476aaac", GolangVersion:"go1.19", BuildStatus:"Clean", GitTag:"1.15.0"}
The following Istio control plane revisions/versions were found in the cluster:
Revision default:
&version.MeshInfo{
{
Component: "pilot",
Info: version.BuildInfo{Version:"1.15.0", GitRevision:"e3364ab424b70ca8ee1ca76cb0b3afb73476aaac", GolangVersion:"go1.19", BuildStatus:"Clean", GitTag:"1.15.0"},
},
}
The following proxy revisions/versions were found in the cluster:
Revision default: Versions {1.15.0}
Fetching proxy logs for the following containers:
argocd//argo-cd-argocd-application-controller-0/application-controller
argocd/argo-cd-argocd-applicationset-controller/argo-cd-argocd-applicationset-controller-9dddcffbf-zrcgl/applicationset-controller
argocd/argo-cd-argocd-dex-server/argo-cd-argocd-dex-server-75c975ccb7-xmd82/dex-server
argocd/argo-cd-argocd-notifications-controller/argo-cd-argocd-notifications-controller-5854964cbf-z8nlr/notifications-controller
argocd/argo-cd-argocd-redis/argo-cd-argocd-redis-664b98cfd7-lndsf/argo-cd-argocd-redis
argocd/argo-cd-argocd-repo-server/argo-cd-argocd-repo-server-75f49f7ccf-xsblh/repo-server
argocd/argo-cd-argocd-server/argo-cd-argocd-server-6599d8d846-dqr6s/server
first/httpbin/httpbin-7bffdcffd-2klzj/httpbin
first/httpbin/httpbin-7bffdcffd-2klzj/istio-proxy
...
istio-ingress-internal/internal/internal-554ddcb684-kr52c/istio-proxy
istio-ingress-internet-facing/internet-facing/internet-facing-555fd48d8d-2tx74/istio-proxy
istio-system/istiod/istiod-86cd5997bb-r6797/discovery
...
Fetching Istio control plane information from cluster.
Running istio analyze on all namespaces and report as below:
Analysis Report:
Info [IST0102] (Namespace argocd) The namespace is not enabled for Istio injection. Run 'kubectl label namespace argocd istio-injection=enabled' to enable it, or 'kubectl label namespace argocd istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0118] (Service argocd/argo-cd-argocd-applicationset-controller) Port name webhook (port: 7000, targetPort: webhook) doesn't follow the naming convention of Istio port.
...
Creating an archive at /Users/zzz/bug-report.tar.gz.
Cleaning up temporary files in /var/folders/l4/82mt4l7x4r5dzp1j4ppxqqzm0000gn/T/bug-report.
Done.
Original issue here
I solved it by allowing port 80 between the machines in the EKS node group. To be honest, I do not understand why that helps.
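For reference, an equivalent rule expressed with the AWS CLI (the security group id is a placeholder for the node group SG, not the real one) would look roughly like:
# allow node-to-node traffic on port 80 within the node group security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --source-group sg-0123456789abcdef0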
I have an existing GKE cluster with the Istio addon installed, e.g.:
gcloud beta container clusters create istio-demo \
--addons=Istio --istio-config=auth=MTLS_PERMISSIVE \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
I am following this guide to install cert-manager in order to automatically provision TLS certificates from Let's Encrypt. According to the guide, Istio needs SDS enabled, which can be done at installation time:
helm install istio.io/istio \
--name istio \
--namespace istio-system \
--set gateways.istio-ingressgateway.sds.enabled=true
As I already have Istio installed via GKE, how can I enable SDS on the existing cluster? Alternatively, is it possible to use the gcloud CLI to enable SDS at the point of cluster creation?
Managed Istio will, by design, revert any custom configuration and disable SDS again, so IMHO this is not a useful scenario. You can enable SDS manually by following this guide, but keep in mind that the configuration will remain active only for 2-3 minutes.
Currently GKE doesn't support enabling SDS when creating a cluster from scratch. On GKE managed Istio, Google is looking to have the ability to enable SDS on GKE clusters, but they don't have an ETA yet for that release.
However, if you use non-managed (open source) Istio, the SDS feature is on the Istio roadmap; I think it should be available in version 1.2, but that is not guaranteed.
Even though currently the default ingress gateway created by Istio on GKE doesn't support SDS, you can add your own extra ingress gateway manually.
You can get the manifest of the default istio-ingressgateway Deployment and Service in your istio-system namespace, modify it to add the SDS container and change the name, and then apply it to your cluster. But that is a little tedious; there is a simpler way:
First, download the open-source Istio Helm chart (choose a version that works with your Istio on GKE version; in my case Istio on GKE is 1.1.3, and open-source Istio 1.1.17 works with it):
curl -O https://storage.googleapis.com/istio-release/releases/1.1.17/charts/istio-1.1.17.tgz
# extract under current working directory
tar xzvf istio-1.1.17.tgz
Then render the helm template for only the ingressgateway component:
helm template istio/ --name istio \
--namespace istio-system \
-x charts/gateways/templates/deployment.yaml \
-x charts/gateways/templates/service.yaml \
--set gateways.istio-egressgateway.enabled=false \
--set gateways.istio-ingressgateway.sds.enabled=true > istio-ingressgateway.yaml
Then manually modify the rendered istio-ingressgateway.yaml file as follows (see the fragment sketch below):
Change metadata.name for both the Deployment and the Service to something else, like istio-ingressgateway-sds
Change metadata.labels.istio for both the Deployment and the Service to something else, like ingressgateway-sds
Change spec.template.metadata.labels.istio for the Deployment similarly, to ingressgateway-sds
Change spec.selector.istio for the Service to the same value, ingressgateway-sds
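As a rough illustration of those four edits (the values simply mirror the examples above, nothing else is changed), the relevant fragments of the modified manifest end up looking like:
# Deployment fragment
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  template:
    metadata:
      labels:
        istio: ingressgateway-sds
---
# Service fragment
metadata:
  name: istio-ingressgateway-sds
  labels:
    istio: ingressgateway-sds
spec:
  selector:
    istio: ingressgateway-sds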
Then apply the yaml file to your GKE cluster:
kubectl apply -f istio-ingressgateway.yaml
Voilà! You now have your own Istio ingress gateway with SDS, and you can get its load balancer IP with:
kubectl -n istio-system get svc istio-ingressgateway-sds
To let your Gateway use the correct SDS-enabled ingress gateway, you need to set spec.selector.istio to match the label you set. Below is an example of a Gateway resource using a Kubernetes secret as the TLS cert:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-test
spec:
  selector:
    istio: ingressgateway-sds
  servers:
  - hosts:
    - '*.example.com'
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*.example.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: example-com-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
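For completeness: example-com-cert is expected to be a secret in the istio-system namespace. With cert-manager it is provisioned for you; if you create it by hand, the 1.1-era SDS gateway agent reads a generic secret with key and cert entries, roughly like this (the file names are placeholders):
kubectl create -n istio-system secret generic example-com-cert \
  --from-file=key=example.com.key \
  --from-file=cert=example.com.crt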
Per Carlos' answer, I decided not to use the Istio on GKE addon as there is very limited customization available when using Istio as a managed service.
I created a standard GKE cluster...
gcloud beta container clusters create istio-demo \
--cluster-version=[cluster-version] \
--machine-type=n1-standard-2 \
--num-nodes=4
And then manually installed Istio...
Create the namespace:
kubectl create namespace istio-system
Install the Istio CRDs:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
Install Istio using the default configuration profile with my necessary customizations:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
Enable Istio sidecar injection on the default namespace:
kubectl label namespace default istio-injection=enabled
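Once everything is applied, a quick sanity check (not part of the steps above, just what I would run) is:
kubectl get pods -n istio-system
kubectl get svc istio-ingressgateway -n istio-system
kubectl get namespace -L istio-injection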
I am trying to deploy a microservices architecture on a Kubernetes cluster. Does anyone know how to create an Ingress for AWS?
I recommend you use the ALB Ingress Controller https://github.com/kubernetes-sigs/aws-alb-ingress-controller, as it is recommended by AWS and creates Application Load Balancers for each Ingress.
Alternatively, know that you can use any kind of Ingress, such as Nginx, in AWS. You create the Nginx Service with type LoadBalancer so that all requests to that address are forwarded to Nginx, and Nginx itself takes care of routing the requests to the correct service inside Kubernetes.
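As a rough sketch of that pattern (the names and labels here are placeholders, not from any particular chart), the front-door Service would look like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-front        # hypothetical name
spec:
  type: LoadBalancer               # the cloud provider provisions an AWS ELB for this
  selector:
    app: nginx-ingress             # placeholder label carried by the Nginx ingress pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443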
To create an Ingress resource, we first need to deploy an Ingress Controller, which can be deployed very easily using Helm. Follow the steps below to install Helm and the Ingress Controller:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account=tiller
$ helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
Once the Ingress Controller is installed, check it by running kubectl get pods; you should see 2 pods running: the Ingress Controller and the default backend.
Now, if you go to your AWS Management Console, you should see an Elastic Load Balancer that routes traffic to the Ingress Controller, which in turn routes traffic to the appropriate services based on your Ingress rules.
To test the Ingress, follow steps 1 to 4 of this guide: Setting up HTTP Load Balancing with Ingress.
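For reference, a minimal Ingress resource of the kind those steps create might look like this (the host and backend service are placeholders; on the tiller-era clusters this answer targets, the apiVersion is still extensions/v1beta1):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress                   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx # handled by the controller installed above
spec:
  rules:
  - host: app.example.com              # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service      # placeholder backend service
          servicePort: 80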
Hope this helps!
Trying to teach myself how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx Docker Hub page, I see that the container uses port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
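After recreating the service with the corrected --target-port, a quick check (the hostname is whatever appears in the EXTERNAL-IP column) would be:
kubectl get svc nginxpubic
curl http://<EXTERNAL-IP-hostname>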
Also, make sure the LoadBalancer service type is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies
I'm trying to install the Kubernetes dashboard on an AWS Linux image, but I'm getting JSON output in the browser. I have run the dashboard commands and provided the token, but it did not work.
Kubernetes 1.14+
1) Open a terminal on your workstation (standard SSH tunnel to port 8002):
$ ssh -i "aws.pem" -L 8002:localhost:8002 ec2-user@ec2-50-50-50-50.eu-west-1.compute.amazonaws.com
2) When you are connected type:
$ kubectl proxy -p 8002
3) Open the following link with a web browser to access the dashboard endpoint: http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Try this:
$ kubectl proxy
Open the following link with a web browser to access the dashboard endpoint:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
More info
I had a similar issue reaching the dashboard following the tutorial you linked.
One way of approaching your issue is to change the type of the service to LoadBalancer:
Exposes the service externally using a cloud provider’s load balancer.
NodePort and ClusterIP services, to which the external load balancer
will route, are automatically created.
For that use:
kubectl get services --all-namespaces
kubectl edit service kubernetes-dashboard -n kube-system -o yaml
and change the type to LoadBalancer.
Wait till the ELB gets spawned (it takes a couple of minutes), then run
kubectl get services --all-namespaces again; you will see the address of your dashboard service and will be able to reach it at the external address shown.
As for the tutorial you posted, it is from 2016, and it turns out something went wrong with the /ui part of the URL; you can read more about it in this GitHub issue. There is a claim that you should use /ui after authentication, but that does not work either.
For the default settings of ClusterIP you will be able to reach the dashboard on this address:
‘YOURHOSTNAME’/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Another option is to delete the old dashboard:
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Install the official one:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and reach it on localhost using:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview
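If the token itself was the problem, one common way to mint an admin token for the dashboard login screen on clusters of that era (dashboard-admin is a hypothetical name, and this grants cluster-admin, so use it with care) is:
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')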