I had an instance of Istio running fine via Helm:
helm template --set kiali.enabled=true --set grafana.enabled=true install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml
During some unrelated issues, I had to nuke the Istio namespace (kubectl delete ns istio-system).
However, after deleting the namespace and redoing the Istio install, Galley seems to be erroring out,
which in turn puts both istio-policy and istio-telemetry in a crash loop.
The logs for Galley say:
fatal Invalid validationArgs: 1 error occurred:
* port number 443 must be in the range 1024..65535
What's a possible solution?
It seems that you need to perform a clean-up of the whole Istio installation.
Please try uninstalling Istio on GKE and start from scratch.
Uninstalling Istio on GKE
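Before re-installing, a rough clean-up based on the commands above might look like this (a sketch; the istio.io CRD filter is an assumption for the 1.x Helm charts, so review the list before deleting):
# delete everything that was applied from the rendered manifest
kubectl delete -f $HOME/istio.yaml
# Istio CRDs survive a namespace delete, so remove them explicitly
kubectl delete crd $(kubectl get crd | grep 'istio.io' | awk '{print $1}')
# drop the namespace and then re-install from scratch
kubectl delete namespace istio-system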
I'm running the latest Istio version on Minishift. I can access the product page on http://192.168.178.102:31380/productpage.
Kiali is showing the traffic from istio-ingressgateway to productpage.
Kiali Traffic pic
I expect to see some traffic from productpage to the other microservices, but it does not show up.
Do you have any idea why?
These are my installation steps:
Minishift:
minishift config set hyperv-virtual-switch "External VM Switch"
minishift config set memory 8GB
minishift config set image-caching true
minishift config set cpus 4
minishift addon enable anyuid
minishift addon apply istio
minishift addon enable istio
minishift start
Book-info:
kubectl create namespace book-info
oc login -u system:admin
kubectl config set-context --current --namespace=book-info
kubectl label namespace book-info istio-injection=enabled
kubectl apply -f samples\bookinfo\platform\kube\bookinfo.yaml
kubectl get services
kubectl apply -f samples\bookinfo\networking\bookinfo-gateway.yaml
kubectl apply -f samples\bookinfo\networking\destination-rule-all.yaml
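As a sanity check on top of these steps (not part of the add-on itself), you can confirm the sidecars were actually injected; each bookinfo pod should report 2/2 containers, since Kiali only sees service-to-service traffic that flows through the istio-proxy sidecars:
# every bookinfo pod should report 2/2 (application container + istio-proxy)
kubectl get pods -n book-info
# the productpage pod, for example, should list an istio-proxy container
kubectl get pod -n book-info -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}'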
Thanks, any feedback is greatly appreciated.
I am trying to follow the AWS instructions to create an ALB for EKS (Elastic Kubernetes Service in AWS).
The instructions are here: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I have problems at step 7 (install the controller manually). When I try to apply a YAML file in sub-step 7-b-c, I get an error:
Error from server (InternalError): error when creating "v2_0_0_full.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: x509: certificate is valid for ip-192-168-121-78.eu-central-1.compute.internal, not cert-manager-webhook.cert-manager.svc
Has anyone experienced similar kind of problem and what are the best ways to troubleshoot and solve the problem?
It seems that cert-manager doesn't run on Fargate as expected - #1606.
The first option, as a workaround, is to install the Helm chart, which doesn't have the cert-manager dependency; Helm will generate the self-signed certificate and secret resources itself.
Another option is to remove all the cert-manager resources from the YAML manifest and provide your own self-signed certificate, if you don't want Helm as a dependency.
Take a look: alb-cert-manager, alb-eks-cert-manager.
Useful article: aws-fargate.
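For the first option, a minimal sketch of the Helm-based install; the chart and value names follow the eks-charts repository, while the cluster name and service account below are placeholders you need to adjust:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# the service account is assumed to already exist with the IAM role from the AWS guide;
# on Fargate you may also need --set region=... and --set vpcId=...
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller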
For EKS with Fargate, the cert-manager-webhook server's port clashes with the kubelet on the Fargate MicroVM.
Ref: https://github.com/jetstack/cert-manager/issues/3237#issuecomment-827523656
To remedy this, when installing the chart, set the parameter webhook.securePort to a port that is not 10250 (e.g. 10260):
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.3.1 \
  --set webhook.securePort=10260 \
  --set installCRDs=true
Or you could edit the cert-manager-webhook Deployment and Service to use this new port if cert-manager is already deployed.
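If cert-manager is already deployed, roughly the same change can be made in place; the exact field locations below are assumptions and may differ slightly between chart versions:
# in the Deployment, change the webhook's listening port
# (look for --secure-port=10250 in the container args and set it to 10260)
kubectl -n cert-manager edit deployment cert-manager-webhook
# in the Service, point targetPort at the new port (10250 -> 10260)
kubectl -n cert-manager edit service cert-manager-webhook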
As you know, installing Istio creates a Kubernetes load balancer with a public IP and uses that public IP as the External IP of the istio-ingressgateway LoadBalancer service. As that IP is not static, I created a static public IP in Azure in the same resource group as AKS; I found the resource-group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the installation files with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
I tried to re-install Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However, it didn't work; I still got a dynamic IP. So I tried to set my static public IP in the files
istio-demo.yaml and istio-demo-auth.yaml by adding loadBalancerIP under istio-ingressgateway:
spec:
type: LoadBalancer
loadBalancerIP: my-staticPublicIP
Also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
I then re-installed Istio using the helm command mentioned above. This time it added mystaticPublicIP as one of the External IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
Does anyone know how to make this work?
I can successfully assign the static public IP to the Istio gateway service with the following command:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
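To confirm it worked, check the external IP on the gateway service; it should now show the static address instead of a dynamically allocated one:
# EXTERNAL-IP should match my-static-public-ip
kubectl get svc istio-ingressgateway -n istio-system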
Trying to teach myself how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx Docker Hub page, I see that the container is listening on port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
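To make the mapping explicit, the Service that expose creates looks roughly like this sketch: port is what the load balancer listens on, targetPort is the port the container actually serves, so both need to line up with nginx's port 80.
apiVersion: v1
kind: Service
metadata:
  name: nginxpubic
spec:
  type: LoadBalancer
  selector:
    app: my-nginx-deployment   # label set by kubectl create deployment
  ports:
    - port: 80        # port the ELB/service listens on
      targetPort: 80  # port the nginx container listens on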
Also, make sure the service type LoadBalancer is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies
I am unable to install kubectl on an AWS EC2 instance (Amazon AMI and Ubuntu).
After installing kops and kubectl, I tried to check the version of kubectl, but it throws this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have already opened the ports, but I'm still getting the same error.
I have also installed Minikube, but I am still facing the same issue.
This is because your ~/.kube/config file is not correct. Configure it correctly so that you can connect to your cluster using kubectl.
Kubectl is the tool to control your cluster. It can be installed by Kops, for example.
If you already have a cluster and want to manage it from a host you did not use for the initialization, you should export your kubeconfig with the kops export kubecfg command on the node where you have the configured installation of kops.
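For example (the cluster name and state-store bucket below are placeholders):
# rebuild ~/.kube/config for an existing kops cluster
kops export kubecfg --name my-cluster.example.com --state s3://my-kops-state-store
# kubectl should now reach the API server instead of localhost:8080
kubectl version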
If not, initialize the cluster first, and kops will set up the kubectl configuration for you automatically.
If you are running your own cluster, you should try this after kubeadm init, which advises you to run:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
~/.kube/config is your missing file.