kubernetes-dashboard: I get a 404 error when I try to access it externally through Istio

I'm trying to run the dashboard on Kubernetes v1.21 on AWS EKS.
Kubernetes-dashboard works well using kubectl proxy, but through Istio I get a 404 error at:
https://jazz.lnk/kubedash/dashboard/
Perhaps the URL path ../dashboard/.. isn't correct? I may also have a mistake in the configuration.
I run the dashboard with this configuration: https://gist.github.com/rmalenko/fc7c0f515cafa578f14dcafee1654f9c
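If the 404 comes from the Istio ingress rather than from the dashboard itself, the usual suspect is the VirtualService path match. A minimal sketch of routing that would match the URL above (the gateway name, namespace, and dashboard Service port are assumptions, not taken from the gist):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  hosts:
    - jazz.lnk
  gateways:
    - istio-system/main-gateway        # assumed gateway name
  http:
    - match:
        - uri:
            prefix: /kubedash/dashboard/
      rewrite:
        uri: /                         # strip the prefix the dashboard doesn't know about
      route:
        - destination:
            host: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
            port:
              number: 443              # assumed Service port
```

Without a prefix match and rewrite like this, a request for /kubedash/dashboard/ reaches the dashboard with a path it doesn't serve, which produces exactly a 404.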

Related

AWS EKS cluster with Istio sidecar auto inject problem and pod ext. db connection issue

I built a new cluster with Terraform for AWS EKS: a single node group with a single node.
This cluster is running 1.22, and I can't seem to get anything to work correctly.
Istio installs fine; I have tried versions 1.12.1, 1.13.2, 1.13.3, and 1.13.4, and all have the same issue with auto-injecting the sidecar:
Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded
But there are also other issues with the cluster, even without Istio. My application is pulled and the pod builds fine, but it cannot connect to the database. This is a DB external to the cluster; no other build (running on Azure) has any issues connecting to it.
I am not sure whether the application failing to connect to the external DB is related, but could the sidecar issue have something to do with BoundServiceAccountTokenVolume?
There is a warning about it being enabled on all clusters from 1.21, which is a little odd, as I have other applications with Istio running on another 1.21 cluster on AWS EKS!
I also have this application running with Istio without any issues on Azure on 1.22.
I seem to have fixed it :)
It turned out to be a port issue with the security groups; I was letting Terraform build its own group.
When I opened all the ports in the inbound section, it worked.
I then closed them all again and opened only 80 and 443, which again stopped Istio from auto-injecting its sidecar.
My app was trying to talk to Istio on port 15017, so I opened just that port alongside 80 and 443.
Once that port was open, my app started to work and got its sidecar from Istio without any issue.
So it seems the security group was blocking pod-to-pod communication... unless I have completely messed up my Terraform build in some way.
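For anyone building the security group in Terraform, a sketch of the extra rule described above might look like this (the resource names for the node group and cluster are placeholders for your own configuration):

```hcl
# Allow the EKS control plane / other nodes to reach istiod's
# sidecar-injector webhook on port 15017.
resource "aws_security_group_rule" "istiod_webhook" {
  type                     = "ingress"
  from_port                = 15017
  to_port                  = 15017
  protocol                 = "tcp"
  security_group_id        = aws_security_group.node_group.id   # placeholder
  source_security_group_id = aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
  description              = "istiod sidecar-injector webhook"
}
```

This matches the symptom: the API server calls the injection webhook on istiod, which runs on the nodes, so with 15017 blocked the webhook call times out with "context deadline exceeded".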

AWS EKS - non-Istio-mesh pod-to-pod connection issue after installing Istio 1.13.0

In Kubernetes (AWS EKS) v1.20, I have a default namespace with two pods, connected with a Service of type LoadBalancer (CLB). Requesting a URI on the CLB worked fine and routed to either of the pods as required.
After installing Istio 1.13.0 with the istio-injection=enabled label set on a different namespace, communication between the non-Istio pods (no sidecar injection) doesn't seem to work.
What I mean by "doesn't work" (the three scenarios below always worked without Istio):
1. curl requests sent to https://default-nspods/apicall always worked with the non-Istio pods, i.e., the CLB always forwarded requests to the two pods as required.
2. A curl request from inside pod1 to pod2's IP worked, and vice versa.
3. A curl request to pod2's URI from pod1's node worked, and vice versa.
Post-installation, 2 and 3 don't work. The CLB also has trouble reaching the NodePort of the pods at times.
I've checked istioctl proxy-config endpoints and the deployments where sidecar injection is enabled; the output doesn't show any other non-mesh service/pod details.
Istio Version: 1.13.0
Ingress Gateway: Enabled (Loadbalancer mode)
Egress Gateway: Disabled
No addons like Kiali, Prometheus
Istio Operator-based installation with modified YAML values.
Single-cluster installation, i.e., ISTIO_MESH_ROUTER_MODE='standard'
Istio pods, Envoy sidecars, and proxy-config don't show any errors.
I'm kind of stuck; please let me know if I need to check kube-proxy, iptables, or somewhere else.
I've uninstalled Istio using "istioctl x uninstall --purge" and re-installed, but the non-mesh pods now seem broken whether Istio is installed or not.
Istio pods and pods in the Istio-injection namespace don't have issues.
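As a starting point for the kube-proxy/iptables check mentioned above, these are the kinds of commands I would run (a sketch; service and namespace names are placeholders for your own resources):

```
# Is kube-proxy healthy on every node?
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50

# On the node itself (via SSH or a privileged debug pod):
# did the purge leave Istio iptables rules behind?
sudo iptables-save | grep -i istio

# Does the non-mesh Service still have healthy endpoints?
kubectl get endpoints my-service -n default   # "my-service" is a placeholder
```

Leftover ISTIO_* iptables chains after an uninstall would explain non-mesh pods staying broken with Istio removed.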

AWS Cognito failing to authenticate after adding istio sidecar to pods

I added Istio to my EKS cluster. Sidecars are getting added to every pod, and my Kiali dashboard is also up.
But after that I am not able to authenticate my APIs. I checked all the logs; it turned out my pods are not able to connect to the Cognito server. I am getting the following error:
Unhandled rejection TypeError: Unable to generate certificate due to
RequestError: Error: connect ECONNREFUSED 13.235.142.215:443
I went inside my pod to check whether it could connect to any public DNS name; I was able to ping google.com but not aws.amazon.com.
To cross-verify, I removed Istio from my cluster and it started working.
I found a GitHub issue somewhat matching mine, but it was closed without a solution (https://github.com/istio/istio/issues/10848).
Can anyone help me with this issue?
Thanks
Found the issue: my Istio is trying to connect to AWS Cognito over SSL and it doesn't have the certificates. Putting the certificates in Istio solved this.
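For reference, one common way to let sidecar-injected pods reach an external TLS endpoint such as Cognito is a ServiceEntry; a minimal sketch (the region in the hostname is an assumption; adjust to yours):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: aws-cognito
spec:
  hosts:
    - cognito-idp.ap-south-1.amazonaws.com   # assumed region
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
    - number: 443
      name: tls
      protocol: TLS
```

With protocol TLS, the sidecar passes the TLS connection through to Cognito instead of intercepting it, which avoids the certificate problem described above.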

Django with postgresql deployment on aws elastic beanstalk

I've been trying to deploy a Django application with a PostgreSQL DB on AWS Elastic Beanstalk, and I ran into many issues that I researched and solved. Now the application uploads fine; however, the environment is still not green, and I constantly receive 502 Bad Gateway from nginx. I've checked the nginx logs, which say 111: Connection refused, etc. I've tried changing the port from 8000 to 8001, but it didn't work. Could somebody please guide me on how to deploy my application successfully?
Here are some of the common error logfiles; try checking them:
--- Common debug errors ---
$ eb logs
-- Files (after eb ssh):
$ eb ssh
1. sudo nano /var/log/cfn-init.log
2. sudo nano /var/log/cfn-init-cmd.log (see command output and errors from the config)
And are you using AWS RDS for PostgreSQL?
There is a detailed explanation and some common error fixes discussed in this blog.
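One thing worth checking: on the Amazon Linux 2 Python platform, nginx proxies by default to 127.0.0.1:8000, so a 502 with "111: Connection refused" usually means nothing is listening there. A minimal Procfile sketch (assuming gunicorn is installed and your project module is named mysite; both are placeholders):

```
web: gunicorn --bind :8000 --workers 2 mysite.wsgi
```

If the app binds to any other port without also reconfiguring the proxy, nginx keeps returning 502, which matches the symptom above.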

kubectl error while querying URL of service in minikube

I have been trying to run the echo server program (hello world) as part of learning minikube with kubectl.
I was able to run and expose the service using the commands below:
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
However, while trying to get the URL of the above service with minikube service hello-minikube --url, I got the error below:
Error: unknown command "service" for "kubectl"
Did anyone face a similar issue?
I have been following this document: https://github.com/kubernetes/minikube
I also searched for a solution and found a workaround for the time being with the following commands:
minikube ip
gives me the VM IP
kubectl get services
gives the port of the hello-minikube service. So now I'm able to hit it in the browser as http://192.168.99.100:32657/
Note: Your IP address may be different.
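Putting the two values together, the URL can be assembled like this (the IP and port below are the example values from above; yours will differ):

```shell
# Hypothetical values: in practice, NODE_IP comes from `minikube ip`
# and NODE_PORT from `kubectl get services`.
NODE_IP=192.168.99.100
NODE_PORT=32657
URL="http://${NODE_IP}:${NODE_PORT}/"
echo "$URL"   # prints http://192.168.99.100:32657/
```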
Approach 2:
Found an easy option that opens the service in the browser itself:
minikube service hello-minikube
Hope this helps if someone faces a similar issue.
What command are you running? You should be running
$ minikube service hello-minikube --url
directly, without the kubectl prefix.
On Windows, use the command below:
.\minikube.exe service hello-minikube --url