Kubernetes dashboard in AWS EC2 instance?

I have started two Ubuntu 16 EC2 instances (one for the master and the other for the worker). Everything is working OK.
I need to set up the dashboard to view it on my machine. I have copied admin.conf and run the command below in my machine's terminal:
kubectl --kubeconfig ./admin.conf proxy --address='0.0.0.0' --port=8002 --accept-hosts='.*'
Everything is fine.
But in the browser, when I use the link below
http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I am getting Error: 'dial tcp 192.168.1.23:8443: i/o timeout'
Trying to reach: 'https://192.168.1.23:8443/'
I have allowed all traffic in the AWS security group. What am I missing? Please point me to a solution.

If you only want to reach the dashboard, it is pretty easy: get the IP address of your EC2 instance and the port on which it is serving the dashboard (kubectl get services --all-namespaces), then reach it using:
First:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
And in your browser:
http://<IP>:<PORT>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Note that this is a possible security vulnerability, as you are accepting all traffic (AWS firewall rules) and also all connections for your kubectl proxy (--address 0.0.0.0 --accept-hosts '.*'), so please narrow it down or use a different approach. If you have more questions feel free to ask.
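One narrower alternative, as a sketch, is to keep the proxy on its loopback default on the instance and tunnel to it over SSH, so nothing extra needs to be opened in the AWS security group:

# on the EC2 instance: kubectl proxy without --address/--accept-hosts stays on 127.0.0.1
kubectl proxy --port=8002
# on your machine: forward local port 8002 to the instance, then browse
# http://localhost:8002/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
ssh -L 8002:127.0.0.1:8002 ubuntu@<EC2-public-IP>

kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443 (assuming that Service name, as in the question) is another option that avoids exposing the proxy at all.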

Have you tried putting http:// in front of localhost?
I don't have enough rep to comment, else I would.

To bypass the dashboard token login, you have to execute the code below:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
After this you can use Skip at the login screen without providing a token. But this grants the dashboard's service account cluster-admin rights, which is a security issue.
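If you would rather keep the token login instead of skipping it, a common alternative (a sketch; the dashboard-admin name is arbitrary, and the secret lookup shown applies to clusters older than Kubernetes 1.24) is a dedicated admin ServiceAccount whose token you paste at the login screen:

kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# print the token to paste into the dashboard login form
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')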

Related

403 Forbidden on ESPv2, GKE AutoPilot, WIF

I'm following the Getting started with Endpoints for GKE with ESPv2. I'm using Workload Identity Federation and Autopilot on the GKE cluster.
I've been running into the error:
F0110 03:46:24.304229 8 server.go:54] fail to initialize config manager: http call to GET https://servicemanagement.googleapis.com/v1/services/name:bookstore.endpoints.<project>.cloud.goog/rollouts?filter=status=SUCCESS returns not 200 OK: 403 Forbidden
Which ultimately leads to a transport failure error and shut down of the Pod.
My first step was to investigate permission issues, but I could really use some outside perspective, as I've been going around in circles on this.
Here's my config:
>> gcloud container clusters describe $GKE_CLUSTER_NAME \
--zone=$GKE_CLUSTER_ZONE \
--format='value[delimiter="\n"](nodePools[].config.oauthScopes)'
['https://www.googleapis.com/auth/devstorage.read_only',
'https://www.googleapis.com/auth/logging.write',
'https://www.googleapis.com/auth/monitoring',
'https://www.googleapis.com/auth/service.management.readonly',
'https://www.googleapis.com/auth/servicecontrol',
'https://www.googleapis.com/auth/trace.append']
>> gcloud container clusters describe $GKE_CLUSTER_NAME \
--zone=$GKE_CLUSTER_ZONE \
--format='value[delimiter="\n"](nodePools[].config.serviceAccount)'
default
default
Service-Account-Name: test-espv2
Roles:
- Cloud Trace Agent
- Owner
- Service Account Token Creator
- Service Account User
- Service Controller
- Workload Identity User
I've associated the WIF service account with the cluster with the following YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    iam.gke.io/gcp-service-account: test-espv2@<project>.iam.gserviceaccount.com
  name: test-espv2
  namespace: eventing
And then I've associated the pod with the test-espv2 service account:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: esp-grpc-bookstore
  namespace: eventing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: esp-grpc-bookstore
  template:
    metadata:
      labels:
        app: esp-grpc-bookstore
    spec:
      serviceAccountName: test-espv2
Since the gcr.io/endpoints-release/endpoints-runtime:2 image is limited,
I created a test container and deployed it into the same eventing namespace.
Within the container, I'm able to retrieve the endpoint service config with the following command:
curl --fail -o "service.json" -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://servicemanagement.googleapis.com/v1/services/${SERVICE}/configs/${CONFIG_ID}?view=FULL"
And also within the container, I'm running as the impersonated service account, tested with:
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/
Are there any other tests I can run to help me debug this issue?
Thanks in advance,
Around debugging - I've often found my mistakes by following one of the other methods/programming languages in the Google tutorials.
Have you looked at the OpenAPI notes and tried to follow along?
I've finally figured out the issue. It was in two parts.
- Redeployment of the app, paying special attention to and verifying the kubectl annotate serviceaccount commands
- add-iam-policy-binding for both serviceController and cloudtrace.agent
- omitting nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true", which Autopilot does not need
Doing this enabled a successful kube deployment, as displayed by the logs; a sketch of the commands involved follows.
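A sketch of the commands involved, with the project, namespace, and account names from the question as placeholders (not the exact commands used):

# allow the Kubernetes ServiceAccount to impersonate the Google service account (Workload Identity)
gcloud iam service-accounts add-iam-policy-binding \
  test-espv2@<project>.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<project>.svc.id.goog[eventing/test-espv2]"
kubectl annotate serviceaccount test-espv2 --namespace eventing \
  iam.gke.io/gcp-service-account=test-espv2@<project>.iam.gserviceaccount.com
# grant the roles ESPv2 needs to read the service config and write traces
gcloud projects add-iam-policy-binding <project> \
  --member "serviceAccount:test-espv2@<project>.iam.gserviceaccount.com" \
  --role roles/servicemanagement.serviceController
gcloud projects add-iam-policy-binding <project> \
  --member "serviceAccount:test-espv2@<project>.iam.gserviceaccount.com" \
  --role roles/cloudtrace.agent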
The next error I had was:
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
This was fixed by turning my attention back to my Kube cluster.
Looking through the events on my ingress service: since I was in a shared VPC and my security policies only allowed firewall management from the host project, the deployment was failing to update the firewall rules.
Manually provisioning them, as shown here :
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#manually_provision_firewall_rules_from_the_host_project
solved my issues.

Failed to create an ALB for my EKS cluster?

I deployed an EKS cluster in AWS. I'd like to create an ALB in front of my cluster. I used the command below to create a service account:
eksctl create iamserviceaccount --namespace default --name alb-ingress-controller --cluster $componentName --attach-policy-arn $servicePolicyArn --approve --override-existing-serviceaccounts
Below is the ingress I created in k8s:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: es-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: es-entrypoint
          servicePort: 80
After applying the config, I got an empty address when I ran:
$ kubectl get ingress/es-ingress
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
es-ingress   <none>   *                 80      2d5h
I am able to see the service account:
$ kubectlaws get serviceaccount alb-ingress-controller
NAME                     SECRETS   AGE
alb-ingress-controller   1         31h
What did I do wrong?
First of all, it would be good to know how you set up the ALB.
A Service Account is not required; instead, you need an Ingress Controller. Without one, your Ingress resource is useless. There are a lot of different Ingress Controllers; one of the easiest is ingress-nginx. But just check this awesome comparison: https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238
So, the easiest way is:
1. Set up an Ingress Controller
2. Configure the Ingress Controller as a NodePort Service with a port like 30080 (see the sketch below)
3. Set up your ALB and configure the AWS Target Group to use NodePort 30080
4. Set up the Ingress resource like above (and you don't need the wildcard in the path)
Now all the traffic from the ALB will be redirected to your NodePort (the Ingress Controller). The Ingress resource is responsible for the Nginx configuration inside the Ingress Controller. That's it!
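A minimal sketch of step 2, assuming ingress-nginx in its usual namespace (adjust the selector to whatever labels your controller pods actually carry):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080

The ALB's target group then points at port 30080 on the worker nodes.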
The whole traffic flow could look like this:
If this doesn't work:
- Check the Ingress Controller logs: kubectl logs --follow --namespace YOURNAMESPACE NAMEOFTHEINGRESSCONTROLLER
- If you don't get any logs there, just enable the AWS ALB logs and check them
I hope that was helpful; if not, please provide some more detailed information about your infrastructure.

404 not found for GKE Ingress

I am trying out the Ingress feature in a GKE cluster. Following are the steps I followed:
1. Created a deployment with the below command
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
2. Exposed the deployment of type NodePort
kubectl expose deployment hello --port=8080 --type=NodePort
3. My ingress manifest is as follows
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: 8080
$ kubectl get services
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello   NodePort   10.0.41.132   <none>        8080:30820/TCP   113m
$ kubectl get ingress
NAME            HOSTS   ADDRESS    PORTS   AGE
basic-ingress   *       35.X.X.X   80      26m
But when I access the external IP using curl, it throws 404 not found.
The error below can be seen in the GKE console.
I think I am missing something in the ingress definition. Please guide me to fix this.
The image definition has been taken from this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
I have tried to create the same ingress from scratch (new cluster, new service, new ingress), and I was able to create it and perform a curl successfully. These were the steps:
1.- Create a cluster (It does not matter the details, just create it as you want)
2.- Connect to the cluster and install kubectl -> sudo apt-get install kubectl
3.- kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
4.- kubectl expose deployment hello --port=8080 --type=NodePort
5.- Create the ingress as follows (Without annotations), as per Creating an Ingress resource
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: hello
    servicePort: 8080
6.- Review your ingress kubectl get ingress basic-ingress
#cloudshell:$ kubectl get ingress basic-ingress
NAME            HOSTS   ADDRESS          PORTS   AGE
basic-ingress   *       130.211.xx.xxx   80      5m46s
7.- And now it is working when I perform the curl:
#cloudshell:$ curl http://130.211.xx.xxx
Hello, world!
Version: 2.0.0
Hostname: hello-86dbf5b7c6-f7qgl
You were using ingress annotations, which is another way to create ingress services, but a little bit more advanced. My suggestion is to create it as simply as possible first.
Please try it this way and let me know how it goes.
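If the curl still returns 404 after that, one more thing worth checking (a sketch, using the resource names above) is the backend health reported by the GCE ingress controller, since the load balancer serves errors until its health checks pass:

kubectl describe ingress basic-ingress
# look for the backends annotation in the output, e.g.
#   ingress.kubernetes.io/backends: {"k8s-be-30820--<hash>":"HEALTHY"}
# an UNHEALTHY backend usually means the app does not answer 200 on / or a health-check firewall rule is missing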
The same YAML definitions were failing for me in a Shared VPC. This got resolved after adding the below firewall rule:
gcloud compute firewall-rules create k8s-fw-l7--60cada75751e6d79 --network <SharedVPC> --description "GCE L7 firewall rule" --allow tcp:30000-32767 --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 --target-tags gke-privatetestgkecluster-cf899a18-node --project <Project>
https://cloud.google.com/load-balancing/docs/health-checks

AWS EKS Ingress - No Address

I have a Kubernetes application using AWS EKS, with the below details:
Cluster:
+ Kubernetes version: 1.15
+ Platform version: eks.1
Node Groups:
+ Instance Type: t3.medium
+ 2(Minimum) - 2(Maximum) - 2(Desired) configuration
[Pods]
+ 2 active pods
[Service]
+ Configured Type: ClusterIP
+ metadata.name: k8s-eks-api-service
[rbac-role.yaml]
https://pastebin.com/Ksapy7vK
[alb-ingress-controller.yaml]
https://pastebin.com/95CwMtg0
[ingress.yaml]
https://pastebin.com/S3gbEzez
When I tried to pull the ingress details, below are the values (no ADDRESS):
Host: *
ADDRESS:
My goal is to know why the address has no value. I expect to have a private or public address to be used by other services in my application.
The solution that fitted my case is adding ingressClassName in ingress.yaml or configuring a default ingressClass.
Add ingressClassName in ingress.yaml:
#ingress.yaml
metadata:
  name: ingress-nginx
  ...
spec:
  ingressClassName: nginx   <-- add this
  rules:
  ...
or
edit the ingressClass YAML:
$ kubectl edit ingressclass <ingressClass Name> -n <ingressClass namespace>
#ingressClass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   <-- add this
  ....
In order for your Kubernetes cluster to get an address, you will need to be able to manage Route 53 from within the cluster; for this task I would recommend using ExternalDNS.
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
source: ExternalDNS
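A rough sketch of what that looks like (the domain filter and owner ID are placeholders, and the IAM permissions ExternalDNS needs for Route 53 are not shown) is the following set of container args in the ExternalDNS Deployment:

args:
- --source=service
- --source=ingress
- --provider=aws
- --domain-filter=example.com      # only manage records in this zone
- --policy=upsert-only             # never delete records
- --registry=txt
- --txt-owner-id=my-eks-cluster    # marks the records this cluster owns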
This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this issue is to check the logs of the ingress controller. You can do this as follows:
Get the ingress controller pod name using: kubectl get po -n kube-system
Check the logs for that pod using: kubectl logs <po_name> -n kube-system
This will point you to the exact reason why you are not seeing the address.

Kiali showing unknown traffic when sending through Ambassador

I have installed a service mesh (Istio) and am working with Ambassador to route traffic to our application. Whenever I send traffic through the Istio ingress it works fine, but when I send it through Ambassador, Kiali shows it as unknown (as you can see in the attached image). This could be related to the fact that Ambassador does not use an Istio sidecar.
Code used to deploy the Ambassador service:
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador
---
Is there anything I can add here to make it possible?
Thanks
Yes, it is possible, and here is a detailed guide for this from the Ambassador documentation:
Getting Ambassador Working With Istio
Getting Ambassador working with Istio is straightforward. In this example, we'll use the bookinfo sample application from Istio.
Install Istio on Kubernetes, following the default instructions (without using mutual TLS auth between sidecars)
Next, install the Bookinfo sample application, following the instructions.
Verify that the sample application is working as expected.
By default, the Bookinfo application uses the Istio ingress. To use Ambassador, we need to:
Install Ambassador.
First you will need to deploy the Ambassador ambassador-admin service to your cluster:
It's simplest to use the YAML files we have online for this (though of course you can download them and use them locally if you prefer!).
First, you need to check if Kubernetes has RBAC enabled:
kubectl cluster-info dump --namespace kube-system | grep authorization-mode
If you see something like --authorization-mode=Node,RBAC in the output, then RBAC is enabled.
If RBAC is enabled, you'll need to use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
Without RBAC, you can use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
(Note that if you are planning to use mutual TLS for communication between Ambassador and Istio/services in the future, then the order in which you deploy the ambassador-admin service and the ambassador LoadBalancer service below may need to be swapped)
Next you will deploy an ambassador service that acts as a point of ingress into the cluster via the LoadBalancer type. Create the following YAML and put it in a file called ambassador-service.yaml.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  host_rewrite: httpbin.org
Then, apply it to Kubernetes with kubectl:
kubectl apply -f ambassador-service.yaml
The YAML above does several things:
It creates a Kubernetes service for Ambassador, of type LoadBalancer. Note that if you're not deploying in an environment where LoadBalancer is a supported type (i.e. MiniKube), you'll need to change this to a different type of service, e.g., NodePort.
It creates a test route that will route traffic from /httpbin/ to the public httpbin.org HTTP Request and Response service (which provides useful endpoint that can be used for diagnostic purposes). In Ambassador, Kubernetes annotations (as shown above) are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in this more advanced example.
You can see if the two Ambassador services are running correctly (and also obtain the LoadBalancer IP address when this is assigned after a few minutes) by executing the following commands:
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
ambassador         LoadBalancer   10.63.247.1     35.224.41.XX   8080:32171/TCP   11m
ambassador-admin   NodePort       10.63.250.17    <none>         8877:32107/TCP   12m
details            ClusterIP      10.63.241.224   <none>         9080/TCP         16m
kubernetes         ClusterIP      10.63.240.1     <none>         443/TCP          24m
productpage        ClusterIP      10.63.248.184   <none>         9080/TCP         16m
ratings            ClusterIP      10.63.255.72    <none>         9080/TCP         16m
reviews            ClusterIP      10.63.252.192   <none>         9080/TCP         16m
$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
ambassador-2680035017-092rk      2/2     Running   0          13m
ambassador-2680035017-9mr97      2/2     Running   0          13m
ambassador-2680035017-thcpr      2/2     Running   0          13m
details-v1-3842766915-3bjwx      2/2     Running   0          17m
productpage-v1-449428215-dwf44   2/2     Running   0          16m
ratings-v1-555398331-80zts       2/2     Running   0          17m
reviews-v1-217127373-s3d91       2/2     Running   0          17m
reviews-v2-2104781143-2nxqf      2/2     Running   0          16m
reviews-v3-3240307257-xl1l6      2/2     Running   0          16m
Above we see that external IP assigned to our LoadBalancer is 35.224.41.XX (XX is used to mask the actual value), and that all ambassador pods are running (Ambassador relies on Kubernetes to provide high availability, and so there should be two small pods running on each node within the cluster).
You can test if Ambassador has been installed correctly by using the test route to httpbin.org to get the external cluster Origin IP from which the request was made:
$ curl 35.224.41.XX/httpbin/ip
{
  "origin": "35.192.109.XX"
}
If you're seeing a similar response, then everything is working great!
(Bonus: If you want to use a little bit of awk magic to export the LoadBalancer IP to a variable AMBASSADOR_IP, then you can type export AMBASSADOR_IP=$(kubectl get services ambassador | tail -1 | awk '{ print $4 }') and use curl $AMBASSADOR_IP/httpbin/ip.)
Now you are going to modify the bookinfo demo bookinfo.yaml manifest to include the necessary Ambassador annotations. See below.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: productpage:9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
The annotation above implements an Ambassador mapping from the '/productpage/' URI to the Kubernetes productpage service running on port 9080 ('productpage:9080'). The 'prefix' mapping URI is taken from the context of the root of your Ambassador service that is acting as the ingress point (exposed externally via port 80 because it is a LoadBalancer) e.g. '35.224.41.XX/productpage/'.
You can now apply this manifest from the root of the Istio GitHub repo on your local file system (taking care to wrap the apply with istioctl kube-inject):
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
Optionally, delete the Ingress controller from the bookinfo.yaml manifest by typing kubectl delete ingress gateway.
Test Ambassador by going to the IP of the Ambassador LoadBalancer you configured above e.g. 35.192.109.XX/productpage/. You can see the actual IP address again for Ambassador by typing kubectl get services ambassador.
Also, according to the documentation, there is no need for the Ambassador pods to be injected.
Yes, I have already configured all these things; that's why I mentioned it in the attached image, which is taken from the Kiali dashboard. The output I shared is for the bookinfo application. I have deployed my own application and it is also working fine.
But I want to sort out this unknown thing.
I am using an AWS EKS cluster.
A note about Ambassador:
Ambassador should not have the Istio sidecar for two reasons. First, it cannot since running the two separate Envoy instances will result in a conflict over their shared memory segment. The second is Ambassador should not be in your mesh anyway. The mesh is great for handling traffic routing from service to service, but since Ambassador is your ingress point, it should be solely in charge of deciding which service to route to and how to do it. Having both Ambassador and Istio try to set routing rules would be a headache and wouldn't make much sense.
All the traffic coming from a source that is not part of the service mesh is going to be shown as unknown.
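A quick way to confirm this (a sketch, assuming the Ambassador pods carry the service: ambassador label used by the Service above) is to list each pod's containers and check that no istio-proxy sidecar is present:

kubectl get pods -l service=ambassador \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
# pods without an istio-proxy container are outside the mesh, so Kiali reports their traffic as "unknown"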
See what Kiali says about the unknowns:
https://kiali.io/faq/graph/#many-unknown