Edit applied resource configuration with kubectl apply -k

I'm applying the aws-efs-csi driver to a Kubernetes cluster like this:
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.0"
I need to edit the applied configuration to add credentials for pulling Docker images, but I couldn't find a way to do that via kubectl edit ...
This is the pod in the kube-system namespace:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
...
kube-system efs-csi-node-xxssqr 3/3 Running 0 69d
...

It's a DaemonSet, so you can edit it directly:
kubectl -n kube-system edit ds/efs-csi-node
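If the goal is to add credentials for pulling images, one approach (a sketch, not part of the original answer; regcred and the registry details are placeholders) is to create a docker-registry secret and reference it from the DaemonSet's pod spec:
kubectl -n kube-system create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<user> \
  --docker-password=<password>
Then, inside the DaemonSet you are editing, add under spec.template.spec:
      imagePullSecrets:
      - name: regcred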

Related

aws-node pod is missing in kube-system namespace

I'm deploying an EKS cluster and configuring managed node groups so that we have master and worker nodes.
I'm following this doc:
https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
While running this command:
kubectl get pods -n kube-system -l k8s-app=aws-node
I don't see any pod with that label, and I don't know why.
Is this due to missing configuration, or did I miss something while deploying the EKS cluster?
Please suggest.
UPDATE 1
kubectl describe daemonset aws-node -n kube-system
output
Name:           aws-node
Selector:       k8s-app=aws-node
Node-Selector:  <none>
Labels:         app.kubernetes.io/instance=aws-vpc-cni
                app.kubernetes.io/name=aws-node
                app.kubernetes.io/version=v1.11.4
                k8s-app=aws-node
Annotations:    deprecated.daemonset.template.generation: 2
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-vpc-cni
                    app.kubernetes.io/name=aws-node
                    k8s-app=aws-node
  Service Account:  aws-node
The kubectl get nodes command says No resources found.
No pods will be running if you don't have any worker nodes. The easiest way to add worker nodes is in the AWS console: go to Amazon Elastic Kubernetes Service, click on your cluster, go to the "Compute" tab, select the node group, click "Edit" and change "Desired size" to 1 or more.
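The same change can be made from the CLI (a sketch; the cluster and node group names are placeholders, adjust the sizes to your needs):
aws eks update-nodegroup-config \
  --cluster-name <your-cluster> \
  --nodegroup-name <your-nodegroup> \
  --scaling-config minSize=1,maxSize=3,desiredSize=2
# then wait for the nodes to register
kubectl get nodes -w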

Unable to view Kiali dashboard

I installed Istio version 1.6.9 with the steps below.
Install Istio version 1.6.9:
wget https://github.com/istio/istio/releases/download/1.6.9/istio-1.6.9-linux-amd64.tar.gz
tar -xzvf istio-1.6.9-linux-amd64.tar.gz
cd istio-1.6.9
cd bin/
sudo mv istioctl /usr/local/bin/
istioctl --version
istioctl install --set profile=demo
I want to access the Kiali dashboard but I am unable to figure out how to access it.
I can see Kiali is running in a pod:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5dc4b4676c-wcb59 1/1 Running 0 32h
istio-egressgateway-5889bb8976-stlqd 1/1 Running 0 32h
istio-ingressgateway-699d97bdbf-w6x46 1/1 Running 0 32h
istio-tracing-8584b4d7f9-p66wh 1/1 Running 0 32h
istiod-86d4497c9-xv2km 1/1 Running 0 32h
kiali-6f457f5964-6sssn 1/1 Running 0 32h
prometheus-5d64cf8b79-2kdww 2/2 Running 0 32h
I am able to see the kiali service as well:
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.100.101.71 <none> 3000/TCP 32h
istio-egressgateway ClusterIP 10.100.34.75 <none> 80/TCP,443/TCP,15443/TCP 32h
istio-ingressgateway LoadBalancer 10.100.84.203 a736b038af6b5478087f0682ddb4dbbb-1317589033.ap-southeast-2.elb.amazonaws.com 15021:31918/TCP,80:32736/TCP,443:30611/TCP,31400:30637/TCP,15443:31579/TCP 32h
istiod ClusterIP 10.100.111.159 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 32h
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 32h
jaeger-collector ClusterIP 10.100.84.202 <none> 14267/TCP,14268/TCP,14250/TCP 32h
jaeger-collector-headless ClusterIP None <none> 14250/TCP 32h
jaeger-query ClusterIP 10.100.165.216 <none> 16686/TCP 32h
kiali ClusterIP 10.100.159.127 <none> 20001/TCP 32h
prometheus ClusterIP 10.100.113.255 <none> 9090/TCP 32h
tracing ClusterIP 10.100.77.39 <none> 80/TCP 32h
zipkin ClusterIP 10.100.247.201 <none> 9411/TCP
I can also see that the Kiali secret is deployed, as below:
kubectl get secrets
NAME TYPE DATA AGE
default-token-ghz6r kubernetes.io/service-account-token 3 8d
sh.helm.release.v1.aws-efs-csi-driver.v1 helm.sh/release.v1 1 6d
[centos@ip-10-0-0-61 ~]$ kubectl get secrets -n istio-system
NAME TYPE DATA AGE
default-token-z6t2v kubernetes.io/service-account-token 3 32h
istio-ca-secret istio.io/ca-root 5 32h
istio-egressgateway-service-account-token-c8hfp kubernetes.io/service-account-token 3 32h
istio-ingressgateway-service-account-token-fx65w kubernetes.io/service-account-token 3 32h
istio-reader-service-account-token-hxsll kubernetes.io/service-account-token 3 32h
istiod-service-account-token-zmtsv kubernetes.io/service-account-token 3 32h
kiali Opaque 2 32h
kiali-service-account-token-82gk7 kubernetes.io/service-account-token 3 32h
prometheus-token-vs4f6 kubernetes.io/service-account-token 3 32h
I ran all of the above commands on my Linux bastion host. I am hoping that if I open port 20001 on my Linux bastion as well as in the security group, I should be able to access it with admin/admin credentials, like below:
http://10.100.159.127:20001/
My second question is: is Istio, the software, running on my Linux bastion server or on my EKS cluster?
My feeling is that it is running on the local bastion server, but since we used the commands below:
kubectl label ns default istio-injection=enabled
kubectl get ns
kubectl label ns jenkins istio-injection=enabled
kubectl label ns spinnaker istio-injection=enabled
any pods running in these namespaces will have an Envoy proxy sidecar injected automatically, correct?
P.S.: I did the following:
nohup istioctl dashboard kiali &
I opened the port at the SG level and at the OS level too... still not able to access the Kiali dashboard:
http://3.25.217.61:40235/kiali
[centos@ip-10-0-0-61 ~]$ wget http://3.25.217.61:40235/kiali
--2020-09-11 15:56:18-- http://3.25.217.61:40235/kiali
Connecting to 3.25.217.61:40235... failed: Connection refused.
curl ifconfig.co
3.25.217.61
sudo netstat -nap|grep 40235
tcp 0 0 127.0.0.1:40235 0.0.0.0:* LISTEN 29654/istioctl
tcp6 0 0 ::1:40235 :::* LISTEN 29654/istioctl
Truly, unable to understand what is going wrong...
Just run istioctl dashboard kiali.
istioctl will create a local proxy to the Kiali service; as your netstat output shows, it only listens on 127.0.0.1 of the machine it runs on, which is why a remote connection to port 40235 is refused. Open it in a browser on the bastion (or tunnel to it) and log in with the admin/admin credentials.
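If you want to reach that localhost-only proxy from your own workstation, one option (a sketch, assuming you can SSH to the bastion as centos and that the proxy is still on port 40235 as in your netstat output) is an SSH tunnel:
ssh -L 20001:127.0.0.1:40235 centos@3.25.217.61
# then browse to http://localhost:20001/kiali on your workstation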
To answer the second question:
Istio is running on your cluster and is configured with istioctl, which is installed on your bastion.
By labeling a namespace with istio-injection=enabled, the sidecar will be injected automatically. If necessary, you can disable the injection for a pod by annotating it like this:
spec:
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
      annotations:
        sidecar.istio.io/inject: "false"
Update
To access Kiali without the istioctl/kubectl proxy, you have three options. As you found correctly, it depends on the kiali service type:
ClusterIP (default)
To use the default, set up a route from the ingress gateway to the kiali service. This is done using a VirtualService and a DestinationRule (a sketch follows this list). You can then access Kiali at e.g. <ingress-gateway-loadbalancer-id>.amazonaws.com/kiali
NodePort
You can change the type to NodePort by setting the corresponding value at Istio installation time and access Kiali at <node-ip>:<node-port>/kiali
LoadBalancer
You can change the type to LoadBalancer by setting the corresponding value at Istio installation time. A second elastic load balancer will be created on AWS and the kiali service will get an external IP, like the ingressgateway service has. You can then access it at <kiali-loadbalancer-id>.amazonaws.com/kiali
I would recommend option 1. It's best practice for production and you don't have to dig too deep into the Istio installation config, which can be overwhelming in the beginning.
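A minimal sketch of option 1 (hypothetical resource names; it assumes the default istio-ingressgateway, the istio-system namespace and Kiali's default web root /kiali; a DestinationRule is only needed if you want to tune TLS or load balancing for the kiali host):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /kiali
    route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
After applying this, the dashboard should be reachable at http://<ingress-gateway-loadbalancer-id>.amazonaws.com/kiali.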
Check the port and the type of the kiali service with the following command:
kubectl get svc -n istio-system
If the type is NodePort, you can open <node-ip>:<node-port>; if the type is ClusterIP, you have to expose it, for example by port-forwarding.
Expose Kiali either via Kubernetes port forwarding or via a gateway. The following forwarding command exposes Kiali on localhost, port 20001:
kubectl -n istio-system port-forward svc/kiali 20001:20001 &
Then open localhost:20001 for the Kiali dashboard.
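If you need to reach the forwarded port from outside the bastion, kubectl can bind to all interfaces instead of localhost (a sketch; note this exposes the dashboard to anything your security group allows, so restrict access accordingly):
kubectl -n istio-system port-forward --address 0.0.0.0 svc/kiali 20001:20001 &
# the dashboard is then reachable at http://<bastion-ip>:20001/kiali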
If Kiali is exposed through an Ingress, use https://{domain or ingress ip}/kiali; you can find the ingress IP with:
kubectl get ingress kiali -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Or (on any kind of platform) port-forward to the service or pod:
oc port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') -n istio-system 20001

ALB Ingress Controller on AWS

I'm trying to set up an ALB Ingress Controller on AWS EKS, exactly as the following tutorial describes: ingress_controller_alb, but I cannot get an ingress address.
Indeed, if I run the following command: kubectl get ingress/2048-ingress -n 2048-game, after 10 minutes I still get no address. Any idea?
The problem may be the version of the ALB ingress controller you are using: you are running the old 1.0.0 release, while the current one is 1.1.3.
I advise you to take a look at this documentation: ingress-controller-alb.
1. Download sample ALB ingress controller manifest
wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/alb-ingress-controller.yaml
2. Configure the ALB ingress controller manifest
At minimum, edit the following variables (an example of how these flags appear in the manifest follows this list):
--cluster-name=devCluster: name of the cluster. AWS resources will be tagged with kubernetes.io/cluster/devCluster:owned
If ec2metadata is unavailable from the controller pod, edit the following variables:
--aws-vpc-id=vpc-xxxxxx: vpc ID of the cluster.
--aws-region=us-west-1: AWS region of the cluster.
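For reference, these flags live in the args of the controller container in alb-ingress-controller.yaml; a sketch with placeholder values (your cluster name, VPC ID and region will differ):
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=devCluster
        # only needed if ec2metadata is unavailable from the controller pod:
        - --aws-vpc-id=vpc-xxxxxx
        - --aws-region=us-west-1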
3. Deploy the RBAC roles manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/rbac-role.yaml
4. Deploy the ALB ingress controller manifest
kubectl apply -f alb-ingress-controller.yaml
5. Verify the deployment was successful and the controller started
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
You should see output similar to the following:
-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release: 1.0.0
Build: git-7bc1850b
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
Then you can deploy the sample application.
Execute the following commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-service.yaml
Deploy an Ingress resource for the 2048 game:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.3/docs/examples/2048/2048-ingress.yaml
After a few seconds, verify that the Ingress resource is enabled:
kubectl get ingress/2048-ingress -n 2048-game
I was struggling with the same issue, but finally got it working after following @MaggieO's steps above. A couple of things to consider:
Add public and private subnets to your EKS cluster. Make sure your public subnets are tagged with "kubernetes.io/role/elb":"1" (see the tagging example after this list). If creating a managed node group, only select private subnets for placement of your worker nodes.
Make sure the IAM role for your worker nodes has the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, and the custom policy defined here: https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.2/docs/examples/iam-policy.json.
Examine your ingress controller logs; they are helpful:
kubectl logs -n kube-system [name of your ingress controller]
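One way to add that subnet tag (a sketch, assuming the AWS CLI is configured; subnet-xxxxxxxx stands in for your public subnet IDs):
aws ec2 create-tags \
  --resources subnet-xxxxxxxx \
  --tags Key=kubernetes.io/role/elb,Value=1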
Thank you for your replies!
I think the problem is the cluster creation: it results in a cluster without EC2 instances. I create it with the command eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: eu-central-1
  version: "1.14"
vpc:
  id: vpc-50b17738
  subnets:
    private:
      eu-central-1a: { id: subnet-aee763c6 }
      eu-central-1b: { id: subnet-bc2ee6c6 }
      eu-central-1c: { id: subnet-24734d6e }
nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 5
    privateNetworking: true
I tried with node groups and with managed node groups, but I get the following timeout error:
...
[ℹ] nodegroup "ng-1-workers" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "ng-1-workers"
Error: timed out (after 25m0s) waiting for at least 2 nodes to join the cluster and become ready in "ng-1-workers"
If you succeed in creating the controller, you will find it running:
$ kubectl get po -n kube-system | grep alb
alb-ingress-controller-669b958f64-p69fw 1/1 Running 0 3m7s
and its logs:
$ kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o alb-ingress[a-zA-Z0-9-]+)
-------------------------------------------------------------------------------
AWS ALB Ingress controller
Release: v1.1.8
Build: git-ec387ad1
Repository: https://github.com/kubernetes-sigs/aws-alb-ingress-controller.git
-------------------------------------------------------------------------------
W0720 13:31:21.242868 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.

How to resolve Failed to start service controller: WARNING: no cloud provider provided

Background:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.108.245.210 <pending> 80:30742/TCP,443:31028/TCP 41m
$ kubectl cluster-info dump | grep LoadBalancer
14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
The k8s cluster is up and running fine.
$ ls /etc/kubernetes/manifests
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
ingress-nginx default-http-backend ClusterIP 10.100.2.163 <none> 80/TCP 21h
ingress-nginx ingress-nginx LoadBalancer 10.108.221.18 <pending> 80:32010/TCP,443:31271/TCP 18h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 21h
How do I link the cloud provider to the Kubernetes cluster in the existing setup?
I would expect grep -r cloud-provider= /etc/kubernetes/manifests to either show you where the flag is being explicitly set to --cloud-provider= (that is, the empty value), or let you know that there is no such flag, in which case you'll need(?) to add it in three places:
kube-apiserver.yaml
kube-controller-manager.yaml
in kubelet.service, or however you are currently running the kubelet
I said "need(?)" because I thought I read once upon a time that the Kubernetes components were good enough at auto-detecting their cloud environment, and thus those flags were only required if you needed to improve or alter the default behavior. However, I just checked the v1.13 page and there doesn't seem to be anything "optional" about it. They've even gone so far as to now make --cloud-config= seemingly mandatory, too.

skydns not able to find nginxsvc

I am following the example here: http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables. The DNS add-on seems to be enabled:
skwok-wpc-3:1.0 skwok$ kubectl get services kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
and the service is up
$ kubectl get svc
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
nginxsvc app=nginx app=nginx 10.0.128.194 80/TCP
Following the example, I cannot use the curlpod to look up the service:
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.us-west-2.compute.internal
nslookup: can't resolve 'nginxsvc'
Did I miss anything? I am using AWS, and I start my cluster with export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash. Thank you.
Please see http://kubernetes.io/v1.0/docs/user-guide/debugging-services.html and make sure nginx is running and serving from within your pod. I would also suggest something like:
$ kubectl get ep nginxsvc
$ kubectl exec -it curlpod /bin/sh
pod$ curl ip-from-kubectl-get-ep
pod$ traceroute ip-from-kubectl-get-ep
If that doesn't work, please reply or jump on the Kubernetes Slack channel.
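A couple of extra DNS checks from inside the pod can also help narrow things down (a sketch; the FQDN assumes nginxsvc is in the default namespace and the default cluster.local domain):
$ kubectl exec curlpod -- cat /etc/resolv.conf
$ kubectl exec curlpod -- nslookup kubernetes.default
$ kubectl exec curlpod -- nslookup nginxsvc.default.svc.cluster.local
If the FQDN resolves but the short name does not, the search path in resolv.conf is the issue; if neither resolves, check the kube-dns pods themselves.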