How to deploy older ingress-nginx-controller or specify version with minikube? - kubectl

I am trying to deploy a specific version of the ingress controller with minikube and Kubernetes v1.13, but from what I see it is only possible to have the latest version of ingress-nginx-controller deployed.
I expect the ingress-nginx-controller-#####-#### pod to come back online and run with the nginx-ingress image version I point to in the deployment's details.
After editing the ingress-nginx-controller deployment via kubectl edit and changing the image quay.io/kubernetes-ingress-controller/nginx-ingress-controller from tag 0.32.0 to 0.24.1, the pod restarts and goes into a CrashLoopBackOff state.
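For reference, the same change can be made without opening an editor; this is just a sketch, and the container name controller is an assumption, since it is not shown in the question:
kubectl -n kube-system set image deployment/ingress-nginx-controller \
  controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1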
Running kubectl describe on the pod, it seems to be complaining about the node not having free ports:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m8s (x2 over 5m8s) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
Normal Scheduled 4m54s default-scheduler Successfully assigned kube-system/ingress-nginx-controller-6c4b64d58c-s5ddz to minikube
After searching for similar cases, I tried the following.
I check with ss but see no port 80 or 443 in use on the host:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 32 192.168.122.1:53 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 5 127.0.0.1:631 0.0.0.0:*
LISTEN 0 5 [::1]:631 [::]:*
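(Output like the above comes from listing listening sockets, for example with the command below; the exact flags I used are not important here.)
$ ss -tlnp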
No pods seem to be in Terminating status:
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-7s55r 1/1 Running 1 3h14m
coredns-86c58d9df4-rtssn 1/1 Running 1 3h14m
etcd-minikube 1/1 Running 1 3h13m
ingress-nginx-admission-create-gpfml 0/1 Completed 0 47m
ingress-nginx-admission-patch-z96hd 0/1 Completed 0 47m
ingress-nginx-controller-6c4b64d58c-s5ddz 0/1 CrashLoopBackOff 9 24m
kube-apiserver-minikube 1/1 Running 0 145m
kube-controller-manager-minikube 1/1 Running 0 145m
kube-proxy-pmwxr 1/1 Running 0 144m
kube-scheduler-minikube 1/1 Running 0 145m
storage-provisioner 1/1 Running 2 3h14m
I did not create any YAML file or custom deployment; I just installed minikube and enabled the ingress addon.
How can I use a different nginx-ingress-controller version?

The nginx-ingress-controller version is tied to the minikube version.
First I tried previous minikube releases. Unfortunately, Minikube v1.3 uses nginx-ingress-controller 0.25.0 and Minikube v1.2 uses 0.23.0.
So the only way I found to run nginx-ingress-controller 0.24.1 in Minikube was building the binary myself from minikube v1.4. Here is the step-by-step:
Download the minikube 1.4 repository and extract it:
$ wget https://github.com/kubernetes/minikube/archive/v1.4.0.tar.gz
$ tar -xvzf v1.4.0.tar.gz
Then, cd into the newly created minikube-1.4.0 folder and edit the file deploy/addons/ingress/ingress-dp.yaml.tmpl changing the image version to 0.24.1 as below:
spec:
  serviceAccountName: ingress-nginx
  containers:
  - name: controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
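If you prefer not to edit the template by hand, a one-liner along these lines should do the same thing (a sketch; it assumes the image tag on that line is the only thing that needs to change):
$ sed -i -E 's|(nginx-ingress-controller):[0-9.]+|\1:0.24.1|' deploy/addons/ingress/ingress-dp.yaml.tmpl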
In order to build, you have to download a Go distribution from the official site: https://golang.org/dl/
then follow the steps in https://golang.org/doc/install to install it. If you are running 64-bit Linux, you can use the commands below:
$ wget https://dl.google.com/go/go1.14.4.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.14.4.linux-amd64.tar.gz
$ export PATH=$PATH:/usr/local/go/bin
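You can confirm the toolchain is on the PATH with:
$ go version
go version go1.14.4 linux/amd64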
Then from the Minikube 1.4.0 folder, run make:
/minikube-1.4.0$ ls
CHANGELOG.md CONTRIBUTING.md go.mod images Makefile OWNERS SECURITY_CONTACTS test.sh
cmd deploy go.sum installers netlify.toml pkg site third_party
code-of-conduct.md docs hack LICENSE README.md test translations
/minikube-1.4.0$ make
It may take a few minutes to download all dependencies. Then let's copy the freshly built binary to /usr/local/bin and deploy minikube:
/minikube-1.4.0$ cd out/
/minikube-1.4.0/out$ ls
minikube minikube-linux-amd64
$ sudo cp minikube-linux-amd64 /usr/local/bin/minikube
$ minikube version
minikube version: v1.4.0
$ minikube start --vm-driver=kvm2 --kubernetes-version 1.13.12
NOTE: if you get an error about kvm2 driver when starting minikube, run the following command:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 && sudo install docker-machine-driver-kvm2 /usr/local/bin/
This version comes with ingress enabled by default; let's check the deployment status:
$ minikube addons list | grep ingress
- ingress: enabled
$ kubectl describe deploy nginx-ingress-controller -n kube-system | grep Image:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-54ff9cd656-d95w5 1/1 Running 0 2m14s
coredns-54ff9cd656-tnvnw 1/1 Running 0 2m14s
etcd-minikube 1/1 Running 0 78s
kube-addon-manager-minikube 1/1 Running 0 71s
kube-apiserver-minikube 1/1 Running 0 71s
kube-controller-manager-minikube 1/1 Running 0 78s
kube-proxy-wj2d6 1/1 Running 0 2m14s
kube-scheduler-minikube 1/1 Running 0 87s
nginx-ingress-controller-f98c6df-5h2l7 1/1 Running 0 2m9s
storage-provisioner 1/1 Running 0 2m8s
As you can see, the pod nginx-ingress-controller-f98c6df-5h2l7 is in running state.
If you have any questions, let me know in the comments.

Related

Istio 1.4.7 - Kiali pod fails to start

After installing Istio 1.4.7, the Kiali pod is not coming up cleanly. It's failing with the error: signing key for login tokens is invalid
kubectl get po -n istio-system | grep kiali
NAME READY STATUS RESTARTS AGE
kiali-7ff568c949-v2qmq 0/1 CrashLoopBackOff 56 4h22m
kubectl describe po kiali-7ff568c949-v2qmq -n istio-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 29s default-scheduler Successfully assigned istio-system/kiali-774d68d9c7-4trpd to ip-10-75-64-5.eu-west-2.compute.internal
Normal Pulling 28s kubelet, ip-10-75-64-5.eu-west-2.compute.internal Pulling image "quay.io/kiali/kiali:v1.15.2"
Normal Pulled 27s kubelet, ip-10-75-64-5.eu-west-2.compute.internal Successfully pulled image "quay.io/kiali/kiali:v1.15.2"
Normal Created 12s (x3 over 27s) kubelet, ip-10-75-64-5.eu-west-2.compute.internal Created container kiali
Normal Pulled 12s (x2 over 26s) kubelet, ip-10-75-64-5.eu-west-2.compute.internal Container image "quay.io/kiali/kiali:v1.15.2" already present on machine
Normal Started 11s (x3 over 26s) kubelet, ip-10-75-64-5.eu-west-2.compute.internal Started container kiali
Warning BackOff 5s (x5 over 25s) kubelet, ip-10-75-64-5.eu-west-2.compute.internal Back-off restarting failed container
kubectl logs -n istio-system kiali-7ff568c949-v2qmq
I0429 21:23:11.024691 1 kiali.go:66] Kiali: Version: v1.15.2, Commit: 718aedca76e612e2f95498d022fab1e116613792
I0429 21:23:11.025039 1 kiali.go:205] Using authentication strategy [login]
F0429 21:23:11.025057 1 kiali.go:83] signing key for login tokens is invalid
As @Joel mentioned in the comments:
see this issue and in particular this comment
and as mentioned here:
Istio 1.4.7 release does not contain ISTIO-SECURITY-2020-004 fix
The release notes for Istio 1.4.7 state that the security vulnerability relating to Kiali has been fixed; however, the commit to fix this is not present in the release.
As far as I understand from this comment, if you use istioctl it should work.
The istioctl installer was fixed.
but
If you installed with the old helm charts, then it wasn't fixed there. I thought the helm charts were deprecated. In any event, add these two lines to the kiali configmap template in the helm chart:
login_token:
  signing_key: {{ randAlphaNum 10 | quote }}
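For context, a minimal sketch of where those two lines sit in the Kiali ConfigMap template (the metadata and surrounding keys here are assumptions based on a default Kiali install, not copied from the chart):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kiali
  namespace: istio-system
data:
  config.yaml: |
    # ... other Kiali settings ...
    login_token:
      signing_key: {{ randAlphaNum 10 | quote }}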
If that doesn't work, I would recommend upgrading to Istio 1.5.1, which should fix the issue.

get k8s pods from a node with regex pattern match in namespace name

Team,
I am able to fetch all pods running on a node along with their namespaces, but my namespaces are generated dynamically and their names vary at the end. Is there a way I can include a regex/pattern in the kubectl command to pull all pods from all matching namespaces?
kubectl get pods -n team-1-user1 --field-selector=spec.nodeName=node1,status.phase=Running
Actual output 1 (works):
NAMESPACE NAME READY STATUS RESTARTS AGE
team-1-user1 calico-node-9j5k2 1/1 Running 2 104d
team-1-user1 kube-proxy-ht7ch 1/1 Running 2 130d
I want something like the below, pulling pods from all namespaces starting with "team-":
kubectl get pods -n team-* --field-selector=spec.nodeName=node1,status.phase=Running
Actual output 2 (fails):
No resources found in team-workflow-2134-asf-324-d.yaml namespace.
Expected output (what I want):
NAMESPACE NAME READY STATUS RESTARTS AGE
team-1-user1 calico-node-9j5k2 1/1 Running 2 104d
team-1-user1 kube-proxy-ht7ch 1/1 Running 2 130d
team-2-user1 calico-node-9j5k2 1/1 Running 2 1d
team-2-user1 kube-proxy-ht7ch 1/1 Running 2 10d
You can pipe the output of kubectl get pods into awk and match the namespace (the first column) against a regex:
kubectl get pods --all-namespaces --no-headers | awk '{if ($1 ~ "team-") print $0}'
Here's sample output of the same approach, searching for pods in namespaces starting with kube-:
❯❯❯ kubectl get pods --all-namespaces --no-headers | awk '{if ($1 ~ "kube-") print $0}'
kube-system coredns-6955765f44-27wxs 1/1 Running 0 107s
kube-system coredns-6955765f44-ztgq8 1/1 Running 0 106s
kube-system etcd-minikube 1/1 Running 0 109s
kube-system kube-addon-manager-minikube 1/1 Running 0 108s
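If you also need the node and phase filters from the question, you can combine the field selector with the awk match (a sketch; node1 is the node name used in the question):
kubectl get pods --all-namespaces --no-headers \
  --field-selector=spec.nodeName=node1,status.phase=Running \
  | awk '$1 ~ "^team-"'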

Run Elasticsearch on AWS EC2 with Docker

I'm trying to run Elasticsearch with Docker on an AWS EC2 instance, but it stops a few seconds after it starts. Do any of you have experience with what the problem could be?
This is my Elasticsearch config in the docker-compose.yaml:
elasticsearch:
  build:
    context: ./elasticsearch
    args:
      - ELK_VERSION=${ELK_VERSION}
  volumes:
    - elasticsearch:/usr/share/elasticsearch/data
  environment:
    - cluster.name=laradock-cluster
    - node.name=laradock-node
    - bootstrap.memory_lock=true
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms7g -Xmx7g"
    - xpack.security.enabled=false
    - xpack.monitoring.enabled=false
    - xpack.watcher.enabled=false
    - cluster.initial_master_nodes=laradock-node
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  ports:
    - "${ELASTICSEARCH_HOST_HTTP_PORT}:9200"
    - "${ELASTICSEARCH_HOST_TRANSPORT_PORT}:9300"
  depends_on:
    - php-fpm
  networks:
    - frontend
    - backend
And this is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.1
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
EXPOSE 9200 9300
Also, I did sysctl -w vm.max_map_count=655360 on my AWS EC2 instance
Notice: my AWS EC2 instance is Ubuntu 18.04
Thanks
I am not sure about your docker-compose.yaml, as you are not referring to it in your Dockerfile, but I was able to reproduce the issue. I launched the same Ubuntu 18.04 in my AWS account and used your Dockerfile to launch an ES docker container using the commands below:
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
And my docker container was also stopping just after starting up as shown below:
ubuntu@ip-172-31-32-95:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
03cde4a19389 elasticsearch-custom "/usr/local/bin/dock…" 33 seconds ago Exited (78) 6 seconds ago mystifying_napier
When I checked the logs on the console while starting the container, I found the below error:
ERROR: [1] bootstrap checks failed [1]: the default discovery settings
are unsuitable for production use; at least one of
[discovery.seed_hosts, discovery.seed_providers,
cluster.initial_master_nodes] must be configured
This is a very well-known error and can be easily resolved just by adding -e "discovery.type=single-node" to the docker run command. After adding this to the docker run command as below:
docker run -e "discovery.type=single-node" -ti -v /usr/share/elasticsearch/data elasticsearch-custom
it's running fine, as shown below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
191fc3dceb5a elasticsearch-custom "/usr/local/bin/dock…" 8 minutes ago Up 8 minutes 9200/tcp, 9300/tcp recursing_elgamal
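If you also want to reach Elasticsearch from the EC2 host, a variation that publishes the HTTP port and checks cluster health could look like this (a sketch; the port mapping and the curl check are assumptions, not part of the original commands):
docker run -d -p 9200:9200 -e "discovery.type=single-node" -v /usr/share/elasticsearch/data elasticsearch-custom
curl -s "http://localhost:9200/_cluster/health?pretty"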

Gunicorn issues on gcloud. Memory faults and restarts thread

I am deploying a django application to gcloud using gunicorn without nginx.
Running the container locally works fine, the application boots and does a memory consuming job on startup in its own thread (building a cache). Approx. 900 MB of memory is used after the job is finished.
Gunicorn is started with:
CMD gunicorn -b 0.0.0.0:8080 app.wsgi:application -k eventlet --workers=1 --threads=4 --timeout 1200 --log-file /gunicorn.log --log-level debug --capture-output --worker-tmp-dir /dev/shm
Now I want to deploy this to gcloud, creating the container with the following manifest:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  namespace: default
spec:
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - image: gcr.io/app-numbers/app:latest
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"
          requests:
            memory: "2Gi"
        name: app
        ports:
        - containerPort: 8080
          protocol: TCP
Giving the container 2 GB of memory.
Looking at the logs, gunicorn is booting workers:
[2019-09-01 11:37:48 +0200] [17] [INFO] Booting worker with pid: 17
Using free -m in the container shows the memory slowly being consumed and dmesg shows:
[497886.626932] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[497886.636597] [1452813] 0 1452813 256 1 4 2 0 -998 pause
[497886.646332] [1452977] 0 1452977 597 175 5 3 0 447 sh
[497886.656064] [1452989] 0 1452989 10195 7426 23 4 0 447 gunicorn
[497886.666376] [1453133] 0 1453133 597 360 5 3 0 447 sh
[497886.676959] [1458304] 0 1458304 543235 520309 1034 6 0 447 gunicorn
[497886.686727] Memory cgroup out of memory: Kill process 1458304 (gunicorn) score 1441 or sacrifice child
[497886.697411] Killed process 1458304 (gunicorn) total-vm:2172940kB, anon-rss:2075432kB, file-rss:5804kB, shmem-rss:0kB
[497886.858875] oom_reaper: reaped process 1458304 (gunicorn), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
What could be causing a memory leak on gcloud but not locally?

Minikube with Virtualbox or KVM using lots of CPU on Centos 7

I've installed minikube as per the kubernetes instructions.
After starting it, and waiting a while, I noticed that it is using a lot of CPU, even though I have nothing particular running in it.
top shows this:
%Cpu(s): 0.3 us, 7.1 sy, 0.5 ni, 92.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32521856 total, 2259992 free, 9882020 used, 20379844 buff/cache
KiB Swap: 2097144 total, 616108 free, 1481036 used. 20583844 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4847 root 20 0 3741112 91216 37492 S 52.5 0.3 9:57.15 VBoxHeadless
lscpu shows this:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 21
Model: 2
Model name: AMD Opteron(tm) Processor 3365
I see the same effect if I use KVM instead of VirtualBox
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
I installed metrics-server and it outputs this:
kubectl top node minikube
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
minikube 334m 16% 1378Mi 76%
kubectl top pods --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default hello-minikube-56cdb79778-rkdc2 0m 3Mi
kafka-data-consistency zookeeper-84fb4cd6f6-sg7rf 1m 36Mi
kube-system coredns-fb8b8dccf-2nrl4 4m 15Mi
kube-system coredns-fb8b8dccf-g6llp 4m 8Mi
kube-system etcd-minikube 38m 41Mi
kube-system kube-addon-manager-minikube 31m 6Mi
kube-system kube-apiserver-minikube 59m 186Mi
kube-system kube-controller-manager-minikube 22m 41Mi
kube-system kube-proxy-m2fdb 2m 17Mi
kube-system kube-scheduler-minikube 2m 11Mi
kube-system kubernetes-dashboard-79dd6bfc48-7l887 1m 25Mi
kube-system metrics-server-cfb4b47f6-q64fb 2m 13Mi
kube-system storage-provisioner 0m 23Mi
Questions:
1) Is it possible to find out why it is using so much CPU? (Note that I am generating no load, and none of my containers are processing any data.)
2) Is that normal?
Are you sure nothing is running? What happens if you type kubectl get pods --all-namespaces? By default kubectl only displays the pods in the default namespace (thus excluding the pods in the system namespaces).
Also, while I am no CPU expert, this seems like a reasonable consumption for the hardware you have.
In response to question 1):
You can ssh into minikube and from there you can run top to see the processes which are running:
minikube ssh
top
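A non-interactive variant of the same check (a sketch; it assumes the top inside the minikube VM supports batch mode, i.e. the -b and -n flags):
minikube ssh -- top -b -n 1 | head -n 20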
There is a lot of docker and kubelet stuff running:
top - 21:43:10 up 8:27, 1 user, load average: 10.98, 12.00, 11.46
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
%Cpu0 : 15.7/15.7 31[|||||||||||||||||||||||||||||||| ]
%Cpu1 : 6.0/10.0 16[|||||||||||||||| ]
GiB Mem : 92.2/1.9 [ ]
GiB Swap: 0.0/0.0 [ ]
11842 docker 20 0 24.5m 3.1m 0.7 0.2 0:00.71 R `- top
1948 root 20 0 480.2m 77.0m 8.6 4.1 27:45.44 S `- /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca+
...
3176 root 20 0 10.1g 48.4m 2.0 2.6 17:45.61 S `- etcd --advertise-client-urls=https://192.168.39.197:2379 --cert-file=/var/lib/minikube/certs/etc+
The two processes with the most accumulated processor time (dockerd and etcd, at 27 and 17 minutes in the TIME+ column) are the culprits.
In response to question 2): No idea, but it could be. See the answer from @alassane-ndiaye.