My service is up and running on the Docker platform. When I list the Docker images on my instance, I see the following images, including several unnamed ones.
> docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
current_nginx latest a3e8b5ae7751 48 minutes ago 23.5MB
<none> <none> 320523ba019f 48 minutes ago 1.52GB
current_nuxt2 latest 5b744629e956 48 minutes ago 1.52GB
<none> <none> ff41ec719b95 48 minutes ago 1.52GB
<none> <none> 5ed2e390be4e 50 minutes ago 1.5GB
<none> <none> a2c4118d3f34 52 minutes ago 643MB
<none> <none> 0ee04fa44ba2 54 minutes ago 23.5MB
<none> <none> 266e831b549f About an hour ago 406MB
<none> <none> 7e3618333973 About an hour ago 406MB
<none> <none> f071bfd32b76 About an hour ago 406MB
node 16-alpine 610c0494e820 7 days ago 118MB
nginx 1.23.2-alpine 19dd4d73108a 5 weeks ago 23.5MB
Can someone please explain why I got these unnamed images? I only run 2 containers.
I have also already tried to remove one of the unnamed images, and I got this error:
> docker rmi 320523ba019f
Error response from daemon: conflict: unable to delete 320523ba019f (cannot be forced) - image has dependent child images
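The <none> entries are most likely dangling or intermediate images left over from rebuilds and multi-stage builds; an image that is still a parent of another image cannot be deleted directly, which is what the "dependent child images" error indicates. As a sketch, these commands are commonly used to inspect and clean such images up (whether a particular one can be pruned depends on what still references it):
# show which layers a tagged image is built from
docker history current_nuxt2:latest
# remove dangling images that nothing references (add -a to also drop unused tagged images)
docker image prune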
Related
I'm trying to deploy a simple REST API written in Golang to AWS EKS.
I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.
All the resources in the cluster look like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/aws-load-balancer-controller-5947f7c854-fgwk2 1/1 Running 0 75m
kube-system pod/aws-load-balancer-controller-5947f7c854-gkttb 1/1 Running 0 75m
kube-system pod/aws-node-dfc7r 1/1 Running 0 120m
kube-system pod/aws-node-hpn4z 1/1 Running 0 120m
kube-system pod/aws-node-s6mng 1/1 Running 0 120m
kube-system pod/coredns-66cb55d4f4-5l7vm 1/1 Running 0 127m
kube-system pod/coredns-66cb55d4f4-frk6p 1/1 Running 0 127m
kube-system pod/kube-proxy-6ndf5 1/1 Running 0 120m
kube-system pod/kube-proxy-s95qk 1/1 Running 0 120m
kube-system pod/kube-proxy-vdrdd 1/1 Running 0 120m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 127m
kube-system service/aws-load-balancer-webhook-service ClusterIP 10.100.202.90 <none> 443/TCP 75m
kube-system service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 127m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 127m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 127m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/aws-load-balancer-controller 2/2 2 2 75m
kube-system deployment.apps/coredns 2/2 2 2 127m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/aws-load-balancer-controller-5947f7c854 2 2 2 75m
kube-system replicaset.apps/coredns-66cb55d4f4 2 2 2 127m
I can run the application locally with Go and with Docker, but deploying it on AWS EKS always results in CrashLoopBackOff.
Running kubectl describe pod PODNAME shows:
Name: go-api-55d74b9546-dkk9g
Namespace: default
Priority: 0
Node: ip-172-16-1-191.ec2.internal/172.16.1.191
Start Time: Tue, 15 Mar 2022 07:04:08 -0700
Labels: app=go-api
pod-template-hash=55d74b9546
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.16.1.195
IPs:
IP: 172.16.1.195
Controlled By: ReplicaSet/go-api-55d74b9546
Containers:
go-api:
Container ID: docker://a4bc07b60c85fd308157d967d2d0d688d8eeccfe4c829102eb929ca82fb25595
Image: saurabhmish/golang-hello:latest
Image ID: docker-pullable://saurabhmish/golang-hello#sha256:f79a495ad17710b569136f611ae3c8191173400e2cbb9cfe416e75e2af6f7874
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Mar 2022 07:09:50 -0700
Finished: Tue, 15 Mar 2022 07:09:50 -0700
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jt4gp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-jt4gp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m31s default-scheduler Successfully assigned default/go-api-55d74b9546-dkk9g to ip-172-16-1-191.ec2.internal
Normal Pulled 7m17s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 12.77458991s
Normal Pulled 7m16s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 110.127771ms
Normal Pulled 7m3s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 109.617419ms
Normal Created 6m37s (x4 over 7m17s) kubelet Created container go-api
Normal Started 6m37s (x4 over 7m17s) kubelet Started container go-api
Normal Pulled 6m37s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 218.952336ms
Normal Pulling 5m56s (x5 over 7m30s) kubelet Pulling image "saurabhmish/golang-hello:latest"
Normal Pulled 5m56s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 108.105083ms
Warning BackOff 2m28s (x24 over 7m15s) kubelet Back-off restarting failed container
Running kubectl logs PODNAME and kubectl logs PODNAME -c go-api shows standard_init_linux.go:228: exec user process caused: exec format error
Manifests:
go-deploy.yaml (this references the Docker Hub image, which has its own documentation):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-api
  labels:
    app: go-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-api
  strategy: {}
  template:
    metadata:
      labels:
        app: go-api
    spec:
      containers:
        - name: go-api
          image: saurabhmish/golang-hello:latest
          ports:
            - containerPort: 3000
          resources: {}
go-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: go-api
spec:
  selector:
    app: go-api
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
How can I fix this error?
Thanks to David Maze, who pointed to the solution. The article 'Build Intel64-compatible Docker images from Mac M1 (ARM)' by Beppe Catanese describes the underlying problem well:
You are developing and building on the ARM architecture (Mac M1), but you deploy the Docker image to an x86-64 based Kubernetes cluster, so the ARM binary inside the image cannot be executed on the nodes and fails with 'exec format error'.
Solution:
Option A: use buildx
Buildx is a Docker plugin that, among other features, allows building images for various target platforms:
$ docker buildx build --platform linux/amd64 -t myapp .
Option B: set DOCKER_DEFAULT_PLATFORM
The DOCKER_DEFAULT_PLATFORM environment variable sets the default platform for commands that take the --platform flag:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
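A minimal sketch of the full workflow with either option, assuming the image and deployment names from the question; checking the architecture after the build is a quick sanity check:
# build for amd64 and push to Docker Hub (use --load instead of --push to keep the image locally)
docker buildx build --platform linux/amd64 -t saurabhmish/golang-hello:latest --push .
# sanity-check the architecture of a locally available image
docker image inspect saurabhmish/golang-hello:latest --format '{{.Os}}/{{.Architecture}}'
# make the nodes pull the rebuilt image
kubectl rollout restart deployment/go-api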
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
The error may also come from the application itself, for example if it cannot connect to a database, Redis, and so on.
You may find something useful here:
My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log
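When that is the case, the crash reason usually shows up in the logs of the previous (crashed) container instance; a quick way to check, using the pod name from the question:
# logs of the instance that just crashed, not the one currently restarting
kubectl logs go-api-55d74b9546-dkk9g --previous
# events and last state, including the exit code
kubectl describe pod go-api-55d74b9546-dkk9g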
I have launched WordPress by following the documentation at https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/, and I can see that MySQL is running as a pod. However, my requirement is to connect the running MySQL pod to AWS RDS so that I can dump my existing data into it. Please guide me.
pod/wordpress-5f444c8849-2rsfd 1/1 Running 0 27m
pod/wordpress-mysql-ccc857f6c-7hj9m 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 29m
service/wordpress LoadBalancer 10.100.148.152 a4a868cfc752f41fdb4397e3133c7001-1148081355.us-east-1.elb.amazonaws.com 80:32116/TCP 27m
service/wordpress-mysql ClusterIP None <none> 3306/TCP 27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 27m
deployment.apps/wordpress-mysql 1/1 1 1 27m
NAME DESIRED CURRENT READY AGE
replicaset.apps/wordpress-5f444c8849 1 1 1 27m
replicaset.apps/wordpress-mysql-ccc857f6c 1 1 1 27m
Once you have MySQL running on K8s as a ClusterIP service, it is accessible only inside the cluster via its own service address:
wordpress-mysql:3306
To double-check your database from outside the cluster, you can recreate the service as a NodePort; you would then be able to connect with a SQL administration tool such as MySQL Workbench and administer it from there.
here is an example: https://www.youtube.com/watch?v=s0uIvplOqJM
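A minimal sketch of the NodePort variant described above, assuming the deployment name and port from the tutorial; the name of the extra service is hypothetical:
# expose the existing MySQL deployment on a node port for external inspection
kubectl expose deployment wordpress-mysql --name=wordpress-mysql-external --type=NodePort --port=3306 --target-port=3306
# note the assigned node port (the 3306:3xxxx/TCP mapping)
kubectl get svc wordpress-mysql-external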
I have created a simple nginx deployment on an Ubuntu EC2 instance and exposed it on a port through a Service in the Kubernetes cluster, but I am unable to ping the pods even in the local environment. My pods are running fine and the Service was also created successfully. I am sharing the output of some commands below.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-39-226 Ready <none> 2d19h v1.16.1
master-node Ready master 2d20h v1.16.1
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-54f57cf6bf-dqt5v 1/1 Running 0 101m 192.168.39.17 ip-172-31-39-226 <none> <none>
nginx-deployment-54f57cf6bf-gh4fz 1/1 Running 0 101m 192.168.39.16 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-2rcvt 1/1 Running 0 20m 192.168.39.18 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-tjh82 1/1 Running 0 20m 192.168.39.19 ip-172-31-39-226 <none> <none>
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
nginx-deployment NodePort 10.101.133.21 <none> 80:31165/TCP 50m
sample-nginx LoadBalancer 10.100.77.31 <pending> 80:31854/TCP 19m
kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 14 Oct 2019 06:28:13 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector: app=nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-54f57cf6bf (2/2 replicas created)
Events: <none>
Now I am unable to ping 192.168.39.17/16/18/19 from the master, and I am also not able to curl 172.31.39.226:31165 or :31854 from the master. Any help will be highly appreciated.
From the information you have provided, and from the discussion we had, the worker node has the nginx pods running, and you have attached a NodePort service and a LoadBalancer service to them.
The only thing missing here is the machine from which you are trying to access this.
So I tried to reach the URL 52.201.242.84:31165. I think all you need to do is whitelist this port (or your source IP) for public access, which can be done via the security group of the worker node's EC2 instance.
The URL above is constructed from the public IP of the worker node plus the NodePort of the attached service. So here is a simple formula you can use to get the exact address of the running pod:
Pod access URL = public IP of the worker node + the NodePort
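A sketch of opening the NodePort in the worker node's security group with the AWS CLI; the security group ID below is a placeholder, and the CIDR should be narrowed to your own IP where possible:
# placeholder security group ID for the worker node
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31165 --cidr 0.0.0.0/0
# then, from outside the cluster
curl http://52.201.242.84:31165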
Following this document step by step:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true
I created the EKS cluster using the AWS CLI instead of the UI, and got the following output:
proxy-kube$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 18h
But when I follow the same getting-started guide to associate worker nodes with the cluster, I get:
proxy-kube$ kubectl get nodes
No resources found.
I can see 3 EC2 instances created and running in the AWS console (UI), but I am unable to deploy and run even the Guestbook application. When I deploy the application, I get the following:
~$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
guestbook LoadBalancer 10.100.46.244 a08e89122c10311e88fdd0e3fbea8df8-1146802048.us-east-1.elb.amazonaws.com 3000:32758/TCP 17s app=guestbook
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 21h <none>
redis-master ClusterIP 10.100.208.141 <none> 6379/TCP 1m app=redis,role=master
redis-slave ClusterIP 10.100.226.147 <none>
But if I try to access the EXTERNAL-IP, the browser shows that the server is not reachable.
I also tried to open the Kubernetes Dashboard, but it failed to show anything on 127.0.0.1:8001.
Does anyone know what might be going wrong? Any help on this is appreciated.
Thanks
It looks like your kubelet (your node) is not registering with the master. If you don't have any nodes, you basically can't run anything.
You can SSH into one of the nodes and check the kubelet logs with something like this:
journalctl -xeu kubelet
Also, it would help to post the output of kubectl describe deployment <deployment-name> and kubectl get pods
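A common cause on EKS when the worker instances are running but never register is that the aws-auth ConfigMap does not map the nodes' instance role; a sketch of how to inspect it (the role ARN shown is only a placeholder):
kubectl -n kube-system get configmap aws-auth -o yaml
# mapRoles should contain the worker nodes' instance role, roughly:
#   - rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>   # placeholder
#     username: system:node:{{EC2PrivateDNSName}}
#     groups:
#       - system:bootstrappers
#       - system:nodes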
I have set up an SAP Vora 2.1 installation on AWS using kops. It is a 4-node cluster with 1 master and 3 worker nodes. The persistent volume requirement for vsystem-vrep is provided using AWS EFS, and for the other stateful components using AWS EBS. The installation goes through fine and runs for a few days, but after 3-4 days the following 5 Vora pods start showing issues:
vora-catalog
vora-relational
vora-timeseries
vora-tx-coordinator
vora-disk
Each of these pods has 2 containers, and both should be up and running. However, after 3-4 days one of the containers goes down on its own, although the Kubernetes cluster is still up and running. I have tried various ways to bring these pods back up with all of their required containers, but they do not come up.
I have captured the events for vora-disk as a sample, but all of the pods show the same trace:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 7m 21 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Warning Unhealthy Liveness probe failed: dial tcp 100.96.7.21:10002: getsockopt: connection refused
1h 2m 11 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Normal Killing Killing container with id docker://disk:pod "vora-disk-0_vora(2f5ea6df-545b-11e8-90fd-029979a0ef92)" container "disk" is unhealthy, it will be killed and re-created.
1h 58s 51 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal Warning FailedSync Error syncing pod
1h 58s 41 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Warning BackOff Back-off restarting failed container
1h 46s 11 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Normal Started Started container
1h 46s 11 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Normal Pulled Container image "ip-172-31-13-236.ap-southeast-2.compute.internal:5000/vora/dqp:2.1.32.19-vora-2.1" already present on machine
1h 46s 11 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Normal Created Created container
1h 1s 988 kubelet, ip-172-31-64-23.ap-southeast-2.compute.internal spec.containers{disk} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 503
I would appreciate any pointers to resolve this issue.
Thanks, Frank, for your suggestion and pointer. It has definitely helped to overcome a few issues, but not all.
We have specifically observed Vora services going down for no apparent reason. While we understand there may be some reason why Vora goes down, the recovery procedure is not available in the admin guide or anywhere else on the internet. We have seen the Vora services created by the vora-operator go down (each of these pods contains one security container and one service-specific container; the service-specific container goes down and does not come up again). We tried various options, such as restarting all Vora pods or only the pods related to the Vora deployment operator, but these pods do not come up. In such cases we re-deploy Vora, but that essentially means all previous work is lost. Is there any command or procedure that brings the Vora pods back up with all of their containers?
This issue is described in SAP Note 2631736 (Liveness and Readiness issue in Vora 2.x); the suggestion there is to increase the health-check interval.
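For reference, relaxing probe timings on a Kubernetes container spec generally looks like the fragment below; the values are illustrative only, and the exact fields and numbers for the Vora pods should be taken from the SAP Note:
livenessProbe:
  tcpSocket:
    port: 10002          # the port the events above show failing (dial tcp ...:10002)
  initialDelaySeconds: 60
  periodSeconds: 60      # probe less often
  timeoutSeconds: 30     # allow slower responses
  failureThreshold: 5    # tolerate more consecutive failures before a restart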