Kubernetes manifest apiserver, no forwarding? - amazon-web-services

I am working on building a Kubernetes cluster on AWS using Terraform, by reverse-engineering the kube-aws script here:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
However, once the cluster is created, the kube-apiserver pod does not forward port 443 to the host, so the API cannot be reached (it does forward 8080 to 127.0.0.1).
The manifest in question:
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:${K8S_VER}
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd_servers=${ETCD_ENDPOINTS}
    - --allow-privileged=true
    - --service-cluster-ip-range=${SERVICE_IP_RANGE}
    - --secure_port=443
    - --advertise-address=${ADVERTISE_IP}
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --cloud-provider=aws
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
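Worth noting: with hostNetwork: true the apiserver process binds host ports directly, so the hostPort entries above do not add any forwarding of their own. A quick sanity check from the node, sketched here; even an Unauthorized response from curl would prove the listener is up:

sudo netstat -lnp | grep ':443'
curl -k https://127.0.0.1/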
Some output:
ip-10-0-0-50 core # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
47d36516ada9 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube apiserve 18 minutes ago Up 18 minutes k8s_kube-apiserver.daa12bc1_kube-apiserver-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_0ff7c6642d467da6eec9af9d96af0622_b88e9ada
48f85774ff5c gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube schedule 38 minutes ago Up 38 minutes k8s_kube-scheduler.cca58e1_kube-scheduler-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8aa2dd5e26e716aa54d97e2691e100e0_d6865ecb
1242789081a9 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube controll 38 minutes ago Up 38 minutes k8s_kube-controller-manager.9ddfd2a0_kube-controller-manager-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_66bae8c21c0937cc285af054be236103_16b6bfb9
2ebafb2a3413 gcr.io/google_containers/hyperkube:v1.0.7 "/hyperkube proxy -- 38 minutes ago Up 38 minutes k8s_kube-proxy.de5c3084_kube-proxy-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_e6965a2424ca55206c44b02ad95f479e_dacdc559
ade9cd54f391 gcr.io/google_containers/pause:0.8.0 "/pause" 38 minutes ago Up 38 minutes k8s_POD.e4cc795_kube-scheduler-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8aa2dd5e26e716aa54d97e2691e100e0_b72b8dba
78633207462f gcr.io/google_containers/pause:0.8.0 "/pause" 38 minutes ago Up 38 minutes k8s_POD.e4cc795_kube-controller-manager-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_66bae8c21c0937cc285af054be236103_71057c93
b97643a86f51 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 39 minutes ago Up 39 minutes k8s_controller-manager-elector.663462cc_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_0bb98126
0859c891679e gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 39 minutes ago Up 39 minutes k8s_scheduler-elector.468957a0_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_fe401f47
e948e718f3d8 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-apiserver-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_0ff7c6642d467da6eec9af9d96af0622_774d1393
eac6b18c0900 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-podmaster-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_8e57c3cada4c03fae8d01352505c25e5_949f1945
6411aed07d40 gcr.io/google_containers/pause:0.8.0 "/pause" 39 minutes ago Up 39 minutes k8s_POD.e4cc795_kube-proxy-ip-10-0-0-50.eu-west-1.compute.internal_kube-system_e6965a2424ca55206c44b02ad95f479e_160a3b0f
ip-10-0-0-50 core # netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 1818/hyperkube
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 7966/hyperkube
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1335/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 1800/hyperkube
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 1820/hyperkube
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 610/systemd-resolve
tcp6 0 0 :::10255 :::* LISTEN 1335/kubelet
tcp6 0 0 :::22 :::* LISTEN 1/systemd
tcp6 0 0 :::55447 :::* LISTEN 1800/hyperkube
tcp6 0 0 :::42274 :::* LISTEN 1800/hyperkube
tcp6 0 0 :::10250 :::* LISTEN 1335/kubelet
tcp6 0 0 :::5355 :::* LISTEN 610/systemd-resolve
udp 0 0 10.0.0.50:68 0.0.0.0:* 576/systemd-network
udp 0 0 0.0.0.0:8285 0.0.0.0:* 1456/flanneld
udp 0 0 0.0.0.0:5355 0.0.0.0:* 610/systemd-resolve
udp6 0 0 :::5355 :::* 610/systemd-resolve
udp6 0 0 :::52627 :::* 1800/
ip-10-0-0-50 core # docker logs 47d36516ada9
I1127 23:47:15.421827 1 aws.go:489] Zone not specified in configuration file; querying AWS metadata service
I1127 23:47:15.523047 1 aws.go:595] AWS cloud filtering on tags: map[KubernetesCluster:kubernetes]
I1127 23:47:15.692595 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
[restful] 2015/11/27 23:47:15 log.go:30: [restful/swagger] listing is available at https://10.0.0.50:443/swaggerapi/
[restful] 2015/11/27 23:47:15 log.go:30: [restful/swagger] https://10.0.0.50:443/swaggerui/ is mapped to folder /swagger-ui/
E1127 23:47:15.718842 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719005 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719150 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719307 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719457 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
E1127 23:47:15.719506 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
I1127 23:47:15.767717 1 server.go:441] Serving securely on 0.0.0.0:443
I1127 23:47:15.767796 1 server.go:483] Serving insecurely on 127.0.0.1:8080

It immediately occurred to me to check the certificates I was using after posting this (rubber-duck debugging ftw).
It turns out I was merely passing the wrong file to the --tls-cert-file= argument.
After correcting it to the right one, everything started working straight away!
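For anyone hitting the same thing, a sketch of how to catch it sooner, assuming the paths from the manifest above: inspect the cert, and confirm it actually belongs to the key being passed alongside it.

openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -subject -dates
# these two moduli should hash identically if the cert and key are a pair:
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -modulus | openssl md5
openssl rsa -in /etc/kubernetes/ssl/apiserver-key.pem -noout -modulus | openssl md5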

Related

Unable to connect browser to any of my docker images

I downloaded cookiecutter-django to start a new project the other day. I spun it up (along with postgres, redis, etc.) inside Docker containers. The configuration files should be fine because they were all generated by cookiecutter.
However, once I build and start the containers, I am unable to see the "hello world" splash page when I connect to localhost:8000. But something is going wrong between the applications and the containers, because I am able to connect to the containers via telnet and through docker exec -it commands, etc. The only thing I can think of is some sort of permissions issue, so I gave all the files/directories 777 permissions to test that, but it hasn't changed anything.
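One way to narrow this down (a sketch, assuming the compose setup shown under local.yml below) is to issue the same request from inside the django container and from the host; if the first succeeds and the second fails, the problem is in the port publishing rather than in the application:

# from inside the container (python is certainly in the image; curl may not be)
docker compose -f local.yml exec django python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/').status)"
# from the host
curl -v http://127.0.0.1:8000/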
Logs:
% docker compose -f local.yml up
[+] Running 8/0
⠿ Container dashboard_local_docs Created 0.0s
⠿ Container dashboard_local_redis Created 0.0s
⠿ Container dashboard_local_mailhog Created 0.0s
⠿ Container dashboard_local_postgres Created 0.0s
⠿ Container dashboard_local_django Created 0.0s
⠿ Container dashboard_local_celeryworker Created 0.0s
⠿ Container dashboard_local_celerybeat Created 0.0s
⠿ Container dashboard_local_flower Created 0.0s
Attaching to dashboard_local_celerybeat, dashboard_local_celeryworker, dashboard_local_django, dashboard_local_docs, dashboard_local_flower, dashboard_local_mailhog, dashboard_local_postgres, dashboard_local_redis
dashboard_local_postgres |
dashboard_local_postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
dashboard_local_postgres |
dashboard_local_postgres | 2022-07-07 14:36:15.969 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv6 address "::", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dashboard_local_postgres | 2022-07-07 14:36:15.999 UTC [26] LOG: database system was shut down at 2022-07-07 14:35:47 UTC
dashboard_local_postgres | 2022-07-07 14:36:16.004 UTC [1] LOG: database system is ready to accept connections
dashboard_local_mailhog | 2022/07/07 14:36:16 Using in-memory storage
dashboard_local_mailhog | 2022/07/07 14:36:16 [SMTP] Binding to address: 0.0.0.0:1025
dashboard_local_mailhog | 2022/07/07 14:36:16 Serving under http://0.0.0.0:8025/
dashboard_local_mailhog | [HTTP] Binding to address: 0.0.0.0:8025
dashboard_local_mailhog | Creating API v1 with WebPath:
dashboard_local_mailhog | Creating API v2 with WebPath:
dashboard_local_docs | sphinx-autobuild -b html --host 0.0.0.0 --port 9000 --watch /app -c . . ./_build/html
dashboard_local_docs | [sphinx-autobuild] > sphinx-build -b html -c . /docs /docs/_build/html
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * monotonic clock: POSIX clock_gettime
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * Running mode=standalone, port=6379.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # Server initialized
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Loading RDB produced by version 6.2.7
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB age 30 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB memory usage when created 0.78 Mb
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 # Done loading RDB, keys loaded: 3, keys expired: 0.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * DB loaded from disk: 0.000 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Ready to accept connections
dashboard_local_docs | Running Sphinx v5.0.1
dashboard_local_celeryworker | PostgreSQL is available
dashboard_local_celerybeat | PostgreSQL is available
dashboard_local_docs | loading pickled environment... done
dashboard_local_docs | building [mo]: targets for 0 po files that are out of date
dashboard_local_docs | building [html]: targets for 0 source files that are out of date
dashboard_local_docs | updating environment: 0 added, 0 changed, 0 removed
dashboard_local_docs | looking for now-outdated files... none found
dashboard_local_docs | no targets are out of date.
dashboard_local_docs | build succeeded.
dashboard_local_docs |
dashboard_local_docs | The HTML pages are in _build/html.
dashboard_local_docs | [I 220707 14:36:18 server:335] Serving on http://0.0.0.0:9000
dashboard_local_celeryworker | [14:36:18] watching "/app" and reloading "celery.__main__.main" on changes...
dashboard_local_docs | [I 220707 14:36:18 handlers:62] Start watching changes
dashboard_local_docs | [I 220707 14:36:18 handlers:64] Start detecting changes
dashboard_local_django | PostgreSQL is available
dashboard_local_celerybeat | celery beat v5.2.7 (dawn-chorus) is starting.
dashboard_local_flower | PostgreSQL is available
dashboard_local_celerybeat | __ - ... __ - _
dashboard_local_celerybeat | LocalTime -> 2022-07-07 09:36:19
dashboard_local_celerybeat | Configuration ->
dashboard_local_celerybeat | . broker -> redis://redis:6379/0
dashboard_local_celerybeat | . loader -> celery.loaders.app.AppLoader
dashboard_local_celerybeat | . scheduler -> django_celery_beat.schedulers.DatabaseScheduler
dashboard_local_celerybeat |
dashboard_local_celerybeat | . logfile -> [stderr]@%INFO
dashboard_local_celerybeat | . maxinterval -> 5.00 seconds (5s)
dashboard_local_celerybeat | [2022-07-07 09:36:19,658: INFO/MainProcess] beat: Starting...
dashboard_local_celeryworker | /usr/local/lib/python3.9/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
dashboard_local_celeryworker | absolutely not recommended!
dashboard_local_celeryworker |
dashboard_local_celeryworker | Please specify a different user using the --uid option.
dashboard_local_celeryworker |
dashboard_local_celeryworker | User information: uid=0 euid=0 gid=0 egid=0
dashboard_local_celeryworker |
dashboard_local_celeryworker | warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
dashboard_local_celeryworker |
dashboard_local_celeryworker | -------------- celery@e1ac9f770cbd v5.2.7 (dawn-chorus)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -- ******* ---- Linux-5.4.0-96-generic-x86_64-with-glibc2.31 2022-07-07 09:36:19
dashboard_local_celeryworker | - *** --- * ---
dashboard_local_celeryworker | - ** ---------- [config]
dashboard_local_celeryworker | - ** ---------- .> app: dashboard:0x7fd9dcaeb1c0
dashboard_local_celeryworker | - ** ---------- .> transport: redis://redis:6379/0
dashboard_local_celeryworker | - ** ---------- .> results: redis://redis:6379/0
dashboard_local_celeryworker | - *** --- * --- .> concurrency: 8 (prefork)
dashboard_local_celeryworker | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -------------- [queues]
dashboard_local_celeryworker | .> celery exchange=celery(direct) key=celery
dashboard_local_celeryworker |
dashboard_local_celeryworker |
dashboard_local_celeryworker | [tasks]
dashboard_local_celeryworker | . dashboard.users.tasks.get_users_count
dashboard_local_celeryworker |
dashboard_local_django | Operations to perform:
dashboard_local_django | Apply all migrations: account, admin, auth, authtoken, contenttypes, django_celery_beat, sessions, sites, socialaccount, users
dashboard_local_django | Running migrations:
dashboard_local_django | No migrations to apply.
dashboard_local_flower | INFO 2022-07-07 09:36:20,646 command 7 140098896897856 Visit me at http://localhost:5555
dashboard_local_flower | INFO 2022-07-07 09:36:20,652 command 7 140098896897856 Broker: redis://redis:6379/0
dashboard_local_flower | INFO 2022-07-07 09:36:20,655 command 7 140098896897856 Registered tasks:
dashboard_local_flower | ['celery.accumulate',
dashboard_local_flower | 'celery.backend_cleanup',
dashboard_local_flower | 'celery.chain',
dashboard_local_flower | 'celery.chord',
dashboard_local_flower | 'celery.chord_unlock',
dashboard_local_flower | 'celery.chunks',
dashboard_local_flower | 'celery.group',
dashboard_local_flower | 'celery.map',
dashboard_local_flower | 'celery.starmap',
dashboard_local_flower | 'dashboard.users.tasks.get_users_count']
dashboard_local_flower | INFO 2022-07-07 09:36:20,663 mixins 7 140098817644288 Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,792: INFO/SpawnProcess-1] Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,794: INFO/SpawnProcess-1] mingle: searching for neighbors
dashboard_local_flower | WARNING 2022-07-07 09:36:21,700 inspector 7 140098800826112 Inspect method active_queues failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,710 inspector 7 140098766993152 Inspect method reserved failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,712 inspector 7 140098784040704 Inspect method scheduled failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,714 inspector 7 140098758600448 Inspect method revoked failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098792433408 Inspect method registered failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098276423424 Inspect method conf failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098809218816 Inspect method stats failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,716 inspector 7 140098775648000 Inspect method active failed
dashboard_local_celeryworker | [2022-07-07 09:36:21,802: INFO/SpawnProcess-1] mingle: all alone
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: WARNING/SpawnProcess-1] /usr/local/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
dashboard_local_celeryworker | leak, never use this setting in production environments!
dashboard_local_celeryworker | warnings.warn('''Using settings.DEBUG leads to a memory
dashboard_local_celeryworker |
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: INFO/SpawnProcess-1] celery@e1ac9f770cbd ready.
dashboard_local_django | Watching for file changes with StatReloader
dashboard_local_django | INFO 2022-07-07 09:36:22,862 autoreload 9 140631340287808 Watching for file changes with StatReloader
dashboard_local_django | Performing system checks...
dashboard_local_django |
dashboard_local_django | System check identified no issues (0 silenced).
dashboard_local_django | July 07, 2022 - 09:36:23
dashboard_local_django | Django version 3.2.14, using settings 'config.settings.local'
dashboard_local_django | Starting development server at http://0.0.0.0:8000/
dashboard_local_django | Quit the server with CONTROL-C.
dashboard_local_celeryworker | [2022-07-07 09:36:25,661: INFO/SpawnProcess-1] Events of group {task} enabled by remote.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69591187e44d dashboard_local_flower "/entrypoint /start-…" 11 minutes ago Up 2 minutes 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp dashboard_local_flower
15914b6b91e0 dashboard_local_celerybeat "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celerybeat
e1ac9f770cbd dashboard_local_celeryworker "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celeryworker
6bbfc900c346 dashboard_local_django "/entrypoint /start" 11 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp dashboard_local_django
b8bec3422bae redis:6 "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 6379/tcp dashboard_local_redis
2b7c3d9eabe3 dashboard_production_postgres "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 5432/tcp dashboard_local_postgres
0249aaaa040c mailhog/mailhog:v1.0.0 "MailHog" 11 minutes ago Up 2 minutes 1025/tcp, 0.0.0.0:8025->8025/tcp, :::8025->8025/tcp dashboard_local_mailhog
d5dd94cbb070 dashboard_local_docs "/start-docs" 11 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp dashboard_local_docs
The ports are listening:
telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^]
% sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 127.0.0.1:43979 0.0.0.0:* LISTEN 31867/BlastServer
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 963/rpcbind
tcp 0 0 0.0.0.0:46641 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:51857 0.0.0.0:* LISTEN 4149/rpc.statd
tcp 0 0 0.0.0.0:5555 0.0.0.0:* LISTEN 14326/docker-proxy
tcp 0 0 0.0.0.0:6100 0.0.0.0:* LISTEN 31908/Xorg
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 973/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 29295/sshd
tcp 0 0 0.0.0.0:8025 0.0.0.0:* LISTEN 13769/docker-proxy
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 30117/master
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 882/sshd: noakes@no
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 14272/docker-proxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 13850/docker-proxy
tcp6 0 0 :::139 :::* LISTEN 29532/smbd
tcp6 0 0 :::40717 :::* LISTEN -
tcp6 0 0 :::41423 :::* LISTEN 4149/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 963/rpcbind
tcp6 0 0 127.0.0.1:41265 :::* LISTEN 30056/java
tcp6 0 0 :::5555 :::* LISTEN 14333/docker-proxy
tcp6 0 0 :::6100 :::* LISTEN 31908/Xorg
tcp6 0 0 :::22 :::* LISTEN 29295/sshd
tcp6 0 0 :::13782 :::* LISTEN 2201/xinetd
tcp6 0 0 :::13783 :::* LISTEN 2201/xinetd
tcp6 0 0 :::8025 :::* LISTEN 13779/docker-proxy
tcp6 0 0 ::1:25 :::* LISTEN 30117/master
tcp6 0 0 ::1:6010 :::* LISTEN 882/sshd: noakes@no
tcp6 0 0 :::13722 :::* LISTEN 2201/xinetd
tcp6 0 0 :::6556 :::* LISTEN 2201/xinetd
tcp6 0 0 :::445 :::* LISTEN 29532/smbd
tcp6 0 0 :::8000 :::* LISTEN 14278/docker-proxy
tcp6 0 0 :::1057 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7778 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7779 :::* LISTEN 2201/xinetd
tcp6 0 0 :::9000 :::* LISTEN 13860/docker-proxy
local.yml
version: '3'

volumes:
  dashboard_local_postgres_data: {}
  dashboard_local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    #user: "root:root"
    image: dashboard_local_django
    container_name: dashboard_local_django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: dashboard_production_postgres
    container_name: dashboard_local_postgres
    volumes:
      - dashboard_local_postgres_data:/var/lib/postgresql/data:Z
      - dashboard_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres

  docs:
    image: dashboard_local_docs
    container_name: dashboard_local_docs
    platform: linux/x86_64
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./dashboard:/app/dashboard:z
    ports:
      - "9000:9000"
    command: /start-docs

  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: dashboard_local_mailhog
    ports:
      - "8025:8025"

  redis:
    image: redis:6
    container_name: dashboard_local_redis

  celeryworker:
    <<: *django
    image: dashboard_local_celeryworker
    container_name: dashboard_local_celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: dashboard_local_celerybeat
    container_name: dashboard_local_celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    image: dashboard_local_flower
    container_name: dashboard_local_flower
    ports:
      - "5555:5555"
    command: /start-flower

Jaeger is not showing any trace results

I am able to run Kiali fine, but Jaeger is not showing any results. I'm using VirtualBox for this exercise, and in order to view it in my local browser I'm using port forwarding.
I think this is a communication issue between pods.
Below is what I'm using:
VirtualBox
Minimal install of CentOS_8.4.2105
istio-1.11.4
Docker version 20.10.9, build c2ea9bc
minikube version: v1.23.2
[centos@centos8 bin]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Here is my Kiali and below is my Jaeger (screenshots omitted).
[centos@centos8 warmup-exercise]$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-7bdcf77687-w5hvt 1/1 Running 0 3h58m
istio-egressgateway-5547fcc8fc-qsd2l 1/1 Running 0 3h58m
istio-ingressgateway-8f568d595-j6wzd 1/1 Running 0 3h58m
istiod-6659979bdf-9chbn 1/1 Running 0 3h58m
jaeger-5c7c5c8d87-p5678 1/1 Running 0 3h58m
kiali-7fd9f6f484-vlxms 1/1 Running 0 3h58m
prometheus-f5f544b59-br5n4 2/2 Running 0 3h58m
[centos@centos8 warmup-exercise]$ kubectl --namespace istio-system describe pod/jaeger-5c7c5c8d87-p5678
Name: jaeger-5c7c5c8d87-p5678
Namespace: istio-system
Priority: 0
Node: minikube/192.168.49.2
Start Time: Thu, 21 Oct 2021 12:42:47 -0400
Labels: app=jaeger
pod-template-hash=5c7c5c8d87
Annotations: prometheus.io/port: 14269
prometheus.io/scrape: true
sidecar.istio.io/inject: false
Status: Running
IP: 172.17.0.6
IPs:
IP: 172.17.0.6
Controlled By: ReplicaSet/jaeger-5c7c5c8d87
Containers:
jaeger:
Container ID: docker://3e155e7909f9f9976184b0b8f72880307d6bb7f8810d98c25d2dd8f18df342bb
Image: docker.io/jaegertracing/all-in-one:1.20
Image ID: docker-pullable://jaegertracing/all-in-one@sha256:54c2ea315dab7215c51c1b06b111c666f594e90317584f84eabbc59aa5856b13
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 21 Oct 2021 12:49:26 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Liveness: http-get http://:14269/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:14269/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
BADGER_EPHEMERAL: false
SPAN_STORAGE_TYPE: badger
BADGER_DIRECTORY_VALUE: /badger/data
BADGER_DIRECTORY_KEY: /badger/key
COLLECTOR_ZIPKIN_HTTP_PORT: 9411
MEMORY_MAX_TRACES: 50000
QUERY_BASE_PATH: /jaeger
Mounts:
/badger from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tj4pj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-tj4pj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 51m (x3 over 102m) kubelet Readiness probe failed: Get "http://172.17.0.6:14269/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 51m (x5 over 160m) kubelet Liveness probe failed: Get "http://172.17.0.6:14269/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning NodeNotReady 51m node-controller Node is not ready
[centos@centos8 warmup-exercise]$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api-gateway-5cd5c547c6-lrt6k 2/2 Running 0 3h30m 172.17.0.13 minikube <none> <none>
photo-service-7c79458679-trblk 2/2 Running 0 3h30m 172.17.0.11 minikube <none> <none>
position-simulator-6c7b7949f8-k2z7t 2/2 Running 0 3h30m 172.17.0.14 minikube <none> <none>
position-tracker-cbbc8b7f6-dl4gz 2/2 Running 0 3h30m 172.17.0.12 minikube <none> <none>
staff-service-6597879677-7zh2c 2/2 Running 0 3h30m 172.17.0.15 minikube <none> <none>
vehicle-telemetry-c8fcb46c6-n9764 2/2 Running 0 3h30m 172.17.0.10 minikube <none> <none>
webapp-85fd946885-zdjck 2/2 Running 0 3h30m 172.17.0.16 minikube <none> <none>
I'm still learning DevOps. Let me know if I missed something.
The default trace sampling rate for Jaeger in Istio is 1%: https://istio.io/latest/docs/tasks/observability/distributed-tracing/jaeger/
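If you want every request traced while testing, one way to raise the rate is via mesh config. A minimal sketch, assuming an istioctl-based install and the IstioOperator API (100% sampling is only sensible for a lab):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      tracing:
        sampling: 100.0

Apply it with something like istioctl install -f tracing.yaml, then restart the workloads so the sidecars pick up the new proxy config.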

Stunnel for ElastiCache Redis (cluster mode enabled)

I have spun up an ElastiCache Redis cluster with cluster mode enabled on AWS. I have 3 master shards and 1 replica for each (3 replicas in total). I have turned on in-transit encryption, and for this I have installed stunnel on my EC2 instance. My config file looks like the following (3001 is my cluster port):
[redis-cli]
client = yes
accept = 127.0.0.1:3001
connect = master1_url:3001
[redis-cli1]
client = yes
accept = 127.0.0.1:3002
connect = replica1_url.com:3001
[redis-cli2]
client = yes
accept = 127.0.0.1:3003
connect = master2_url:3001
[redis-cli3]
client = yes
accept = 127.0.0.1:3004
connect = replica2_url.com:3001
[redis-cli4]
client = yes
accept = 127.0.0.1:3005
connect = master3_url:3001
[redis-cli5]
client = yes
accept = 127.0.0.1:3006
connect = replica3_url.com:3001
----------------------------------------------
sudo netstat -tulnp | grep -i stunnel
tcp 0 0 127.0.0.1:3001 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3002 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3003 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3004 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3005 0.0.0.0:* LISTEN 32272/stunnel
tcp 0 0 127.0.0.1:3006 0.0.0.0:* LISTEN 32272/stunnel
When I connect using localhost (src/redis-cli -c -h localhost -p 3001), my connection is successful. But when I run "get key", it gets stuck with the following:
localhost:3001> get key
-> Redirected to slot [12539] located at master3_url:3001
If I change the cluster to a single shard and a single replica, everything works fine. What setting am I missing when using a multi-shard cluster? My EC2 instance is open to accept connections on all ports, and the Redis cluster is open to accept connections from the EC2 instance on all ports.
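For what it's worth, dropping the -c flag shows the raw redirect instead of following it (a sketch; this is the standard Redis Cluster MOVED reply). With -c, redis-cli reconnects directly to master3_url:3001, which presumably bypasses the local stunnel endpoints entirely:

src/redis-cli -h 127.0.0.1 -p 3001
127.0.0.1:3001> get key
(error) MOVED 12539 master3_url:3001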
This is my first question on stackoverflow :)

Chain KUBE-SERVICES - Rejects Service has no endpoints

Trying to curl the service deployed in the k8s cluster from the master node:
curl: (7) Failed to connect to localhost port 31796: Connection refused
When I check the iptables on the master of my Kubernetes cluster, I get the following:
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 10.100.94.202 /* default/some-service: has no endpoints */ tcp dpt:9015 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.103.64.79 /* default/some-service: has no endpoints */ tcp dpt:9000 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.107.111.252 /* default/some-service: has no endpoints */ tcp dpt:9015 reject-with icmp-port-unreachable
If I flush my iptables with
iptables -F
and then curl
curl -v localhost:31796
I get the following:
* Rebuilt URL to: localhost:31796/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 31796 (#0)
> GET / HTTP/1.1
> Host: localhost:31796
> User-Agent: curl/7.58.0
> Accept: */*
but soon after, it results in:
* Rebuilt URL to: localhost:31796/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 31796 failed: Connection refused
* Failed to connect to localhost port 31796: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 31796: Connection refused
I'm using the NodePort concept in my service.
Details:
kubectl get node
NAME STATUS ROLES AGE VERSION
ip-Master-IP Ready master 26h v1.12.7
ip-Node1-ip Ready <none> 26h v1.12.7
ip-Node2-ip Ready <none> 23h v1.12.7
kubectl get pods
NAME READY STATUS RESTARTS AGE
config-service-7dc8fc4ff-5kk88 1/1 Running 0 5h49m
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
cadmin-server NodePort 10.109.55.255 <none> 9015:31796/TCP 22h app=config-service
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
endpoint.yml
apiVersion: v1
kind: Endpoints
metadata:
  name: xyz
subsets:
  - addresses:
      - ip: node1_ip
      - ip: node2_ip
    ports:
      - port: 31796
        name: xyz
service.yml
apiVersion: v1
kind: Service
metadata:
  name: xyz
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /xyz
  labels:
    app: xyz
spec:
  type: NodePort
  ports:
    - nodePort: 31796
      port: 8001
      targetPort: 8001
      protocol: TCP
  selector:
    app: xyz
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: xyz
  name: xyz
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xyz
  template:
    metadata:
      labels:
        app: xyz
    spec:
      containers:
        - name: xyz
          image: abc
          ports:
            - containerPort: 8001
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
          volumeMounts:
            - mountPath: /app/
              name: config-volume
      restartPolicy: Always
      imagePullSecrets:
        - name: awslogin
      volumes:
        - configMap:
            name: xyz
          name: config-volume
You can run the following command to check endpoints:
kubectl get endpoints
If no endpoint shows up for the service, check the YAML files you used for creating the service and the deployment, and make sure the labels match.
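A minimal sketch of that check, using the xyz names from the manifests above (the service's Selector must match the pod labels for endpoints to be populated):

kubectl get endpoints xyz
kubectl describe svc xyz | grep -i selector
kubectl get pods -l app=xyz --show-labels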
As many have pointed out in the comments, the "no endpoints" firewall rule is inserted by kube-proxy and indicates a broken Service or application definition.
# iptables-save
# Generated by iptables-save v1.4.21 on Wed Feb 24 10:10:23 2021
*filter
# [...]
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "default/web-service:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30081 -j REJECT --reject-with icmp-port-unreachable
# [...]
As you have noticed as well, kube-proxy constantly monitors the firewall rules and inserts or deletes rules dynamically according to the Kubernetes Pod and Service definitions.
# kubectl get service --namespace=default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 198d
web-service NodePort 10.111.188.199 <none> 8201:30081/TCP 194d
# kubectl get pods --namespace=default
No resources found in default namespace.
In this example, a Service is defined but the Pod associated with the Service does not exist.
Still, the kube-proxy process listens on port 30081:
# netstat -lpn | grep -i kube
[...]
tcp 0 0 0.0.0.0:30081 0.0.0.0:* LISTEN 21542/kube-proxy
[...]
So kube-proxy inserts a firewall rule to reject traffic for the broken service.
kube-proxy will also delete this rule as soon as you delete the Service definition:
# kubectl delete service web-service --namespace=default
service "web-service" deleted
# iptables-save | grep -i "no endpoints" | wc -l
0
As a side note: this rule is also inserted for Kubernetes definitions that kube-proxy doesn't like. As an example, your service can have the name "log-service" but can't have the name "web-log". In the latter case no warning is given, but this blocking rule is inserted.

How to connect to GKE postgresql svc in GCP?

I'm trying to connect to the PostgreSQL service (pod) in my Kubernetes deployment, but GCP does not give me a port (so I cannot use something like $ psql -h localhost -U postgresadmin1 --password -p 31070 postgresdb to connect to PostgreSQL and see my database).
I'm using a LoadBalancer in my service:
#cloudshell:~ (academic-veld-230622)$ psql -h 35.239.52.68 -U jhipsterpress --password -p 30728 jhipsterpress-postgresql
Password for user jhipsterpress:
psql: could not connect to server: Connection timed out
Is the server running on host "35.239.52.68" and accepting
TCP/IP connections on port 30728?
apiVersion: v1
kind: Service
metadata:
  name: jhipsterpress
  namespace: default
  labels:
    app: jhipsterpress
spec:
  selector:
    app: jhipsterpress
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
NAME READY STATUS RESTARTS AGE
pod/jhipsterpress-84886f5cdf-mpwgb 1/1 Running 0 31m
pod/jhipsterpress-postgresql-5956df9557-fg8cn 1/1 Running 0 31m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jhipsterpress LoadBalancer 10.11.243.22 35.184.135.134 8080:32670/TCP 31m
service/jhipsterpress-postgresql LoadBalancer 10.11.255.64 35.239.52.68 5432:30728/TCP 31m
service/kubernetes ClusterIP 10.11.240.1 <none> 443/TCP 35m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/jhipsterpress 1 1 1 1 31m
deployment.apps/jhipsterpress-postgresql 1 1 1 1 31m
NAME DESIRED CURRENT READY AGE
replicaset.apps/jhipsterpress-84886f5cdf 1 1 1 31m
replicaset.apps/jhipsterpress-postgresql-5956df9557 1 1 1 31m
#cloudshell:~ (academic-veld-230622)$ kubectl describe pod jhipsterpress-postgresql
Name: jhipsterpress-postgresql-5956df9557-fg8cn
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-bf9f446d-9hsq/10.128.0.58
Start Time: Sat, 06 Apr 2019 13:39:08 +0200
Labels: app=jhipsterpress-postgresql
pod-template-hash=1512895113
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container postgres
Status: Running
IP: 10.8.0.14
Controlled By: ReplicaSet/jhipsterpress-postgresql-5956df9557
Containers:
postgres:
Container ID: docker://55475d369c63da4d9bdc208e9d43c457f74845846fb4914c88c286ff96d0e45a
Image: postgres:10.4
Image ID: docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa
Port: 5432/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 06 Apr 2019 13:39:29 +0200
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
POSTGRES_USER: jhipsterpress
POSTGRES_PASSWORD: <set to the key 'postgres-password' in secret 'jhipsterpress-postgresql'> Optional: false
Mounts:
/var/lib/pgsql/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mlmm5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: spingular-bucket
ReadOnly: false
default-token-mlmm5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mlmm5
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 33m (x3 over 33m) default-scheduler persistentvolumeclaim "spingular-bucket" not found
Warning FailedScheduling 33m (x3 over 33m) default-scheduler pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 33m default-scheduler Successfully assigned default/jhipsterpress-postgresql-5956df9557-fg8cn to gke-standard-cluster-1-default-pool-bf9f446d-9hsq
Normal SuccessfulAttachVolume 33m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-95ba1737-5860-11e9-ae59-42010a8000a8"
Normal Pulling 33m kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq pulling image "postgres:10.4"
Normal Pulled 32m kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq Successfully pulled image "postgres:10.4"
Normal Created 32m kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq Created container
Normal Started 32m kubelet, gke-standard-cluster-1-default-pool-bf9f446d-9hsq Started container
With the open firewall rule posgresql-jhipster:
Direction: Ingress
Targets: Apply to all
IP ranges: 0.0.0.0/0
Protocols/ports: tcp:30728
Action: Allow
Priority: 999
Network: default
Thanks for your help. Any documentation is really appreciated.
Your service is currently of type ClusterIP. This does not expose the service or the pods outside the cluster, so you can't connect to the pod from Cloud Shell like this, since Cloud Shell is not on your VPC and the pods are not exposed.
Update your service using kubectl edit svc jhipsterpress-postgresql and change the spec.type field to LoadBalancer.
You will then have an external IP that you can connect to.
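An equivalent sketch without opening an editor, using kubectl patch (service name taken from the output above; the external IP can take a minute to be assigned):

kubectl patch svc jhipsterpress-postgresql -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc jhipsterpress-postgresql --watch
# once EXTERNAL-IP is populated, connect to the service port (5432), not the NodePort:
psql -h <EXTERNAL-IP> -U jhipsterpress --password -p 5432 jhipsterpress-postgresql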