Docker redis sync fell into error loop on remote server - django

I am running the Django dev server in Docker with Celery and Redis on my remote host machine.
Everything works fine for about 30 minutes, and then Redis falls into an infinite loop of MASTER <-> REPLICA sync attempts.
Here's the console output:
redis_1 | 1:S 16 Feb 2023 17:42:37.119 * Non blocking connect for SYNC fired the event.
redis_1 | 1:S 16 Feb 2023 17:42:37.805 # Failed to read response from the server: No error information
redis_1 | 1:S 16 Feb 2023 17:42:37.805 # Master did not respond to command during SYNC handshake
redis_1 | 1:S 16 Feb 2023 17:42:38.057 * Connecting to MASTER 194.40.243.205:8886
redis_1 | 1:S 16 Feb 2023 17:42:38.058 * MASTER <-> REPLICA sync started
redis_1 | 1:S 16 Feb 2023 17:42:38.111 * Non blocking connect for SYNC fired the event.
redis_1 | 1:S 16 Feb 2023 17:42:39.194 * Master replied to PING, replication can continue...
redis_1 | 1:S 16 Feb 2023 17:42:39.367 * Partial resynchronization not possible (no cached master)
redis_1 | 1:S 16 Feb 2023 17:42:39.449 * Full resync from master: ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ:1
redis_1 | 1:S 16 Feb 2023 17:42:39.449 * MASTER <-> REPLICA sync: receiving 54992 bytes from master to disk
redis_1 | 1:S 16 Feb 2023 17:42:39.607 * MASTER <-> REPLICA sync: Flushing old data
redis_1 | 1:S 16 Feb 2023 17:42:39.608 * MASTER <-> REPLICA sync: Loading DB in memory
redis_1 | 1:S 16 Feb 2023 17:42:39.621 # Wrong signature trying to load DB from file
redis_1 | 1:S 16 Feb 2023 17:42:39.623 # Failed trying to load the MASTER synchronization DB from disk: Invalid argument
redis_1 | 1:S 16 Feb 2023 17:42:39.624 * Reconnecting to MASTER 194.40.243.205:8886 after failure
redis_1 | 1:S 16 Feb 2023 17:42:39.625 * MASTER <-> REPLICA sync started
redis_1 | 1:S 16 Feb 2023 17:42:39.709 * Non blocking connect for SYNC fired the event.
redis_1 | 1:S 16 Feb 2023 17:42:40.891 # Failed to read response from the server: No error information
redis_1 | 1:S 16 Feb 2023 17:42:40.891 # Master did not respond to command during SYNC handshake
redis_1 | 1:S 16 Feb 2023 17:42:41.069 * Connecting to MASTER 194.40.243.205:8886
redis_1 | 1:S 16 Feb 2023 17:42:41.069 * MASTER <-> REPLICA sync started
redis_1 | 1:S 16 Feb 2023 17:42:41.128 * Non blocking connect for SYNC fired the event.
redis_1 | 1:S 16 Feb 2023 17:42:42.167 # Failed to read response from the server: No error information
redis_1 | 1:S 16 Feb 2023 17:42:42.167 # Master did not respond to command during SYNC handshake
redis_1 | 1:S 16 Feb 2023 17:42:43.074 * Connecting to MASTER 194.40.243.205:8886
and this is only a few seconds of output.
My docker-compose file:
services:
  redis:
    image: redis:latest
    restart: always
    ports:
      - "6379:6379"
  django:
    image: docker-app:django
    container_name: django-app
    build: .
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
  celery:
    image: celery-app
    restart: on-failure
    build: .
    command:
      - celery
      - -A
      - myapp.celery_app
      - worker
      - -B
      - --loglevel=INFO
    volumes:
      - .:/app/
    container_name: celery
    depends_on:
      - django
I have no idea what's wrong here, because on my local machine everything works fine. I also tried apk-get update && apk-get upgrade, but nothing changed.
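In case it helps to reproduce, the replication state of the container can be inspected with redis-cli (a quick diagnostic sketch; it assumes the Compose service is named redis as in the file above):
# Ask Redis whether it considers itself a master or a replica, and of which host
docker-compose exec redis redis-cli INFO replication
# List the currently connected clients, to spot unexpected external connections
docker-compose exec redis redis-cli CLIENT LIST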
EDIT
Output on redis start
redis_1 | 1:C 16 Feb 2023 18:02:56.934 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 16 Feb 2023 18:02:56.935 # Redis version=7.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 16 Feb 2023 18:02:56.935 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 16 Feb 2023 18:02:56.936 * Increased maximum number of open files to 10032 (it was originally set to 1024).
redis_1 | 1:M 16 Feb 2023 18:02:56.936 * monotonic clock: POSIX clock_gettime
redis_1 | 1:M 16 Feb 2023 18:02:56.943 * Running mode=standalone, port=6379.
redis_1 | 1:M 16 Feb 2023 18:02:56.943 # Server initialized
redis_1 | 1:M 16 Feb 2023 18:02:56.945 * Loading RDB produced by version 7.0.8
redis_1 | 1:M 16 Feb 2023 18:02:56.945 * RDB age 1201 seconds
redis_1 | 1:M 16 Feb 2023 18:02:56.946 * RDB memory usage when created 1.44 Mb
redis_1 | 1:M 16 Feb 2023 18:02:56.946 * Done loading RDB, keys loaded: 0, keys expired: 0.
redis_1 | 1:M 16 Feb 2023 18:02:56.946 * DB loaded from disk: 0.001 seconds
redis_1 | 1:M 16 Feb 2023 18:02:56.946 * Ready to accept connections

Related

Unable to connect browser to any of my docker images

I downloaded cookiecutter-django to start a new project the other day. I spun it up (along with postgres, redis, etc.) inside Docker containers. The configuration files should be fine because they were all generated by cookiecutter.
However, once I build and start the containers, I am unable to see the "hello world" splash page when I connect to localhost:8000. Something seems to be going wrong between the applications and the containers, because I am able to connect to the containers via telnet and through docker exec -it commands, etc. The only thing I can think of is some sort of permissions issue, so I gave all the files/directories 777 permissions to test that, but that hasn't changed anything.
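To narrow down whether the problem is the host-to-container port mapping or the app itself, a quick check is to request the page both from the host and from inside the django container (a sketch; it assumes curl is available in the image):
# From the host, through the published port
curl -v http://127.0.0.1:8000/ -o /dev/null
# From inside the django container, bypassing the docker-proxy mapping
docker compose -f local.yml exec django curl -v http://localhost:8000/ -o /dev/null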
logs
% docker compose -f local.yml up
[+] Running 8/0
⠿ Container dashboard_local_docs Created 0.0s
⠿ Container dashboard_local_redis Created 0.0s
⠿ Container dashboard_local_mailhog Created 0.0s
⠿ Container dashboard_local_postgres Created 0.0s
⠿ Container dashboard_local_django Created 0.0s
⠿ Container dashboard_local_celeryworker Created 0.0s
⠿ Container dashboard_local_celerybeat Created 0.0s
⠿ Container dashboard_local_flower Created 0.0s
Attaching to dashboard_local_celerybeat, dashboard_local_celeryworker, dashboard_local_django, dashboard_local_docs, dashboard_local_flower, dashboard_local_mailhog, dashboard_local_postgres, dashboard_local_redis
dashboard_local_postgres |
dashboard_local_postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
dashboard_local_postgres |
dashboard_local_postgres | 2022-07-07 14:36:15.969 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv6 address "::", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dashboard_local_postgres | 2022-07-07 14:36:15.999 UTC [26] LOG: database system was shut down at 2022-07-07 14:35:47 UTC
dashboard_local_postgres | 2022-07-07 14:36:16.004 UTC [1] LOG: database system is ready to accept connections
dashboard_local_mailhog | 2022/07/07 14:36:16 Using in-memory storage
dashboard_local_mailhog | 2022/07/07 14:36:16 [SMTP] Binding to address: 0.0.0.0:1025
dashboard_local_mailhog | 2022/07/07 14:36:16 Serving under http://0.0.0.0:8025/
dashboard_local_mailhog | [HTTP] Binding to address: 0.0.0.0:8025
dashboard_local_mailhog | Creating API v1 with WebPath:
dashboard_local_mailhog | Creating API v2 with WebPath:
dashboard_local_docs | sphinx-autobuild -b html --host 0.0.0.0 --port 9000 --watch /app -c . . ./_build/html
dashboard_local_docs | [sphinx-autobuild] > sphinx-build -b html -c . /docs /docs/_build/html
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * monotonic clock: POSIX clock_gettime
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * Running mode=standalone, port=6379.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # Server initialized
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Loading RDB produced by version 6.2.7
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB age 30 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB memory usage when created 0.78 Mb
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 # Done loading RDB, keys loaded: 3, keys expired: 0.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * DB loaded from disk: 0.000 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Ready to accept connections
dashboard_local_docs | Running Sphinx v5.0.1
dashboard_local_celeryworker | PostgreSQL is available
dashboard_local_celerybeat | PostgreSQL is available
dashboard_local_docs | loading pickled environment... done
dashboard_local_docs | building [mo]: targets for 0 po files that are out of date
dashboard_local_docs | building [html]: targets for 0 source files that are out of date
dashboard_local_docs | updating environment: 0 added, 0 changed, 0 removed
dashboard_local_docs | looking for now-outdated files... none found
dashboard_local_docs | no targets are out of date.
dashboard_local_docs | build succeeded.
dashboard_local_docs |
dashboard_local_docs | The HTML pages are in _build/html.
dashboard_local_docs | [I 220707 14:36:18 server:335] Serving on http://0.0.0.0:9000
dashboard_local_celeryworker | [14:36:18] watching "/app" and reloading "celery.__main__.main" on changes...
dashboard_local_docs | [I 220707 14:36:18 handlers:62] Start watching changes
dashboard_local_docs | [I 220707 14:36:18 handlers:64] Start detecting changes
dashboard_local_django | PostgreSQL is available
dashboard_local_celerybeat | celery beat v5.2.7 (dawn-chorus) is starting.
dashboard_local_flower | PostgreSQL is available
dashboard_local_celerybeat | __ - ... __ - _
dashboard_local_celerybeat | LocalTime -> 2022-07-07 09:36:19
dashboard_local_celerybeat | Configuration ->
dashboard_local_celerybeat | . broker -> redis://redis:6379/0
dashboard_local_celerybeat | . loader -> celery.loaders.app.AppLoader
dashboard_local_celerybeat | . scheduler -> django_celery_beat.schedulers.DatabaseScheduler
dashboard_local_celerybeat |
dashboard_local_celerybeat | . logfile -> [stderr]@%INFO
dashboard_local_celerybeat | . maxinterval -> 5.00 seconds (5s)
dashboard_local_celerybeat | [2022-07-07 09:36:19,658: INFO/MainProcess] beat: Starting...
dashboard_local_celeryworker | /usr/local/lib/python3.9/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
dashboard_local_celeryworker | absolutely not recommended!
dashboard_local_celeryworker |
dashboard_local_celeryworker | Please specify a different user using the --uid option.
dashboard_local_celeryworker |
dashboard_local_celeryworker | User information: uid=0 euid=0 gid=0 egid=0
dashboard_local_celeryworker |
dashboard_local_celeryworker | warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
dashboard_local_celeryworker |
dashboard_local_celeryworker | -------------- celery@e1ac9f770cbd v5.2.7 (dawn-chorus)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -- ******* ---- Linux-5.4.0-96-generic-x86_64-with-glibc2.31 2022-07-07 09:36:19
dashboard_local_celeryworker | - *** --- * ---
dashboard_local_celeryworker | - ** ---------- [config]
dashboard_local_celeryworker | - ** ---------- .> app: dashboard:0x7fd9dcaeb1c0
dashboard_local_celeryworker | - ** ---------- .> transport: redis://redis:6379/0
dashboard_local_celeryworker | - ** ---------- .> results: redis://redis:6379/0
dashboard_local_celeryworker | - *** --- * --- .> concurrency: 8 (prefork)
dashboard_local_celeryworker | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -------------- [queues]
dashboard_local_celeryworker | .> celery exchange=celery(direct) key=celery
dashboard_local_celeryworker |
dashboard_local_celeryworker |
dashboard_local_celeryworker | [tasks]
dashboard_local_celeryworker | . dashboard.users.tasks.get_users_count
dashboard_local_celeryworker |
dashboard_local_django | Operations to perform:
dashboard_local_django | Apply all migrations: account, admin, auth, authtoken, contenttypes, django_celery_beat, sessions, sites, socialaccount, users
dashboard_local_django | Running migrations:
dashboard_local_django | No migrations to apply.
dashboard_local_flower | INFO 2022-07-07 09:36:20,646 command 7 140098896897856 Visit me at http://localhost:5555
dashboard_local_flower | INFO 2022-07-07 09:36:20,652 command 7 140098896897856 Broker: redis://redis:6379/0
dashboard_local_flower | INFO 2022-07-07 09:36:20,655 command 7 140098896897856 Registered tasks:
dashboard_local_flower | ['celery.accumulate',
dashboard_local_flower | 'celery.backend_cleanup',
dashboard_local_flower | 'celery.chain',
dashboard_local_flower | 'celery.chord',
dashboard_local_flower | 'celery.chord_unlock',
dashboard_local_flower | 'celery.chunks',
dashboard_local_flower | 'celery.group',
dashboard_local_flower | 'celery.map',
dashboard_local_flower | 'celery.starmap',
dashboard_local_flower | 'dashboard.users.tasks.get_users_count']
dashboard_local_flower | INFO 2022-07-07 09:36:20,663 mixins 7 140098817644288 Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,792: INFO/SpawnProcess-1] Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,794: INFO/SpawnProcess-1] mingle: searching for neighbors
dashboard_local_flower | WARNING 2022-07-07 09:36:21,700 inspector 7 140098800826112 Inspect method active_queues failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,710 inspector 7 140098766993152 Inspect method reserved failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,712 inspector 7 140098784040704 Inspect method scheduled failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,714 inspector 7 140098758600448 Inspect method revoked failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098792433408 Inspect method registered failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098276423424 Inspect method conf failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098809218816 Inspect method stats failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,716 inspector 7 140098775648000 Inspect method active failed
dashboard_local_celeryworker | [2022-07-07 09:36:21,802: INFO/SpawnProcess-1] mingle: all alone
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: WARNING/SpawnProcess-1] /usr/local/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
dashboard_local_celeryworker | leak, never use this setting in production environments!
dashboard_local_celeryworker | warnings.warn('''Using settings.DEBUG leads to a memory
dashboard_local_celeryworker |
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: INFO/SpawnProcess-1] celery@e1ac9f770cbd ready.
dashboard_local_django | Watching for file changes with StatReloader
dashboard_local_django | INFO 2022-07-07 09:36:22,862 autoreload 9 140631340287808 Watching for file changes with StatReloader
dashboard_local_django | Performing system checks...
dashboard_local_django |
dashboard_local_django | System check identified no issues (0 silenced).
dashboard_local_django | July 07, 2022 - 09:36:23
dashboard_local_django | Django version 3.2.14, using settings 'config.settings.local'
dashboard_local_django | Starting development server at http://0.0.0.0:8000/
dashboard_local_django | Quit the server with CONTROL-C.
dashboard_local_celeryworker | [2022-07-07 09:36:25,661: INFO/SpawnProcess-1] Events of group {task} enabled by remote.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69591187e44d dashboard_local_flower "/entrypoint /start-…" 11 minutes ago Up 2 minutes 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp dashboard_local_flower
15914b6b91e0 dashboard_local_celerybeat "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celerybeat
e1ac9f770cbd dashboard_local_celeryworker "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celeryworker
6bbfc900c346 dashboard_local_django "/entrypoint /start" 11 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp dashboard_local_django
b8bec3422bae redis:6 "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 6379/tcp dashboard_local_redis
2b7c3d9eabe3 dashboard_production_postgres "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 5432/tcp dashboard_local_postgres
0249aaaa040c mailhog/mailhog:v1.0.0 "MailHog" 11 minutes ago Up 2 minutes 1025/tcp, 0.0.0.0:8025->8025/tcp, :::8025->8025/tcp dashboard_local_mailhog
d5dd94cbb070 dashboard_local_docs "/start-docs" 11 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp dashboard_local_docs
the ports are listening
telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^]
% sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 127.0.0.1:43979 0.0.0.0:* LISTEN 31867/BlastServer
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 963/rpcbind
tcp 0 0 0.0.0.0:46641 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:51857 0.0.0.0:* LISTEN 4149/rpc.statd
tcp 0 0 0.0.0.0:5555 0.0.0.0:* LISTEN 14326/docker-proxy
tcp 0 0 0.0.0.0:6100 0.0.0.0:* LISTEN 31908/Xorg
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 973/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 29295/sshd
tcp 0 0 0.0.0.0:8025 0.0.0.0:* LISTEN 13769/docker-proxy
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 30117/master
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 882/sshd: noakes#no
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 14272/docker-proxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 13850/docker-proxy
tcp6 0 0 :::139 :::* LISTEN 29532/smbd
tcp6 0 0 :::40717 :::* LISTEN -
tcp6 0 0 :::41423 :::* LISTEN 4149/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 963/rpcbind
tcp6 0 0 127.0.0.1:41265 :::* LISTEN 30056/java
tcp6 0 0 :::5555 :::* LISTEN 14333/docker-proxy
tcp6 0 0 :::6100 :::* LISTEN 31908/Xorg
tcp6 0 0 :::22 :::* LISTEN 29295/sshd
tcp6 0 0 :::13782 :::* LISTEN 2201/xinetd
tcp6 0 0 :::13783 :::* LISTEN 2201/xinetd
tcp6 0 0 :::8025 :::* LISTEN 13779/docker-proxy
tcp6 0 0 ::1:25 :::* LISTEN 30117/master
tcp6 0 0 ::1:6010 :::* LISTEN 882/sshd: noakes#no
tcp6 0 0 :::13722 :::* LISTEN 2201/xinetd
tcp6 0 0 :::6556 :::* LISTEN 2201/xinetd
tcp6 0 0 :::445 :::* LISTEN 29532/smbd
tcp6 0 0 :::8000 :::* LISTEN 14278/docker-proxy
tcp6 0 0 :::1057 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7778 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7779 :::* LISTEN 2201/xinetd
tcp6 0 0 :::9000 :::* LISTEN 13860/docker-proxy
local.yml
version: '3'
volumes:
  dashboard_local_postgres_data: {}
  dashboard_local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    #user: "root:root"
    image: dashboard_local_django
    container_name: dashboard_local_django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: dashboard_production_postgres
    container_name: dashboard_local_postgres
    volumes:
      - dashboard_local_postgres_data:/var/lib/postgresql/data:Z
      - dashboard_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: dashboard_local_docs
    container_name: dashboard_local_docs
    platform: linux/x86_64
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./dashboard:/app/dashboard:z
    ports:
      - "9000:9000"
    command: /start-docs
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: dashboard_local_mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:6
    container_name: dashboard_local_redis
  celeryworker:
    <<: *django
    image: dashboard_local_celeryworker
    container_name: dashboard_local_celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: dashboard_local_celerybeat
    container_name: dashboard_local_celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: dashboard_local_flower
    container_name: dashboard_local_flower
    ports:
      - "5555:5555"
    command: /start-flower

Failing when running celery from docker

I'm trying to get Celery to work with Django and Docker. The build works well, but Celery won't run. Any ideas?
Here are the docker-compose logs -f errors:
Starting django-celery_redis_1 ... done
Starting django-celery_db_1 ... done
Starting django-celery_flower_1 ... done
Starting django-celery_celery_beat_1 ... done
Starting django-celery_celery_worker_1 ... done
Starting django-celery_web_1 ... done
Attaching to django-celery_db_1, django-celery_redis_1, django-celery_celery_worker_1, django-celery_flower_1, django-celery_celery_beat_1, django-celery_web_1
celery_beat_1 | standard_init_linux.go:219: exec user process caused: exec format error
db_1 | 2021-03-28 18:18:15.611 UTC [1] LOG: starting PostgreSQL 12.0 on x86_64-pc-linux-musl, compiled by gcc (Alpine 8.3.0) 8.3.0, 64-bit
db_1 | 2021-03-28 18:18:15.613 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2021-03-28 18:18:15.616 UTC [1] LOG: listening on IPv6 address "::", port 5432
celery_worker_1 | standard_init_linux.go:219: exec user process caused: exec format error
db_1 | 2021-03-28 18:18:15.648 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
redis_1 | 1:C 28 Mar 2021 18:18:15.425 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 28 Mar 2021 18:18:15.425 # Redis version=5.0.12, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 28 Mar 2021 18:18:15.425 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 28 Mar 2021 18:18:15.427 * Running mode=standalone, port=6379.
redis_1 | 1:M 28 Mar 2021 18:18:15.427 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 28 Mar 2021 18:18:15.427 # Server initialized
redis_1 | 1:M 28 Mar 2021 18:18:15.427 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 28 Mar 2021 18:18:15.428 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 28 Mar 2021 18:18:15.428 * Ready to accept connections
flower_1 | standard_init_linux.go:219: exec user process caused: exec format error
web_1 | standard_init_linux.go:219: exec user process caused: exec format error
db_1 | 2021-03-28 18:18:15.777 UTC [19] LOG: database system was shut down at 2021-03-28 18:16:52 UTC
db_1 | 2021-03-28 18:18:15.791 UTC [1] LOG: database system is ready to accept connections
django-celery_celery_worker_1 exited with code 1
django-celery_flower_1 exited with code 1
django-celery_celery_beat_1 exited with code 1
django-celery_web_1 exited with code 1
UPDATED: added the docker-compose.yml file for better reference on the stack problem. The build succeeds, but when running docker-compose up it fails and throws the celery error.
docker-compose.yml
version: '3.8'
services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    # env_file is used to manage the env variables of our project
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=hello_django_dev
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
  redis:
    image: redis:5-alpine
  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db
  celery_beat:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_beat
    command: /start-celerybeat
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db
  flower:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celey_flower
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    ports:
      - 5557:5555
    depends_on:
      - redis
      - db
volumes:
  postgres_data:
I'm not sure it applies to your case, but in Docker I start Celery with this command:
command: celery -A my_proj worker -l DEBUG
Since the error it gives you is "exec format error", it might just be this.
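Applied to the compose file above, that would mean replacing the /start-celeryworker wrapper with an explicit command (a sketch only; my_proj stands in for the actual Django project/Celery app name):
celery_worker:
  build:
    context: .
    dockerfile: ./compose/local/django/Dockerfile
  image: django_celery_example_celery_worker
  # run celery directly instead of the /start-celeryworker wrapper script
  command: celery -A my_proj worker -l DEBUG
  volumes:
    - .:/app
  env_file:
    - ./.env/.dev-sample
  depends_on:
    - redis
    - db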

AWS CodeDeploy script exited with code 127

This is my first time using AWS CodeDeploy and I'm having problems creating my appspec.yml file.
This is the error I'm getting:
2019-02-16 19:28:06 ERROR [codedeploy-agent(3596)]:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Error during perform:
InstanceAgent::Plugins::CodeDeployPlugin::ScriptError -
Script at specified location: deploy_scripts/install_project_dependencies
run as user root failed with exit code 127 -
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:183:in `execute_script'
This is my appspec.yml file
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/admin_panel_backend
hooks:
  BeforeInstall:
    - location: deploy_scripts/install_dependencies
      timeout: 300
      runas: root
    - location: deploy_scripts/start_server
      timeout: 300
      runas: root
  AfterInstall:
    - location: deploy_scripts/install_project_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: deploy_scripts/stop_server
      timeout: 300
      runas: root
And this is my project structure
drwxr-xr-x 7 501 20 224 Feb 6 20:57 api
-rw-r--r-- 1 501 20 501 Feb 16 16:29 appspec.yml
-rw-r--r-- 1 501 20 487 Feb 14 21:54 bitbucket-pipelines.yml
-rw-r--r-- 1 501 20 3716 Feb 14 20:43 codedeploy_deploy.py
drwxr-xr-x 4 501 20 128 Feb 6 20:57 config
-rw-r--r-- 1 501 20 1047 Feb 4 22:56 config.yml
drwxr-xr-x 6 501 20 192 Feb 16 16:25 deploy_scripts
drwxr-xr-x 264 501 20 8448 Feb 6 17:40 node_modules
-rw-r--r-- 1 501 20 101215 Feb 6 20:57 package-lock.json
-rw-r--r-- 1 501 20 580 Feb 6 20:57 package.json
-rw-r--r-- 1 501 20 506 Feb 4 08:50 server.js
And deploy_scripts folder
-rwxr--r-- 1 501 20 50 Feb 14 22:54 install_dependencies
-rwxr--r-- 1 501 20 61 Feb 16 16:25 install_project_dependencies
-rwxr--r-- 1 501 20 32 Feb 14 22:44 start_server
-rwxr--r-- 1 501 20 31 Feb 14 22:44 stop_server
This is my install_project_dependencies script
#!/bin/bash
cd /var/www/html/admin_panel_backend
npm install
All the other scripts are working OK, except this one (install_project_dependencies).
Thank you all.
After reading a lot, I realized I was having the same problem as in NPM issue deploying a nodejs instance using AWS codedeploy: I didn't have my PATH variable set.
So leaving my script as this worked fine:
#!/bin/bash
source /root/.bash_profile
cd /var/www/html/admin_panel_backend
npm install
Thanks!
I had the exact same problem because npm was installed for EC2-user and not for root. I solved it by adding this line to my install_dependencies script.
su - ec2-user -c 'cd /usr/local/nginx/html/node && npm install'
You can replace your npm install line with the line above to install as your user.
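Applied to the install_project_dependencies script from the question, that would look roughly like this (a sketch; it keeps the path used in the question):
#!/bin/bash
# run npm install as ec2-user, whose environment has npm on the PATH
su - ec2-user -c 'cd /var/www/html/admin_panel_backend && npm install'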

Failed to start Kibana on AWS machine

I'm following a blog post about using the ELK stack. The machine for installation is a small Amazon Ubuntu instance.
I got to the point where I need to install the Kibana service, so I run:
sudo apt-get install kibana
Then I changed these settings in /etc/kibana/kibana.yml:
server.port: 5601
elasticsearch.url: "0.0.0.0:9200"
since I can get a response from Elasticsearch with sudo curl 0.0.0.0:9200.
Then I run:
sudo service kibana start
And after running sudo service kibana status I receive:
x@ip-xx-xx-xx-xx:/$ sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2016-12-02 13:52:55 UTC; 13ms ago
Main PID: 5921 (node)
Tasks: 6
Memory: 1.1M
CPU: 3ms
CGroup: /system.slice/kibana.service
└─5921 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
Dec 02 13:52:55 ip-xx-xx-xx-xx systemd[1]: Started Kibana.
x@ip-xx-xx-xx-xx:/$ sudo service kibana status
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Dec 02 13:52:56 ip-xx-xx-xx-xx kibana[5921]: buildSha: '8f2ace746d1b84702bb618308efa65dc0c3f8a34' },
Dec 02 13:52:56 ip-xx-xx-xx-xx kibana[5921]: dev: { basePathProxyTarget: 5603 },
Dec 02 13:52:56 ip-xx-xx-xx-xx kibana[5921]: pid: { exclusive: false },
Dec 02 13:52:56 ip-xx-xx-xx-xx systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 13:52:56 ip-xx-xx-xx-xx systemd[1]: kibana.service: Unit entered failed state.
Dec 02 13:52:56 ip-xx-xx-xx-xx systemd[1]: kibana.service: Failed with result 'exit-code'.
Dec 02 13:52:57 ip-xx-xx-xx-xx systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Dec 02 13:52:57 ip-xx-xx-xx-xx systemd[1]: Stopped Kibana.
Dec 02 13:52:57 ip-xx-xx-xx-xx systemd[1]: kibana.service: Start request repeated too quickly.
Dec 02 13:52:57 ip-xx-xx-xx-xx systemd[1]: Failed to start Kibana.
Unfortunately, the log is not created under the directory /var/log/kibana even after setting rights with chown kibana:kibana /var/log/kibana:
ll /var/log/kibana/
total 8
drwxr-xr-x 2 kibana kibana 4096 Dec 2 10:20 ./
drwxrwxr-x 9 root syslog 4096 Dec 2 09:50 ../
First of all I wish to see the Kibana log (resolving the whole problem would be even better :) )
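One way to get a log file to inspect (a sketch, assuming this Kibana version supports the legacy logging.dest setting) is to point Kibana's own logging at a file the kibana user can write:
# add to /etc/kibana/kibana.yml
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
After sudo service kibana restart, journalctl -u kibana.service should also show whatever startup error systemd captured in the meantime.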

AWS SES: Stuck in sandbox mode

I was ready to use SES for production so I had my sending limits increased. This is the email from AWS:
"Congratulations! After reviewing your case, we have increased your sending quota to 50,000 messages per day and your maximum send rate to 14 messages per second in AWS Region US East (N. Virginia). Your account has also been moved out of the sandbox, so you no longer need to verify recipient addresses."
I configured sSMTP so I can send email using the mail command, using the AWS endpoint and generated SMTP credentials. I send an email and I get this:
"Oct 17 14:08:10 ia sSMTP[20486]: 554 Message rejected: Email address is not verified. The following identities failed the check in region US-EAST-1: root <root@ia.internal.vdopia.com>, root@ia.internal.vdopia.com"
The SMTP endpoint is: email-smtp.us-east-1.amazonaws.com:587
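A typical /etc/ssmtp/ssmtp.conf for this kind of setup looks roughly like the sketch below (placeholder credentials; the actual file may differ):
# /etc/ssmtp/ssmtp.conf -- sketch with placeholder SES SMTP credentials
mailhub=email-smtp.us-east-1.amazonaws.com:587
AuthUser=YOUR_SES_SMTP_USERNAME
AuthPass=YOUR_SES_SMTP_PASSWORD
UseSTARTTLS=YES
FromLineOverride=YES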
What am I doing wrong?
Updated
Output of syslog:
Oct 19 07:29:44 ia sSMTP[427]: Creating SSL connection to host
Oct 19 07:29:44 ia sSMTP[427]: 220 email-smtp.amazonaws.com ESMTP SimpleEmailService-1652178317 ANxvvoY79LhkdX5l8cYI
Oct 19 07:29:44 ia sSMTP[427]: EHLO ia.internal.vdopia.com
Oct 19 07:29:44 ia sSMTP[427]: 250 Ok
Oct 19 07:29:44 ia sSMTP[427]: STARTTLS
Oct 19 07:29:44 ia sSMTP[427]: 220 Ready to start TLS
Oct 19 07:29:44 ia sSMTP[427]: SSL connection using RSA_AES_128_CBC_SHA1
Oct 19 07:29:44 ia sSMTP[427]: EHLO ia.internal.vdopia.com
Oct 19 07:29:44 ia sSMTP[427]: 250 Ok
Oct 19 07:29:44 ia sSMTP[427]: AUTH LOGIN
--- removing some lines
Oct 19 07:29:44 ia sSMTP[427]: 235 Authentication successful.
Oct 19 07:29:44 ia sSMTP[427]: MAIL FROM: <root@ia.internal.vdopia.com>
Oct 19 07:29:44 ia sSMTP[427]: 250 Ok
Oct 19 07:29:44 ia sSMTP[427]: RCPT TO:<ayush.sharma@vdopia.com>
Oct 19 07:29:44 ia sSMTP[427]: 250 Ok
Oct 19 07:29:44 ia sSMTP[427]: DATA
Oct 19 07:29:44 ia sSMTP[427]: 354 End data with <CR><LF>.<CR><LF>
Oct 19 07:29:44 ia sSMTP[427]: Received: by ia.internal.vdopia.com (sSMTP sendmail emulation); Wed, 19 Oct 2016 07:29:44 +0000
Oct 19 07:29:44 ia sSMTP[427]: From: "root" <root@ia.internal.vdopia.com>
Oct 19 07:29:44 ia sSMTP[427]: Date: Wed, 19 Oct 2016 07:29:44 +0000
Oct 19 07:29:44 ia sSMTP[427]: Subject: testing
Oct 19 07:29:44 ia sSMTP[427]: To: <ayush.sharma@vdopia.com>
Oct 19 07:29:44 ia sSMTP[427]: X-Mailer: mail (GNU Mailutils 2.99.98)
Oct 19 07:29:44 ia sSMTP[427]:
Oct 19 07:29:45 ia sSMTP[427]: .
Oct 19 07:29:45 ia sSMTP[427]: 554 Message rejected: Email address is not verified. The following identities failed the check in region US-EAST-1: root <root@ia.internal.vdopia.com>, root@ia.internal.vdopia.com
SMTP Response Codes Returned by Amazon SES
When your account is moved out of the sandbox, you do not have to verify the recipients' addresses, but you still have to verify the sender's address or domain. From your post, it appears you have not verified the sender's address. Remember to verify the address/domain that appears in the following headers (a CLI sketch for doing so follows the list):
From
Source
Sender / Return-Path
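Verification can be done from the SES console or with the AWS CLI, for example (a sketch; replace the address and domain with the ones you actually send from):
# verify a single sender address (SES emails a confirmation link to it)
aws ses verify-email-identity --email-address root@ia.internal.vdopia.com --region us-east-1
# or verify the whole sending domain (returns a TXT record to add to DNS)
aws ses verify-domain-identity --domain ia.internal.vdopia.com --region us-east-1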
Can you post your actual mail command/script that you are using to send the mail?
Even though this question is already answered correctly, maybe this tip helps other people. After the activation and switch from the AWS SES sandbox to production mode (via a support request for a limit increase), I realized that using the old "SMTP IAM user" caused the same problem. Just create a new "SMTP IAM user" after the production grant. I really cannot explain it, but that has worked several times now.