I can't open the Django admin in the browser after trying Apache (Django)

So, I am working with AWS EC2. I have two projects going on: one is a Django server, which I run with docker-compose; the other is the Yourls URL shortener, which uses Apache. I used to run Django with my domain pointing at it, so I could see it in Chrome. But after running Yourls once, and even after deleting its Docker image, I still cannot see the Django admin page in the browser.
It is showing this (screenshot not included).
I have deleted all Docker images to make sure nothing else is interfering, but it is still the same.
The netstat output (Unix domain sockets section) is as follows:
Proto RefCnt Flags Type State I-Node Path
unix 3 [ ] STREAM CONNECTED 15681 /run/systemd/journal/stdout
unix 3 [ ] STREAM CONNECTED 23538 /run/containerd/containerd.sock.ttrpc
unix 3 [ ] STREAM CONNECTED 22661
unix 2 [ ] DGRAM 17831
unix 3 [ ] STREAM CONNECTED 22633 /run/docker.sock
unix 3 [ ] STREAM CONNECTED 24310 /run/containerd/containerd.sock.ttrpc
unix 3 [ ] STREAM CONNECTED 18192 /run/systemd/journal/stdout
unix 3 [ ] STREAM CONNECTED 22664
unix 3 [ ] STREAM CONNECTED 21460
unix 3 [ ] STREAM CONNECTED 24045
unix 3 [ ] STREAM CONNECTED 21430 /run/systemd/journal/stdout
unix 3 [ ] STREAM CONNECTED 17792
unix 3 [ ] STREAM CONNECTED 23914 /run/containerd/containerd.sock.ttrpc
unix 3 [ ] STREAM CONNECTED 22656 /run/docker.sock
unix 3 [ ] STREAM CONNECTED 19504 /var/lib/amazon/ssm/ipc/health
unix 3 [ ] STREAM CONNECTED 22749
unix 2 [ ] DGRAM 17852
unix 3 [ ] STREAM CONNECTED 22672 /run/docker.sock
unix 3 [ ] STREAM CONNECTED 21429
unix 3 [ ] STREAM CONNECTED 22632
unix 2 [ ] DGRAM 19450
unix 3 [ ] STREAM CONNECTED 23532 /run/containerd/containerd.sock.ttrpc
unix 3 [ ] STREAM CONNECTED 20893
unix 3 [ ] STREAM CONNECTED 17795
unix 3 [ ] STREAM CONNECTED 23645 /run/containerd/s/d64571e53c1a71f42e23d70f2d650c7512082d2666fa4a5eef4ed754a6fe0826
unix 3 [ ] STREAM CONNECTED 22662 /run/docker.sock
unix 3 [ ] STREAM CONNECTED 23537
unix 3 [ ] STREAM CONNECTED 22665 /run/docker.sock
unix 2 [ ] DGRAM 18863
unix 3 [ ] STREAM CONNECTED 26211
unix 3 [ ] STREAM CONNECTED 18191
unix 3 [ ] STREAM CONNECTED 21207 /run/systemd/journal/stdout
unix 3 [ ] STREAM CONNECTED 18355
unix 3 [ ] STREAM CONNECTED 23913
unix 3 [ ] STREAM CONNECTED 21461 /run/containerd/containerd.sock
unix 3 [ ] STREAM CONNECTED 17798
unix 3 [ ] STREAM CONNECTED 23531
unix 3 [ ] STREAM CONNECTED 22637
unix 3 [ ] STREAM CONNECTED 18835
unix 3 [ ] STREAM CONNECTED 19502
unix 3 [ ] STREAM CONNECTED 22668 /run/docker.sock
unix 3 [ ] STREAM CONNECTED 19500
unix 3 [ ] STREAM CONNECTED 24050 /run/containerd/s/50d5201de88336c967d319932c6dc4199570d91805f37ba98b9eaf7a18b7c0ca
unix 3 [ ] STREAM CONNECTED 22667
unix 3 [ ] STREAM CONNECTED 21457 /run/containerd/containerd.sock
unix 3 [ ] STREAM CONNECTED 26212
unix 3 [ ] STREAM CONNECTED 22658
unix 3 [ ] STREAM CONNECTED 21456
unix 3 [ ] STREAM CONNECTED 23640
unix 3 [ ] STREAM CONNECTED 18356 /run/systemd/journal/stdout
unix 3 [ ] STREAM CONNECTED 22659 /run/docker.sock
unix 2 [ ] DGRAM 21443
unix 3 [ ] STREAM CONNECTED 24309
unix 3 [ ] STREAM CONNECTED 22670
unix 3 [ ] STREAM CONNECTED 18836 /run/systemd/journal/stdout
unix 2 [ ] DGRAM 18623
unix 2 [ ] DGRAM 26206
unix 3 [ ] STREAM CONNECTED 17791
unix 3 [ ] STREAM CONNECTED 22655
unix 3 [ ] STREAM CONNECTED 20892
unix 3 [ ] STREAM CONNECTED 17794
unix 3 [ ] STREAM CONNECTED 19501 /var/lib/amazon/ssm/ipc/termination
unix 3 [ ] STREAM CONNECTED 17797
unix 2 [ ] DGRAM 20887
unix 3 [ ] STREAM CONNECTED 21206
unix 3 [ ] STREAM CONNECTED 22641 /run/docker.sock
[ec2-user@ip-172-31-3-230 ~]$ sudo netstat -tuln | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
[ec2-user@ip-172-31-3-230 ~]$ sudo netstat -tuln | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
[ec2-user@ip-172-31-3-230 ~]$ sudo lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 4755 root 4u IPv4 24032 0t0 TCP *:http (LISTEN)
[ec2-user@ip-172-31-3-230 ~]$ how can I turn off ps aux
-bash: how: command not found
[ec2-user@ip-172-31-3-230 ~]$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.5 41660 5292 ? Ss 01:45 0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root 1715 0.0 0.0 0 0 ? S 01:45 0:00 [xfsaild/xvda1]
root 1774 0.0 1.9 52348 19192 ? Ss 01:45 0:00 /usr/lib/systemd/systemd-journald
root 1791 0.0 0.2 116756 2168 ? Ss 01:45 0:00 /usr/sbin/lvmetad -f
root 2429 0.0 0.4 46460 4056 ? Ss 01:45 0:00 /usr/lib/systemd/systemd-udevd
root 2433 0.0 0.0 0 0 ? I< 01:45 0:00 [ena]
root 2481 0.0 0.0 0 0 ? I< 01:45 0:00 [cryptd]
root 2575 0.0 0.0 0 0 ? I< 01:45 0:00 [rpciod]
root 2576 0.0 0.0 0 0 ? I< 01:45 0:00 [xprtiod]
root 2579 0.0 0.2 59748 2168 ? S<sl 01:45 0:00 /sbin/auditd
dbus 2602 0.0 0.4 58272 4088 ? Ss 01:45 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
libstor+ 2603 0.0 0.2 12632 1992 ? Ss 01:45 0:00 /usr/bin/lsmd -d
root 2609 0.0 0.3 28484 2984 ? Ss 01:45 0:00 /usr/lib/systemd/systemd-logind
rpc 2639 0.0 0.3 67280 3220 ? Ss 01:45 0:00 /sbin/rpcbind -w
root 2640 0.0 0.3 212004 3284 ? Ssl 01:45 0:00 /usr/sbin/gssproxy -D
rngd 2641 0.0 0.4 94100 4552 ? Ss 01:45 0:00 /sbin/rngd -f --fill-watermark=0 --exclude=jitter
chrony 2649 0.0 0.3 120360 3188 ? S 01:45 0:00 /usr/sbin/chronyd -F 2
root 2853 0.0 0.3 98684 3804 ? Ss 01:45 0:00 /sbin/dhclient -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid -H ip-172-31-3-230 eth0
root 2902 0.0 0.4 98684 4180 ? Ss 01:45 0:00 /sbin/dhclient -6 -nw -lf /var/lib/dhclient/dhclient6--eth0.lease -pf /var/run/dhclient6-eth0.pid eth0 -H ip-172-31-3-230
root 3046 0.0 0.4 90364 4788 ? Ss 01:45 0:00 /usr/libexec/postfix/master -w
postfix 3047 0.0 0.6 90452 6652 ? S 01:45 0:00 pickup -l -t unix -u
postfix 3048 0.0 0.6 90528 6756 ? S 01:45 0:00 qmgr -l -t unix -u
root 3084 0.0 0.7 110852 7628 ? Ss 01:45 0:00 /usr/sbin/sshd -D
root 3085 0.0 0.9 270396 8916 ? Ssl 01:45 0:00 /usr/sbin/rsyslogd -n
root 3087 0.0 1.2 714420 12756 ? Ssl 01:45 0:00 /usr/bin/amazon-ssm-agent
root 3096 0.0 0.2 27896 2116 ? Ss 01:45 0:00 /usr/sbin/atd -f
root 3108 0.0 0.3 135092 3120 ? Ss 01:45 0:00 /usr/sbin/crond -n
root 3124 0.0 0.1 121312 1744 tty1 Ss+ 01:45 0:00 /sbin/agetty --noclear tty1 linux
root 3125 0.0 0.2 120960 2256 ttyS0 Ss+ 01:45 0:00 /sbin/agetty --keep-baud 115200,38400,9600 ttyS0 vt220
root 3147 0.0 2.0 723848 20176 ? Sl 01:45 0:00 /usr/bin/ssm-agent-worker
root 3163 0.0 0.0 4272 104 ? Ss 01:45 0:00 /usr/sbin/acpid
root 3222 0.0 0.0 0 0 ? I 02:00 0:00 [kworker/0:3-eve]
root 3248 0.0 0.8 150628 8380 ? Ss 02:04 0:00 sshd: ec2-user [priv]
ec2-user 3298 0.0 0.4 150628 4344 ? S 02:04 0:00 sshd: ec2-user#pts/0
ec2-user 3299 0.0 0.4 124872 4012 pts/0 Ss 02:04 0:00 -bash
root 3351 0.1 4.4 1400652 44076 ? Ssl 02:04 0:01 /usr/bin/containerd
root 3363 0.1 8.4 1503432 83088 ? Ssl 02:04 0:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit nofile=32768:65536
root 3385 0.0 0.0 4228 772 ? S 02:04 0:00 bpfilter_umh
ec2-user 4302 0.0 3.6 755756 35600 pts/0 Sl+ 02:05 0:00 docker-compose -f staging.yml up
root 4363 0.0 1.0 712212 10060 ? Sl 02:05 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 388fa081cda8e3e3f70b5de09399cbeb170d89cf586d15c1e9c82519a6348f4b -address /run/containerd/containerd.sock
root 4365 0.0 1.0 712212 10088 ? Sl 02:05 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c09b6c495e0a1a75a417ee26d223b55635ca52cb48d2dee61f0e3b5b9c398a7c -address /run/containerd/containerd.sock
libstor+ 4428 0.1 0.7 61092 7376 ? Ssl 02:05 0:02 redis-server *:6379
libstor+ 4439 0.0 2.7 213336 26720 ? Ss 02:05 0:00 postgres
root 4539 0.0 0.0 0 0 ? I 02:05 0:00 [kworker/u30:1-e]
root 4652 0.0 0.9 712212 9136 ? Sl 02:05 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c086faadf9116601082b45b0c85678b752c12c61e67bacba1940e727cbbda005 -address /run/containerd/containerd.sock
101 4676 0.0 0.3 5792 3160 ? Ss 02:05 0:00 /bin/bash /start
libstor+ 4702 0.0 0.8 213456 8036 ? Ss 02:05 0:00 postgres: checkpointer
libstor+ 4703 0.0 0.5 213336 5812 ? Ss 02:05 0:00 postgres: background writer
libstor+ 4704 0.0 1.0 213336 10000 ? Ss 02:05 0:00 postgres: walwriter
libstor+ 4705 0.0 0.8 213872 8408 ? Ss 02:05 0:00 postgres: autovacuum launcher
libstor+ 4706 0.0 0.4 68064 4856 ? Ss 02:05 0:00 postgres: stats collector
libstor+ 4707 0.0 0.6 213772 6560 ? Ss 02:05 0:00 postgres: logical replication launcher
root 4744 0.0 0.2 1012672 2800 ? Sl 02:05 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.19.0.3 -container-port 443
root 4755 0.0 0.3 1164232 3092 ? Sl 02:05 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.19.0.3 -container-port 80
root 4768 0.0 0.9 712212 9448 ? Sl 02:05 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4d52fc467adaa8da1ab5305c3e77d91494d3495fc3e9647c72a7364eec1071be -address /run/containerd/containerd.sock
root 4791 0.0 5.3 773408 53000 ? Ssl 02:05 0:01 traefik traefik
101 4847 0.0 2.8 32736 28596 ? S 02:05 0:00 /usr/local/bin/python /usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
101 4848 0.0 7.5 88688 74948 ? S 02:05 0:01 /usr/local/bin/python /usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
101 4849 0.0 7.5 88600 74980 ? S 02:05 0:01 /usr/local/bin/python /usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
101 4850 0.0 7.5 88640 74952 ? S 02:05 0:01 /usr/local/bin/python /usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
101 4851 0.0 7.5 88668 74912 ? S 02:05 0:01 /usr/local/bin/python /usr/local/bin/gunicorn config.wsgi --bind 0.0.0.0:5000 --chdir=/app
root 4961 0.0 0.8 150656 8380 ? Ss 02:21 0:00 sshd: ec2-user [priv]
ec2-user 5011 0.0 0.3 150656 3548 ? S 02:21 0:00 sshd: ec2-user#pts/1
ec2-user 5012 0.0 0.4 124872 4076 pts/1 Ss 02:21 0:00 -bash
root 5067 0.0 0.0 0 0 ? I 02:27 0:00 [kworker/0:0-xfs]
ec2-user 5093 0.0 0.3 162292 3920 pts/1 R+ 02:34 0:00 ps aux
All docker containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d52fc467ada openformat_staging_traefik "/entrypoint.sh trae…" 2 days ago Up 30 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp server-traefik-1
c086faadf911 openformat_staging_django "/entrypoint /start" 2 days ago Up 30 minutes server-django-1
c09b6c495e0a openformat_staging_postgres "docker-entrypoint.s…" 3 days ago Up 30 minutes 5432/tcp server-postgres-1
388fa081cda8 redis:6 "docker-entrypoint.s…" 11 days ago Up 30 minutes 6379/tcp server-redis-1
What am I doing wrong? How can I fix this?
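A few checks that usually narrow this down (a hedged sketch based on the output above; example.com is a placeholder for the real domain, since Traefik routes requests by their Host header):
# is Apache/httpd still running on the host itself, outside Docker?
sudo systemctl status httpd
# port 80 is held by docker-proxy and forwarded to the Traefik container,
# so Traefik's logs should show whether requests arrive and where they get routed
docker logs server-traefik-1 --tail 50
# request the admin page locally with the expected Host header and inspect the response
curl -v -H "Host: example.com" http://127.0.0.1/admin/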

Related

Unable to connect browser to any of my docker images

I downloaded cookiecutter-django to start a new project the other day. I spun it up (along with postgres, redis, etc.) inside Docker containers. The configuration files should be fine because they were all generated by cookiecutter.
However, once I build and start the containers, I am unable to see the "hello world" splash page when I connect to localhost:8000. There must be something going wrong between the applications and the containers, because I am able to connect to them via telnet and through docker exec -it commands, etc. The only thing I can think of is some sort of permissions issue, so I gave all the files/directories 777 permissions to test that, but that hasn't changed anything.
logs
% docker compose -f local.yml up
[+] Running 8/0
⠿ Container dashboard_local_docs Created 0.0s
⠿ Container dashboard_local_redis Created 0.0s
⠿ Container dashboard_local_mailhog Created 0.0s
⠿ Container dashboard_local_postgres Created 0.0s
⠿ Container dashboard_local_django Created 0.0s
⠿ Container dashboard_local_celeryworker Created 0.0s
⠿ Container dashboard_local_celerybeat Created 0.0s
⠿ Container dashboard_local_flower Created 0.0s
Attaching to dashboard_local_celerybeat, dashboard_local_celeryworker, dashboard_local_django, dashboard_local_docs, dashboard_local_flower, dashboard_local_mailhog, dashboard_local_postgres, dashboard_local_redis
dashboard_local_postgres |
dashboard_local_postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
dashboard_local_postgres |
dashboard_local_postgres | 2022-07-07 14:36:15.969 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv6 address "::", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dashboard_local_postgres | 2022-07-07 14:36:15.999 UTC [26] LOG: database system was shut down at 2022-07-07 14:35:47 UTC
dashboard_local_postgres | 2022-07-07 14:36:16.004 UTC [1] LOG: database system is ready to accept connections
dashboard_local_mailhog | 2022/07/07 14:36:16 Using in-memory storage
dashboard_local_mailhog | 2022/07/07 14:36:16 [SMTP] Binding to address: 0.0.0.0:1025
dashboard_local_mailhog | 2022/07/07 14:36:16 Serving under http://0.0.0.0:8025/
dashboard_local_mailhog | [HTTP] Binding to address: 0.0.0.0:8025
dashboard_local_mailhog | Creating API v1 with WebPath:
dashboard_local_mailhog | Creating API v2 with WebPath:
dashboard_local_docs | sphinx-autobuild -b html --host 0.0.0.0 --port 9000 --watch /app -c . . ./_build/html
dashboard_local_docs | [sphinx-autobuild] > sphinx-build -b html -c . /docs /docs/_build/html
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * monotonic clock: POSIX clock_gettime
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * Running mode=standalone, port=6379.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # Server initialized
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Loading RDB produced by version 6.2.7
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB age 30 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB memory usage when created 0.78 Mb
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 # Done loading RDB, keys loaded: 3, keys expired: 0.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * DB loaded from disk: 0.000 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Ready to accept connections
dashboard_local_docs | Running Sphinx v5.0.1
dashboard_local_celeryworker | PostgreSQL is available
dashboard_local_celerybeat | PostgreSQL is available
dashboard_local_docs | loading pickled environment... done
dashboard_local_docs | building [mo]: targets for 0 po files that are out of date
dashboard_local_docs | building [html]: targets for 0 source files that are out of date
dashboard_local_docs | updating environment: 0 added, 0 changed, 0 removed
dashboard_local_docs | looking for now-outdated files... none found
dashboard_local_docs | no targets are out of date.
dashboard_local_docs | build succeeded.
dashboard_local_docs |
dashboard_local_docs | The HTML pages are in _build/html.
dashboard_local_docs | [I 220707 14:36:18 server:335] Serving on http://0.0.0.0:9000
dashboard_local_celeryworker | [14:36:18] watching "/app" and reloading "celery.__main__.main" on changes...
dashboard_local_docs | [I 220707 14:36:18 handlers:62] Start watching changes
dashboard_local_docs | [I 220707 14:36:18 handlers:64] Start detecting changes
dashboard_local_django | PostgreSQL is available
dashboard_local_celerybeat | celery beat v5.2.7 (dawn-chorus) is starting.
dashboard_local_flower | PostgreSQL is available
dashboard_local_celerybeat | __ - ... __ - _
dashboard_local_celerybeat | LocalTime -> 2022-07-07 09:36:19
dashboard_local_celerybeat | Configuration ->
dashboard_local_celerybeat | . broker -> redis://redis:6379/0
dashboard_local_celerybeat | . loader -> celery.loaders.app.AppLoader
dashboard_local_celerybeat | . scheduler -> django_celery_beat.schedulers.DatabaseScheduler
dashboard_local_celerybeat |
dashboard_local_celerybeat | . logfile -> [stderr]#%INFO
dashboard_local_celerybeat | . maxinterval -> 5.00 seconds (5s)
dashboard_local_celerybeat | [2022-07-07 09:36:19,658: INFO/MainProcess] beat: Starting...
dashboard_local_celeryworker | /usr/local/lib/python3.9/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
dashboard_local_celeryworker | absolutely not recommended!
dashboard_local_celeryworker |
dashboard_local_celeryworker | Please specify a different user using the --uid option.
dashboard_local_celeryworker |
dashboard_local_celeryworker | User information: uid=0 euid=0 gid=0 egid=0
dashboard_local_celeryworker |
dashboard_local_celeryworker | warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
dashboard_local_celeryworker |
dashboard_local_celeryworker | -------------- celery#e1ac9f770cbd v5.2.7 (dawn-chorus)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -- ******* ---- Linux-5.4.0-96-generic-x86_64-with-glibc2.31 2022-07-07 09:36:19
dashboard_local_celeryworker | - *** --- * ---
dashboard_local_celeryworker | - ** ---------- [config]
dashboard_local_celeryworker | - ** ---------- .> app: dashboard:0x7fd9dcaeb1c0
dashboard_local_celeryworker | - ** ---------- .> transport: redis://redis:6379/0
dashboard_local_celeryworker | - ** ---------- .> results: redis://redis:6379/0
dashboard_local_celeryworker | - *** --- * --- .> concurrency: 8 (prefork)
dashboard_local_celeryworker | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -------------- [queues]
dashboard_local_celeryworker | .> celery exchange=celery(direct) key=celery
dashboard_local_celeryworker |
dashboard_local_celeryworker |
dashboard_local_celeryworker | [tasks]
dashboard_local_celeryworker | . dashboard.users.tasks.get_users_count
dashboard_local_celeryworker |
dashboard_local_django | Operations to perform:
dashboard_local_django | Apply all migrations: account, admin, auth, authtoken, contenttypes, django_celery_beat, sessions, sites, socialaccount, users
dashboard_local_django | Running migrations:
dashboard_local_django | No migrations to apply.
dashboard_local_flower | INFO 2022-07-07 09:36:20,646 command 7 140098896897856 Visit me at http://localhost:5555
dashboard_local_flower | INFO 2022-07-07 09:36:20,652 command 7 140098896897856 Broker: redis://redis:6379/0
dashboard_local_flower | INFO 2022-07-07 09:36:20,655 command 7 140098896897856 Registered tasks:
dashboard_local_flower | ['celery.accumulate',
dashboard_local_flower | 'celery.backend_cleanup',
dashboard_local_flower | 'celery.chain',
dashboard_local_flower | 'celery.chord',
dashboard_local_flower | 'celery.chord_unlock',
dashboard_local_flower | 'celery.chunks',
dashboard_local_flower | 'celery.group',
dashboard_local_flower | 'celery.map',
dashboard_local_flower | 'celery.starmap',
dashboard_local_flower | 'dashboard.users.tasks.get_users_count']
dashboard_local_flower | INFO 2022-07-07 09:36:20,663 mixins 7 140098817644288 Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,792: INFO/SpawnProcess-1] Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,794: INFO/SpawnProcess-1] mingle: searching for neighbors
dashboard_local_flower | WARNING 2022-07-07 09:36:21,700 inspector 7 140098800826112 Inspect method active_queues failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,710 inspector 7 140098766993152 Inspect method reserved failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,712 inspector 7 140098784040704 Inspect method scheduled failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,714 inspector 7 140098758600448 Inspect method revoked failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098792433408 Inspect method registered failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098276423424 Inspect method conf failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098809218816 Inspect method stats failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,716 inspector 7 140098775648000 Inspect method active failed
dashboard_local_celeryworker | [2022-07-07 09:36:21,802: INFO/SpawnProcess-1] mingle: all alone
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: WARNING/SpawnProcess-1] /usr/local/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
dashboard_local_celeryworker | leak, never use this setting in production environments!
dashboard_local_celeryworker | warnings.warn('''Using settings.DEBUG leads to a memory
dashboard_local_celeryworker |
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: INFO/SpawnProcess-1] celery#e1ac9f770cbd ready.
dashboard_local_django | Watching for file changes with StatReloader
dashboard_local_django | INFO 2022-07-07 09:36:22,862 autoreload 9 140631340287808 Watching for file changes with StatReloader
dashboard_local_django | Performing system checks...
dashboard_local_django |
dashboard_local_django | System check identified no issues (0 silenced).
dashboard_local_django | July 07, 2022 - 09:36:23
dashboard_local_django | Django version 3.2.14, using settings 'config.settings.local'
dashboard_local_django | Starting development server at http://0.0.0.0:8000/
dashboard_local_django | Quit the server with CONTROL-C.
dashboard_local_celeryworker | [2022-07-07 09:36:25,661: INFO/SpawnProcess-1] Events of group {task} enabled by remote.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69591187e44d dashboard_local_flower "/entrypoint /start-…" 11 minutes ago Up 2 minutes 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp dashboard_local_flower
15914b6b91e0 dashboard_local_celerybeat "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celerybeat
e1ac9f770cbd dashboard_local_celeryworker "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celeryworker
6bbfc900c346 dashboard_local_django "/entrypoint /start" 11 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp dashboard_local_django
b8bec3422bae redis:6 "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 6379/tcp dashboard_local_redis
2b7c3d9eabe3 dashboard_production_postgres "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 5432/tcp dashboard_local_postgres
0249aaaa040c mailhog/mailhog:v1.0.0 "MailHog" 11 minutes ago Up 2 minutes 1025/tcp, 0.0.0.0:8025->8025/tcp, :::8025->8025/tcp dashboard_local_mailhog
d5dd94cbb070 dashboard_local_docs "/start-docs" 11 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp dashboard_local_docs
the ports are listening
telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^]
% sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 127.0.0.1:43979 0.0.0.0:* LISTEN 31867/BlastServer
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 963/rpcbind
tcp 0 0 0.0.0.0:46641 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:51857 0.0.0.0:* LISTEN 4149/rpc.statd
tcp 0 0 0.0.0.0:5555 0.0.0.0:* LISTEN 14326/docker-proxy
tcp 0 0 0.0.0.0:6100 0.0.0.0:* LISTEN 31908/Xorg
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 973/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 29295/sshd
tcp 0 0 0.0.0.0:8025 0.0.0.0:* LISTEN 13769/docker-proxy
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 30117/master
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 882/sshd: noakes#no
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 14272/docker-proxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 13850/docker-proxy
tcp6 0 0 :::139 :::* LISTEN 29532/smbd
tcp6 0 0 :::40717 :::* LISTEN -
tcp6 0 0 :::41423 :::* LISTEN 4149/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 963/rpcbind
tcp6 0 0 127.0.0.1:41265 :::* LISTEN 30056/java
tcp6 0 0 :::5555 :::* LISTEN 14333/docker-proxy
tcp6 0 0 :::6100 :::* LISTEN 31908/Xorg
tcp6 0 0 :::22 :::* LISTEN 29295/sshd
tcp6 0 0 :::13782 :::* LISTEN 2201/xinetd
tcp6 0 0 :::13783 :::* LISTEN 2201/xinetd
tcp6 0 0 :::8025 :::* LISTEN 13779/docker-proxy
tcp6 0 0 ::1:25 :::* LISTEN 30117/master
tcp6 0 0 ::1:6010 :::* LISTEN 882/sshd: noakes#no
tcp6 0 0 :::13722 :::* LISTEN 2201/xinetd
tcp6 0 0 :::6556 :::* LISTEN 2201/xinetd
tcp6 0 0 :::445 :::* LISTEN 29532/smbd
tcp6 0 0 :::8000 :::* LISTEN 14278/docker-proxy
tcp6 0 0 :::1057 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7778 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7779 :::* LISTEN 2201/xinetd
tcp6 0 0 :::9000 :::* LISTEN 13860/docker-proxy
local.yml
version: '3'

volumes:
  dashboard_local_postgres_data: {}
  dashboard_local_postgres_data_backups: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    #user: "root:root"
    image: dashboard_local_django
    container_name: dashboard_local_django
    platform: linux/x86_64
    depends_on:
      - postgres
      - redis
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: dashboard_production_postgres
    container_name: dashboard_local_postgres
    volumes:
      - dashboard_local_postgres_data:/var/lib/postgresql/data:Z
      - dashboard_local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres

  docs:
    image: dashboard_local_docs
    container_name: dashboard_local_docs
    platform: linux/x86_64
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./dashboard:/app/dashboard:z
    ports:
      - "9000:9000"
    command: /start-docs

  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: dashboard_local_mailhog
    ports:
      - "8025:8025"

  redis:
    image: redis:6
    container_name: dashboard_local_redis

  celeryworker:
    <<: *django
    image: dashboard_local_celeryworker
    container_name: dashboard_local_celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker

  celerybeat:
    <<: *django
    image: dashboard_local_celerybeat
    container_name: dashboard_local_celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat

  flower:
    <<: *django
    image: dashboard_local_flower
    container_name: dashboard_local_flower
    ports:
      - "5555:5555"
    command: /start-flower
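Since telnet connects but the browser shows nothing, it may help to look at the raw HTTP response and at the Django container's log while making a request; a hedged sketch, assuming the compose file and service names above:
# show the actual status line, headers and body returned by the dev server
curl -v http://127.0.0.1:8000/
# follow the django service's log while the request is made; errors such as
# DisallowedHost (an ALLOWED_HOSTS mismatch) would typically show up here
docker compose -f local.yml logs -f django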

GCP Cloud Build: react-scripts build doesn't find the env file

I'm doing something I thought was simple:
# Fetch config
- name: 'gcr.io/cloud-builders/gsutil'
  volumes:
    - name: 'vol1'
      path: '/persistent_volume'
  args: [ 'cp', 'gs://servicesconfig/devs/react-app/env.server', '/persistent_volume/env.server' ]
# Install dependencies
- name: node:$_NODE_VERSION
  entrypoint: 'yarn'
  args: [ 'install' ]
# Build project
- name: node:$_NODE_VERSION
  volumes:
    - name: 'vol1'
      path: '/persistent_volume'
  entrypoint: 'bash'
  args:
    - -c
    - |
      cp /persistent_volume/env.server .env.production &&
      cat .env.production &&
      ls -la &&
      yarn run build:prod
while in my package.json:
"build:prod": "sh -ac '. .env.production; react-scripts build'",
All of this works well locally, but this is the output in GCP Cloud Build:
Already have image: node:14
REACT_APP_ENV="sandbox"
REACT_APP_CAPTCHA_ENABLED=true
REACT_APP_CAPTCHA_PUBLIC_KEY="akey"
REACT_APP_DEFAULT_APP="home-btn"
REACT_APP_API_URL="akey2"
REACT_APP_STRIPE_KEY="akey3"
REACT_APP_COGNITO_POOL_ID="akey4"
REACT_APP_COGNITO_APP_ID="akey5"
total 2100
drwxr-xr-x 6 root root 4096 Feb 25 12:15 .
drwxr-xr-x 1 root root 4096 Feb 25 12:15 ..
-rw-r--r-- 1 root root 382 Feb 25 12:15 .env.production <- it's here!
drwxr-xr-x 8 root root 4096 Feb 25 12:13 .git
-rw-r--r-- 1 root root 230 Feb 25 12:13 .gitignore
-rw-r--r-- 1 root root 371 Feb 25 12:13 Dockerfile
-rw-r--r-- 1 root root 3787 Feb 25 12:13 README.md
-rw-r--r-- 1 root root 1019 Feb 25 12:13 cloudbuild.yaml
drwxr-xr-x 1089 root root 36864 Feb 25 12:14 node_modules
-rw-r--r-- 1 root root 1580131 Feb 25 12:13 package-lock.json
-rw-r--r-- 1 root root 1896 Feb 25 12:13 package.json
drwxr-xr-x 2 root root 4096 Feb 25 12:13 public
drwxr-xr-x 9 root root 4096 Feb 25 12:13 src
-rw-r--r-- 1 root root 535 Feb 25 12:13 tsconfig.json
-rw-r--r-- 1 root root 478836 Feb 25 12:13 yarn.lock
/workspace
yarn run v1.22.17
$ sh -ac '. .env.production; react-scripts build'
sh: 1: .: .env.production: not found
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm unsure if I'm doing something completely wrong or if it's a bug on GCP's side.
Alright, I'm not expert enough in the bash and sh documentation to understand what the issue is, but I ended up solving it.
One thing to pay attention to:
the workspace is actually shared between steps in Cloud Build, so there is no need for a volume or any specific path.
So on the Cloud Build side I changed the YAML to:
- name: node:$_NODE_VERSION
  entrypoint: 'bash'
  args:
    - -c
    - |
      mv env.server .env.production &&
      yarn run build:prod
And in package.json I'm now using an extra lib, env-cmd,
which changes the build command to:
"build:prod": "env-cmd -f .env.production react-scripts build",
This works like a charm.
I'm a bit annoyed I had to add another lib for this, but, well.
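For what it's worth, the original failure is most likely just how sh (dash on the node images) resolves the . (source) builtin: it looks the file up on PATH rather than in the current directory, so a bare .env.production is reported as not found. If that is the cause, path-qualifying the file should also work without the extra library (a hedged sketch, not verified on Cloud Build):
# same as the old build:prod script, but sourcing with an explicit ./ prefix
sh -ac '. ./.env.production; react-scripts build'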

How to properly configure my terraform folders for AWS deployment?

This is what my folder structure looks like:
total 248
drwxrwxr-x 6 miki miki 4096 Mar 7 16:01 ./
drwxrwxr-x 5 miki miki 4096 Mar 3 14:53 ../
-rw-rw-r-- 1 miki miki 460 Mar 4 11:59 application_01.tf
drwxrwxr-x 3 miki miki 4096 Mar 8 10:54 application-server/
-rw-rw-r-- 1 miki miki 862 Mar 4 09:06 ecr.tf
-rw-rw-r-- 1 miki miki 3169 Mar 4 11:36 iam.tf
-rw-rw-r-- 1 miki miki 1023 Mar 4 14:11 jenkins_01.tf
drwxrwxr-x 2 miki miki 4096 Mar 7 15:33 jenkins-config/
-rw------- 1 miki miki 3401 Mar 3 09:41 jenkins.key
-r-------- 1 miki miki 753 Mar 3 09:41 jenkins.pem
drwxrwxr-x 3 miki miki 4096 Mar 8 10:53 jenkins-server/
Yesterday I ran both terraform init and terraform apply.
I found out that the contents of my application-server folder are not being applied.
I have a script file (it updates packages, installs Docker, logs in to ECR, and pulls the image):
sudo yum update -y
sudo amazon-linux-extras install docker
sudo systemctl start docker
sudo systemctl enable docker
/bin/sh -e -c 'echo $(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin ${repository_url}'
sudo docker pull ${repository_url}:release
sudo docker run -p 80:8000 ${repository_url}:release
Anyway, I checked the instance from the console.
I ran
terraform plan
and this is what it says:
No changes. Your infrastructure matches the configuration.
Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan:
terraform apply -refresh-only
My application.tf file
module "application-server" {
source = "./application-server"
ami-id = "ami-0742b4e673072066f" # AMI for an Amazon Linux instance for region: us-east-1
iam-instance-profile = aws_iam_instance_profile.simple-web-app.id
key-pair = aws_key_pair.simple-web-app-key.key_name
name = "Simple Web App"
device-index = 0
network-interface-id = aws_network_interface.simple-web-app.id
repository-url = aws_ecr_repository.simple-web-app.repository_url
}
And the application-server folder:
-rw-rw-r-- 1 miki miki 417 Mar 2 11:18 application-server_main.tf
-rw-rw-r-- 1 miki miki 164 Mar 2 11:21 application-server_output.tf
-rw-rw-r-- 1 miki miki 398 Mar 2 11:17 application-server_variables.tf
drwxr-xr-x 3 miki miki 4096 Mar 8 10:54 .terraform/
-rw-r--r-- 1 miki miki 1076 Mar 8 10:54 .terraform.lock.hcl
-rw-rw-r-- 1 miki miki 866 Mar 4 14:39 user_data.sh
And application-server_main.tf
resource "aws_instance" "default" {
ami = var.ami-id
iam_instance_profile = var.iam-instance-profile
instance_type = var.instance-type
key_name = var.key-pair
network_interface {
device_index = var.device-index
network_interface_id = var.network-interface-id
}
user_data = templatefile("${path.module}/user_data.sh", {repository_url = var.repository-url})
tags = {
Name = var.name
}
}
My script is not executed. Why? How do I properly structure Terraform across many folders?
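A couple of hedged checks, given that user_data is only executed by cloud-init on the instance's first boot and terraform plan reports no changes (so the existing instance is never replaced). The resource address below assumes the module and resource names shown above:
# on the instance: did cloud-init run the rendered user_data, and with what output?
sudo cat /var/log/cloud-init-output.log
# from the Terraform root: force the instance to be recreated so that the
# current user_data runs again on first boot (Terraform >= 0.15.2)
terraform apply -replace=module.application-server.aws_instance.default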

Looping over a list of instances to run yum update fails with exit status 126

I need to automate yum update across a list of instances. I tried something like aws ssm send-command --document-name "AWS-RunShellScript" --parameters 'commands=["sudo yum -y update"]' --targets "Key=instanceids,Values=<target instance id>" --timeout-seconds 600 in my local terminal (MFA enabled, logged in as an IAM user that can list all EC2 instances in all regions via aws ec2 describe-instances). I got output with "StatusDetails": "Pending", but the update never took place.
I checked the SSM agent log after starting an SSM session on the target instance:
2021-12-08 00:03:32 INFO [ssm-agent-worker] [MessagingDeliveryService] Sending reply {
    "additionalInfo": {
        "agent": {
            "lang": "en-US",
            "name": "amazon-ssm-agent",
            "os": "",
            "osver": "1",
            "ver": ""
        },
        "dateTime": "2021-12-08T00:03:32.061Z",
        "runId": "",
        "runtimeStatusCounts": {
            "Failed": 1
        }
    },
    "documentStatus": "InProgress",
    "documentTraceOutput": "",
    "runtimeStatus": {
        "aws:runShellScript": {
            "status": "Failed",
            "code": 126,
            "name": "aws:runShellScript",
            "output": "\n----------ERROR-------\nsh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126",
            "startDateTime": "2021-12-08T00:03:32.024Z",
            "endDateTime": "2021-12-08T00:03:32.061Z",
            "outputS3BucketName": "",
            "outputS3KeyPrefix": "",
            "stepName": "",
            "standardOutput": "",
            "standardError": "sh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126"
        }
    }
}
I checked the directory permissions:
ls -al /var/lib/amazon/
total 4
drwxr-xr-x 3 root root 17 Jul 26 23:53 .
drwxr-xr-x 32 root root 4096 Aug 6 18:49 ..
drwxr-xr-x 6 root root 80 Aug 7 00:03 ssm
and further one level down
ls -al /var/lib/amazon/ssm
total 0
drwxr-xr-x 6 root root 80 Aug 7 00:03 .
drwxr-xr-x 3 root root 17 Jul 26 23:53 ..
drw------- 2 root root 6 Aug 7 00:03 daemons
drw------- 8 root root 111 Dec 8 00:03 i-074cfdd5be7fe517b
drwxr-x--- 2 root root 39 Aug 7 00:03 ipc
drw------- 3 root root 23 Aug 7 00:03 localcommands
I also tried more basic commands like echo HelloWorld and got the same 126 error.
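Exit status 126 means the shell found _script.sh but was not allowed to execute it. One common cause is that the filesystem holding /var/lib/amazon/ssm is mounted with the noexec option, which can be checked directly on the instance (a hedged sketch):
# show the mount options of the filesystem containing the SSM work directory
findmnt -T /var/lib/amazon/ssm
# or look for noexec on the /var mount, if /var is a separate filesystem
mount | grep ' /var '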

AWS CodeDeploy script exited with code 127

This is my first time using AWS CodeDeploy and I'm having problems creating my appspec.yml file.
This is the error I'm getting:
2019-02-16 19:28:06 ERROR [codedeploy-agent(3596)]:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Error during perform:
InstanceAgent::Plugins::CodeDeployPlugin::ScriptError -
Script at specified location: deploy_scripts/install_project_dependencies
run as user root failed with exit code 127 -
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:183:in `execute_script'
This is my appspec.yml file
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/admin_panel_backend
hooks:
  BeforeInstall:
    - location: deploy_scripts/install_dependencies
      timeout: 300
      runas: root
    - location: deploy_scripts/start_server
      timeout: 300
      runas: root
  AfterInstall:
    - location: deploy_scripts/install_project_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: deploy_scripts/stop_server
      timeout: 300
      runas: root
And this is my project structure
drwxr-xr-x 7 501 20 224 Feb 6 20:57 api
-rw-r--r-- 1 501 20 501 Feb 16 16:29 appspec.yml
-rw-r--r-- 1 501 20 487 Feb 14 21:54 bitbucket-pipelines.yml
-rw-r--r-- 1 501 20 3716 Feb 14 20:43 codedeploy_deploy.py
drwxr-xr-x 4 501 20 128 Feb 6 20:57 config
-rw-r--r-- 1 501 20 1047 Feb 4 22:56 config.yml
drwxr-xr-x 6 501 20 192 Feb 16 16:25 deploy_scripts
drwxr-xr-x 264 501 20 8448 Feb 6 17:40 node_modules
-rw-r--r-- 1 501 20 101215 Feb 6 20:57 package-lock.json
-rw-r--r-- 1 501 20 580 Feb 6 20:57 package.json
-rw-r--r-- 1 501 20 506 Feb 4 08:50 server.js
And deploy_scripts folder
-rwxr--r-- 1 501 20 50 Feb 14 22:54 install_dependencies
-rwxr--r-- 1 501 20 61 Feb 16 16:25 install_project_dependencies
-rwxr--r-- 1 501 20 32 Feb 14 22:44 start_server
-rwxr--r-- 1 501 20 31 Feb 14 22:44 stop_server
This is my install_project_dependencies script
#!/bin/bash
cd /var/www/html/admin_panel_backend
npm install
All the other scripts work fine, except this one (install_project_dependencies).
Thank you all!
After reading a lot, I realized I was having the same problem as in "NPM issue deploying a nodejs instance using AWS codedeploy": I didn't have my PATH variable set.
So leaving my script like this worked fine!
#!/bin/bash
source /root/.bash_profile
cd /var/www/html/admin_panel_backend
npm install
Thanks!
I had the exact same problem because npm was installed for EC2-user and not for root. I solved it by adding this line to my install_dependencies script.
su - ec2-user -c 'cd /usr/local/nginx/html/node && npm install'
You can replace your npm install line with the line above to install as your user.
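Both fixes point at the same underlying issue: exit code 127 means the shell could not find npm at all, because CodeDeploy hooks run as root with a very small PATH. If it is unclear where npm lives on a given instance, a throwaway debug hook can dump the environment the agent actually uses (the log path here is just an example):
#!/bin/bash
# hypothetical debug hook: record which user runs the script and what PATH it sees
whoami >> /tmp/codedeploy_env.log
echo "PATH=$PATH" >> /tmp/codedeploy_env.log
which npm >> /tmp/codedeploy_env.log 2>&1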