I downloaded cookiecutter-django to start a new project the other day. I spun it up (along with Postgres, Redis, etc.) inside Docker containers. The configuration files should be fine because they were all generated by cookiecutter.
However, once I build and start the containers, I am unable to see the "hello world" splash page when I connect to localhost:8000. The problem seems to be between the applications and the containers, because I am able to connect to the containers themselves via telnet, docker exec -it commands, etc. The only thing I can think of is some sort of permissions issue, so I gave all the files/directories 777 permissions to test that, but it hasn't changed anything.
Logs:
% docker compose -f local.yml up
[+] Running 8/0
⠿ Container dashboard_local_docs Created 0.0s
⠿ Container dashboard_local_redis Created 0.0s
⠿ Container dashboard_local_mailhog Created 0.0s
⠿ Container dashboard_local_postgres Created 0.0s
⠿ Container dashboard_local_django Created 0.0s
⠿ Container dashboard_local_celeryworker Created 0.0s
⠿ Container dashboard_local_celerybeat Created 0.0s
⠿ Container dashboard_local_flower Created 0.0s
Attaching to dashboard_local_celerybeat, dashboard_local_celeryworker, dashboard_local_django, dashboard_local_docs, dashboard_local_flower, dashboard_local_mailhog, dashboard_local_postgres, dashboard_local_redis
dashboard_local_postgres |
dashboard_local_postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
dashboard_local_postgres |
dashboard_local_postgres | 2022-07-07 14:36:15.969 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.992 UTC [1] LOG: listening on IPv6 address "::", port 5432
dashboard_local_postgres | 2022-07-07 14:36:15.995 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
dashboard_local_postgres | 2022-07-07 14:36:15.999 UTC [26] LOG: database system was shut down at 2022-07-07 14:35:47 UTC
dashboard_local_postgres | 2022-07-07 14:36:16.004 UTC [1] LOG: database system is ready to accept connections
dashboard_local_mailhog | 2022/07/07 14:36:16 Using in-memory storage
dashboard_local_mailhog | 2022/07/07 14:36:16 [SMTP] Binding to address: 0.0.0.0:1025
dashboard_local_mailhog | 2022/07/07 14:36:16 Serving under http://0.0.0.0:8025/
dashboard_local_mailhog | [HTTP] Binding to address: 0.0.0.0:8025
dashboard_local_mailhog | Creating API v1 with WebPath:
dashboard_local_mailhog | Creating API v2 with WebPath:
dashboard_local_docs | sphinx-autobuild -b html --host 0.0.0.0 --port 9000 --watch /app -c . . ./_build/html
dashboard_local_docs | [sphinx-autobuild] > sphinx-build -b html -c . /docs /docs/_build/html
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Redis version=6.2.7, bits=64, commit=00000000, modified=0, pid=1, just started
dashboard_local_redis | 1:C 07 Jul 2022 14:36:17.057 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * monotonic clock: POSIX clock_gettime
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # A key '__redis__compare_helper' was added to Lua globals which is not on the globals allow list nor listed on the deny list.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 * Running mode=standalone, port=6379.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # Server initialized
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.057 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Loading RDB produced by version 6.2.7
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB age 30 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * RDB memory usage when created 0.78 Mb
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 # Done loading RDB, keys loaded: 3, keys expired: 0.
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * DB loaded from disk: 0.000 seconds
dashboard_local_redis | 1:M 07 Jul 2022 14:36:17.058 * Ready to accept connections
dashboard_local_docs | Running Sphinx v5.0.1
dashboard_local_celeryworker | PostgreSQL is available
dashboard_local_celerybeat | PostgreSQL is available
dashboard_local_docs | loading pickled environment... done
dashboard_local_docs | building [mo]: targets for 0 po files that are out of date
dashboard_local_docs | building [html]: targets for 0 source files that are out of date
dashboard_local_docs | updating environment: 0 added, 0 changed, 0 removed
dashboard_local_docs | looking for now-outdated files... none found
dashboard_local_docs | no targets are out of date.
dashboard_local_docs | build succeeded.
dashboard_local_docs |
dashboard_local_docs | The HTML pages are in _build/html.
dashboard_local_docs | [I 220707 14:36:18 server:335] Serving on http://0.0.0.0:9000
dashboard_local_celeryworker | [14:36:18] watching "/app" and reloading "celery.__main__.main" on changes...
dashboard_local_docs | [I 220707 14:36:18 handlers:62] Start watching changes
dashboard_local_docs | [I 220707 14:36:18 handlers:64] Start detecting changes
dashboard_local_django | PostgreSQL is available
dashboard_local_celerybeat | celery beat v5.2.7 (dawn-chorus) is starting.
dashboard_local_flower | PostgreSQL is available
dashboard_local_celerybeat | __ - ... __ - _
dashboard_local_celerybeat | LocalTime -> 2022-07-07 09:36:19
dashboard_local_celerybeat | Configuration ->
dashboard_local_celerybeat | . broker -> redis://redis:6379/0
dashboard_local_celerybeat | . loader -> celery.loaders.app.AppLoader
dashboard_local_celerybeat | . scheduler -> django_celery_beat.schedulers.DatabaseScheduler
dashboard_local_celerybeat |
dashboard_local_celerybeat | . logfile -> [stderr]#%INFO
dashboard_local_celerybeat | . maxinterval -> 5.00 seconds (5s)
dashboard_local_celerybeat | [2022-07-07 09:36:19,658: INFO/MainProcess] beat: Starting...
dashboard_local_celeryworker | /usr/local/lib/python3.9/site-packages/celery/platforms.py:840: SecurityWarning: You're running the worker with superuser privileges: this is
dashboard_local_celeryworker | absolutely not recommended!
dashboard_local_celeryworker |
dashboard_local_celeryworker | Please specify a different user using the --uid option.
dashboard_local_celeryworker |
dashboard_local_celeryworker | User information: uid=0 euid=0 gid=0 egid=0
dashboard_local_celeryworker |
dashboard_local_celeryworker | warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
dashboard_local_celeryworker |
dashboard_local_celeryworker | -------------- celery#e1ac9f770cbd v5.2.7 (dawn-chorus)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -- ******* ---- Linux-5.4.0-96-generic-x86_64-with-glibc2.31 2022-07-07 09:36:19
dashboard_local_celeryworker | - *** --- * ---
dashboard_local_celeryworker | - ** ---------- [config]
dashboard_local_celeryworker | - ** ---------- .> app: dashboard:0x7fd9dcaeb1c0
dashboard_local_celeryworker | - ** ---------- .> transport: redis://redis:6379/0
dashboard_local_celeryworker | - ** ---------- .> results: redis://redis:6379/0
dashboard_local_celeryworker | - *** --- * --- .> concurrency: 8 (prefork)
dashboard_local_celeryworker | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
dashboard_local_celeryworker | --- ***** -----
dashboard_local_celeryworker | -------------- [queues]
dashboard_local_celeryworker | .> celery exchange=celery(direct) key=celery
dashboard_local_celeryworker |
dashboard_local_celeryworker |
dashboard_local_celeryworker | [tasks]
dashboard_local_celeryworker | . dashboard.users.tasks.get_users_count
dashboard_local_celeryworker |
dashboard_local_django | Operations to perform:
dashboard_local_django | Apply all migrations: account, admin, auth, authtoken, contenttypes, django_celery_beat, sessions, sites, socialaccount, users
dashboard_local_django | Running migrations:
dashboard_local_django | No migrations to apply.
dashboard_local_flower | INFO 2022-07-07 09:36:20,646 command 7 140098896897856 Visit me at http://localhost:5555
dashboard_local_flower | INFO 2022-07-07 09:36:20,652 command 7 140098896897856 Broker: redis://redis:6379/0
dashboard_local_flower | INFO 2022-07-07 09:36:20,655 command 7 140098896897856 Registered tasks:
dashboard_local_flower | ['celery.accumulate',
dashboard_local_flower | 'celery.backend_cleanup',
dashboard_local_flower | 'celery.chain',
dashboard_local_flower | 'celery.chord',
dashboard_local_flower | 'celery.chord_unlock',
dashboard_local_flower | 'celery.chunks',
dashboard_local_flower | 'celery.group',
dashboard_local_flower | 'celery.map',
dashboard_local_flower | 'celery.starmap',
dashboard_local_flower | 'dashboard.users.tasks.get_users_count']
dashboard_local_flower | INFO 2022-07-07 09:36:20,663 mixins 7 140098817644288 Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,792: INFO/SpawnProcess-1] Connected to redis://redis:6379/0
dashboard_local_celeryworker | [2022-07-07 09:36:20,794: INFO/SpawnProcess-1] mingle: searching for neighbors
dashboard_local_flower | WARNING 2022-07-07 09:36:21,700 inspector 7 140098800826112 Inspect method active_queues failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,710 inspector 7 140098766993152 Inspect method reserved failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,712 inspector 7 140098784040704 Inspect method scheduled failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,714 inspector 7 140098758600448 Inspect method revoked failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098792433408 Inspect method registered failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098276423424 Inspect method conf failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,715 inspector 7 140098809218816 Inspect method stats failed
dashboard_local_flower | WARNING 2022-07-07 09:36:21,716 inspector 7 140098775648000 Inspect method active failed
dashboard_local_celeryworker | [2022-07-07 09:36:21,802: INFO/SpawnProcess-1] mingle: all alone
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: WARNING/SpawnProcess-1] /usr/local/lib/python3.9/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
dashboard_local_celeryworker | leak, never use this setting in production environments!
dashboard_local_celeryworker | warnings.warn('''Using settings.DEBUG leads to a memory
dashboard_local_celeryworker |
dashboard_local_celeryworker | [2022-07-07 09:36:21,811: INFO/SpawnProcess-1] celery#e1ac9f770cbd ready.
dashboard_local_django | Watching for file changes with StatReloader
dashboard_local_django | INFO 2022-07-07 09:36:22,862 autoreload 9 140631340287808 Watching for file changes with StatReloader
dashboard_local_django | Performing system checks...
dashboard_local_django |
dashboard_local_django | System check identified no issues (0 silenced).
dashboard_local_django | July 07, 2022 - 09:36:23
dashboard_local_django | Django version 3.2.14, using settings 'config.settings.local'
dashboard_local_django | Starting development server at http://0.0.0.0:8000/
dashboard_local_django | Quit the server with CONTROL-C.
dashboard_local_celeryworker | [2022-07-07 09:36:25,661: INFO/SpawnProcess-1] Events of group {task} enabled by remote.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69591187e44d dashboard_local_flower "/entrypoint /start-…" 11 minutes ago Up 2 minutes 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp dashboard_local_flower
15914b6b91e0 dashboard_local_celerybeat "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celerybeat
e1ac9f770cbd dashboard_local_celeryworker "/entrypoint /start-…" 11 minutes ago Up 2 minutes dashboard_local_celeryworker
6bbfc900c346 dashboard_local_django "/entrypoint /start" 11 minutes ago Up 2 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp dashboard_local_django
b8bec3422bae redis:6 "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 6379/tcp dashboard_local_redis
2b7c3d9eabe3 dashboard_production_postgres "docker-entrypoint.s…" 11 minutes ago Up 2 minutes 5432/tcp dashboard_local_postgres
0249aaaa040c mailhog/mailhog:v1.0.0 "MailHog" 11 minutes ago Up 2 minutes 1025/tcp, 0.0.0.0:8025->8025/tcp, :::8025->8025/tcp dashboard_local_mailhog
d5dd94cbb070 dashboard_local_docs "/start-docs" 11 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp dashboard_local_docs
The ports are listening:
telnet 127.0.0.1 8000
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^]
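Telnet only proves the TCP handshake succeeds; sending a real HTTP request shows whether Django actually answers, hangs, or resets. A diagnostic sketch (assuming curl is installed on the host and in the image):

```shell
# TCP connect succeeded above, but that doesn't prove HTTP works; ask for
# the page and show the full exchange, including any empty reply or reset:
curl -v http://127.0.0.1:8000/
# Repeat from inside the container to rule out the docker-proxy mapping:
docker exec dashboard_local_django curl -v http://localhost:8000/
```

If the in-container request returns a page but the host request doesn't, the problem is the port mapping; if both fail, it's the Django process itself.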
% sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 127.0.0.1:43979 0.0.0.0:* LISTEN 31867/BlastServer
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 963/rpcbind
tcp 0 0 0.0.0.0:46641 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:51857 0.0.0.0:* LISTEN 4149/rpc.statd
tcp 0 0 0.0.0.0:5555 0.0.0.0:* LISTEN 14326/docker-proxy
tcp 0 0 0.0.0.0:6100 0.0.0.0:* LISTEN 31908/Xorg
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 973/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 29295/sshd
tcp 0 0 0.0.0.0:8025 0.0.0.0:* LISTEN 13769/docker-proxy
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 30117/master
tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 882/sshd: noakes#no
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 29532/smbd
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 14272/docker-proxy
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN 13850/docker-proxy
tcp6 0 0 :::139 :::* LISTEN 29532/smbd
tcp6 0 0 :::40717 :::* LISTEN -
tcp6 0 0 :::41423 :::* LISTEN 4149/rpc.statd
tcp6 0 0 :::111 :::* LISTEN 963/rpcbind
tcp6 0 0 127.0.0.1:41265 :::* LISTEN 30056/java
tcp6 0 0 :::5555 :::* LISTEN 14333/docker-proxy
tcp6 0 0 :::6100 :::* LISTEN 31908/Xorg
tcp6 0 0 :::22 :::* LISTEN 29295/sshd
tcp6 0 0 :::13782 :::* LISTEN 2201/xinetd
tcp6 0 0 :::13783 :::* LISTEN 2201/xinetd
tcp6 0 0 :::8025 :::* LISTEN 13779/docker-proxy
tcp6 0 0 ::1:25 :::* LISTEN 30117/master
tcp6 0 0 ::1:6010 :::* LISTEN 882/sshd: noakes#no
tcp6 0 0 :::13722 :::* LISTEN 2201/xinetd
tcp6 0 0 :::6556 :::* LISTEN 2201/xinetd
tcp6 0 0 :::445 :::* LISTEN 29532/smbd
tcp6 0 0 :::8000 :::* LISTEN 14278/docker-proxy
tcp6 0 0 :::1057 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7778 :::* LISTEN 2201/xinetd
tcp6 0 0 :::7779 :::* LISTEN 2201/xinetd
tcp6 0 0 :::9000 :::* LISTEN 13860/docker-proxy
local.yml
version: '3'
volumes:
dashboard_local_postgres_data: {}
dashboard_local_postgres_data_backups: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
#user: "root:root"
image: dashboard_local_django
container_name: dashboard_local_django
platform: linux/x86_64
depends_on:
- postgres
- redis
- mailhog
volumes:
- .:/app:z
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: dashboard_production_postgres
container_name: dashboard_local_postgres
volumes:
- dashboard_local_postgres_data:/var/lib/postgresql/data:Z
- dashboard_local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
docs:
image: dashboard_local_docs
container_name: dashboard_local_docs
platform: linux/x86_64
build:
context: .
dockerfile: ./compose/local/docs/Dockerfile
env_file:
- ./.envs/.local/.django
volumes:
- ./docs:/docs:z
- ./config:/app/config:z
- ./dashboard:/app/dashboard:z
ports:
- "9000:9000"
command: /start-docs
mailhog:
image: mailhog/mailhog:v1.0.0
container_name: dashboard_local_mailhog
ports:
- "8025:8025"
redis:
image: redis:6
container_name: dashboard_local_redis
celeryworker:
<<: *django
image: dashboard_local_celeryworker
container_name: dashboard_local_celeryworker
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celeryworker
celerybeat:
<<: *django
image: dashboard_local_celerybeat
container_name: dashboard_local_celerybeat
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celerybeat
flower:
<<: *django
image: dashboard_local_flower
container_name: dashboard_local_flower
ports:
- "5555:5555"
command: /start-flower
I'm doing something I thought was simple:
# Fetch config
- name: 'gcr.io/cloud-builders/gsutil'
volumes:
- name: 'vol1'
path: '/persistent_volume'
args: [ 'cp', 'gs://servicesconfig/devs/react-app/env.server', '/persistent_volume/env.server' ]
# Install dependencies
- name: node:$_NODE_VERSION
entrypoint: 'yarn'
args: [ 'install' ]
# Build project
- name: node:$_NODE_VERSION
volumes:
- name: 'vol1'
path: '/persistent_volume'
entrypoint: 'bash'
args:
- -c
- |
cp /persistent_volume/env.server .env.production &&
cat .env.production &&
ls -la &&
yarn run build:prod
while in my package.json:
"build:prod": "sh -ac '. .env.production; react-scripts build'",
All of this works well locally, but this is the output in GCP Cloud Build:
Already have image: node:14
REACT_APP_ENV="sandbox"
REACT_APP_CAPTCHA_ENABLED=true
REACT_APP_CAPTCHA_PUBLIC_KEY="akey"
REACT_APP_DEFAULT_APP="home-btn"
REACT_APP_API_URL="akey2"
REACT_APP_STRIPE_KEY="akey3"
REACT_APP_COGNITO_POOL_ID="akey4"
REACT_APP_COGNITO_APP_ID="akey5"
total 2100
drwxr-xr-x 6 root root 4096 Feb 25 12:15 .
drwxr-xr-x 1 root root 4096 Feb 25 12:15 ..
-rw-r--r-- 1 root root 382 Feb 25 12:15 .env.production <- it's here!
drwxr-xr-x 8 root root 4096 Feb 25 12:13 .git
-rw-r--r-- 1 root root 230 Feb 25 12:13 .gitignore
-rw-r--r-- 1 root root 371 Feb 25 12:13 Dockerfile
-rw-r--r-- 1 root root 3787 Feb 25 12:13 README.md
-rw-r--r-- 1 root root 1019 Feb 25 12:13 cloudbuild.yaml
drwxr-xr-x 1089 root root 36864 Feb 25 12:14 node_modules
-rw-r--r-- 1 root root 1580131 Feb 25 12:13 package-lock.json
-rw-r--r-- 1 root root 1896 Feb 25 12:13 package.json
drwxr-xr-x 2 root root 4096 Feb 25 12:13 public
drwxr-xr-x 9 root root 4096 Feb 25 12:13 src
-rw-r--r-- 1 root root 535 Feb 25 12:13 tsconfig.json
-rw-r--r-- 1 root root 478836 Feb 25 12:13 yarn.lock
/workspace
yarn run v1.22.17
$ sh -ac '. .env.production; react-scripts build'
sh: 1: .: .env.production: not found
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm unsure whether I'm doing something completely wrong or whether it's a bug on GCP's side.
Alright, I'm not expert enough in bash and sh documentation to understand what the issue was, but I ended up solving it.
One thing to pay attention to:
everything is actually shared between raw steps in Cloud Build; there is no need for a volume or any specific path.
So on the cloudbuild side I changed the yaml to reflect:
- name: node:$_NODE_VERSION
entrypoint: 'bash'
args:
- -c
- |
mv env.server .env.production &&
yarn run build:prod
And in package.json I'm now using an extra lib, env-cmd,
which changes the build command to:
"build:prod": "env-cmd -f .env.production react-scripts build",
This works like a charm.
I'm a bit annoyed I had to add another lib for this, but, well.
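A likely explanation for the original `not found` error (my assumption, not verified against the Cloud Build node image): in a POSIX sh such as dash, the `.` builtin searches $PATH when its operand contains no slash, so `. .env.production` never looks in the working directory. A self-contained demo of the fix, which would avoid the extra lib:

```shell
# dash's `.` builtin searches $PATH when the operand has no slash, so
# `. .env.production` can fail even though the file sits in the current
# directory. An explicit ./ prefix forces a current-directory lookup:
cd "$(mktemp -d)"
printf 'REACT_APP_ENV=sandbox\n' > .env.production
sh -ac '. ./.env.production; echo "REACT_APP_ENV=$REACT_APP_ENV"'   # prints REACT_APP_ENV=sandbox
```

So `"build:prod": "sh -ac '. ./.env.production; react-scripts build'"` may have been enough; it works locally when /bin/sh is bash because bash falls back to the current directory.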
This is what my folder structure looks like:
total 248
drwxrwxr-x 6 miki miki 4096 Mar 7 16:01 ./
drwxrwxr-x 5 miki miki 4096 Mar 3 14:53 ../
-rw-rw-r-- 1 miki miki 460 Mar 4 11:59 application_01.tf
drwxrwxr-x 3 miki miki 4096 Mar 8 10:54 application-server/
-rw-rw-r-- 1 miki miki 862 Mar 4 09:06 ecr.tf
-rw-rw-r-- 1 miki miki 3169 Mar 4 11:36 iam.tf
-rw-rw-r-- 1 miki miki 1023 Mar 4 14:11 jenkins_01.tf
drwxrwxr-x 2 miki miki 4096 Mar 7 15:33 jenkins-config/
-rw------- 1 miki miki 3401 Mar 3 09:41 jenkins.key
-r-------- 1 miki miki 753 Mar 3 09:41 jenkins.pem
drwxrwxr-x 3 miki miki 4096 Mar 8 10:53 jenkins-server/
Yesterday I ran both terraform init and terraform apply.
I found out that my application-server folder's content is not being applied.
I have a script file (update, install Docker, log in to ECR, and pull the image):
#!/bin/bash
# cloud-init executes user_data as a script only when it starts with a shebang
sudo yum update -y
sudo amazon-linux-extras install docker
sudo systemctl start docker
sudo systemctl enable docker
/bin/sh -e -c 'echo $(aws ecr get-login-password --region us-east-1) | docker login -u AWS --password-stdin ${repository_url}'
sudo docker pull ${repository_url}:release
# -d detaches so the container doesn't block the rest of first-boot setup
sudo docker run -d -p 80:8000 ${repository_url}:release
Anyway, I checked the instance from the console.
I ran
terraform plan
and this is what it says:
No changes. Your infrastructure matches the configuration.
Your configuration already matches the changes detected above. If you'd like to update the Terraform state to match, create and apply a refresh-only plan:
terraform apply -refresh-only
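Worth noting: "No changes" only means the Terraform state matches the configuration; it says nothing about whether the user_data script actually ran on the instance. On the instance itself, cloud-init's logs record whether the script executed and any errors it hit (paths assumed for Amazon Linux):

```shell
# First-boot script output, including any command failures:
sudo cat /var/log/cloud-init-output.log
# How cloud-init interpreted the user_data payload:
sudo grep -i 'user' /var/log/cloud-init.log
```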
My application.tf file
module "application-server" {
source = "./application-server"
ami-id = "ami-0742b4e673072066f" # AMI for an Amazon Linux instance for region: us-east-1
iam-instance-profile = aws_iam_instance_profile.simple-web-app.id
key-pair = aws_key_pair.simple-web-app-key.key_name
name = "Simple Web App"
device-index = 0
network-interface-id = aws_network_interface.simple-web-app.id
repository-url = aws_ecr_repository.simple-web-app.repository_url
}
And the application-server folder:
-rw-rw-r-- 1 miki miki 417 Mar 2 11:18 application-server_main.tf
-rw-rw-r-- 1 miki miki 164 Mar 2 11:21 application-server_output.tf
-rw-rw-r-- 1 miki miki 398 Mar 2 11:17 application-server_variables.tf
drwxr-xr-x 3 miki miki 4096 Mar 8 10:54 .terraform/
-rw-r--r-- 1 miki miki 1076 Mar 8 10:54 .terraform.lock.hcl
-rw-rw-r-- 1 miki miki 866 Mar 4 14:39 user_data.sh
And application-server_main.tf
resource "aws_instance" "default" {
ami = var.ami-id
iam_instance_profile = var.iam-instance-profile
instance_type = var.instance-type
key_name = var.key-pair
network_interface {
device_index = var.device-index
network_interface_id = var.network-interface-id
}
user_data = templatefile("${path.module}/user_data.sh", {repository_url = var.repository-url})
tags = {
Name = var.name
}
}
My script is not executed. Why? How do I structure Terraform properly across many folders?
I need to automate yum update across a list of instances. I tried something like
aws ssm send-command --document-name "AWS-RunShellScript" --parameters 'commands=["sudo yum -y update"]' --targets "Key=instanceids,Values=<target instance id>" --timeout-seconds 600
in my local terminal (MFA enabled, logged in as an IAM user, able to list all EC2 instances in all regions via aws ec2 describe-instances). I got output with "StatusDetails": "Pending", and the update never took place.
I checked the SSM agent log after starting an SSM session on the target instance:
2021-12-08 00:03:32 INFO [ssm-agent-worker] [MessagingDeliveryService] Sending reply {
"additionalInfo": {
"agent": {
"lang": "en-US",
"name": "amazon-ssm-agent",
"os": "",
"osver": "1",
"ver": ""
},
"dateTime": "2021-12-08T00:03:32.061Z",
"runId": "",
"runtimeStatusCounts": {
"Failed": 1
}
},
"documentStatus": "InProgress",
"documentTraceOutput": "",
"runtimeStatus": {
"aws:runShellScript": {
"status": "Failed",
"code": 126,
"name": "aws:runShellScript",
"output": "\n----------ERROR-------\nsh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126",
"startDateTime": "2021-12-08T00:03:32.024Z",
"endDateTime": "2021-12-08T00:03:32.061Z",
"outputS3BucketName": "",
"outputS3KeyPrefix": "",
"stepName": "",
"standardOutput": "",
"standardError": "sh: /var/lib/amazon/ssm/i-074cfdd5be7fe517b/document/orchestration/2d917bcc-fc6e-4e4b-b500-cc2e2b7bd4d6/awsrunShellScript/0.awsrunShellScript/_script.sh: Permission denied\nfailed to run commands: exit status 126"
}
}
}
I checked the directory permissions:
ls -al /var/lib/amazon/
total 4
drwxr-xr-x 3 root root 17 Jul 26 23:53 .
drwxr-xr-x 32 root root 4096 Aug 6 18:49 ..
drwxr-xr-x 6 root root 80 Aug 7 00:03 ssm
and one level further down:
ls -al /var/lib/amazon/ssm
total 0
drwxr-xr-x 6 root root 80 Aug 7 00:03 .
drwxr-xr-x 3 root root 17 Jul 26 23:53 ..
drw------- 2 root root 6 Aug 7 00:03 daemons
drw------- 8 root root 111 Dec 8 00:03 i-074cfdd5be7fe517b
drwxr-x--- 2 root root 39 Aug 7 00:03 ipc
drw------- 3 root root 23 Aug 7 00:03 localcommands
I also tried more basic commands like echo HelloWorld and got the same 126 error.
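For reference on what exit status 126 means: the script was found but could not be executed (as opposed to 127, command not found). Given the "Permission denied" message, the usual suspects are a missing execute bit on the generated script or a noexec mount option on the filesystem holding /var/lib/amazon/ssm. A sketch reproducing the status and checking the mount:

```shell
# A readable-but-not-executable file reproduces exit status 126:
t=$(mktemp)
printf '#!/bin/sh\necho hi\n' > "$t"
chmod 600 "$t"                          # read/write only, no execute bit
"$t" 2>/dev/null || echo "status=$?"    # status=126
# A noexec mount option on the filesystem under /var would cause the same
# failure even with the execute bit set:
findmnt -T /var -o TARGET,OPTIONS
```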
This is my first time using AWS CodeDeploy and I'm having problems creating my appspec.yml file.
This is the error I'm getting:
2019-02-16 19:28:06 ERROR [codedeploy-agent(3596)]:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Error during perform:
InstanceAgent::Plugins::CodeDeployPlugin::ScriptError -
Script at specified location: deploy_scripts/install_project_dependencies
run as user root failed with exit code 127 -
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:183:in `execute_script'
This is my appspec.yml file
version: 0.0
os: linux
files:
- source: /
destination: /var/www/html/admin_panel_backend
hooks:
BeforeInstall:
- location: deploy_scripts/install_dependencies
timeout: 300
runas: root
- location: deploy_scripts/start_server
timeout: 300
runas: root
AfterInstall:
- location: deploy_scripts/install_project_dependencies
timeout: 300
runas: root
ApplicationStop:
- location: deploy_scripts/stop_server
timeout: 300
runas: root
And this is my project structure
drwxr-xr-x 7 501 20 224 Feb 6 20:57 api
-rw-r--r-- 1 501 20 501 Feb 16 16:29 appspec.yml
-rw-r--r-- 1 501 20 487 Feb 14 21:54 bitbucket-pipelines.yml
-rw-r--r-- 1 501 20 3716 Feb 14 20:43 codedeploy_deploy.py
drwxr-xr-x 4 501 20 128 Feb 6 20:57 config
-rw-r--r-- 1 501 20 1047 Feb 4 22:56 config.yml
drwxr-xr-x 6 501 20 192 Feb 16 16:25 deploy_scripts
drwxr-xr-x 264 501 20 8448 Feb 6 17:40 node_modules
-rw-r--r-- 1 501 20 101215 Feb 6 20:57 package-lock.json
-rw-r--r-- 1 501 20 580 Feb 6 20:57 package.json
-rw-r--r-- 1 501 20 506 Feb 4 08:50 server.js
And deploy_scripts folder
-rwxr--r-- 1 501 20 50 Feb 14 22:54 install_dependencies
-rwxr--r-- 1 501 20 61 Feb 16 16:25 install_project_dependencies
-rwxr--r-- 1 501 20 32 Feb 14 22:44 start_server
-rwxr--r-- 1 501 20 31 Feb 14 22:44 stop_server
This is my install_project_dependencies script
#!/bin/bash
cd /var/www/html/admin_panel_backend
npm install
All the other scripts work OK, except this one (install_project_dependencies).
Thank you all!
After reading a lot, I realized I was having the same problem as NPM issue deploying a nodejs instance using AWS codedeploy: I didn't have my PATH variable set.
So leaving my script like this worked fine!
#!/bin/bash
source /root/.bash_profile
cd /var/www/html/admin_panel_backend
npm install
Thanks!
I had the exact same problem because npm was installed for ec2-user and not for root. I solved it by adding this line to my install_dependencies script:
su - ec2-user -c 'cd /usr/local/nginx/html/node && npm install'
You can replace your npm install line with the line above to install as your user.
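Both answers above work for the same underlying reason: exit code 127 is the shell's "command not found" status, and the CodeDeploy agent runs hooks with a minimal environment whose PATH doesn't include npm's install location. A quick demonstration of the status code (the profile-sourcing and run-as-user details above are the actual fixes):

```shell
# 127 = command not found; this is what the agent hit when it ran npm:
sh -c 'definitely_not_a_real_command' 2>/dev/null || echo "status=$?"   # status=127
# Inside a hook, `command -v npm` shows whether npm is on PATH at all:
command -v npm || echo "npm not on PATH"
```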