Deploying Dockerized Next.js Application to AWS Elastic Beanstalk Throws 502 Bad Gateway
I can build and run my Next.js application locally with docker-compose and everything works fine, but when I deploy to Elastic Beanstalk the deployment succeeds and the environment URL still returns 502 Bad Gateway.
I've tried many different things, including several Dockerfile and docker-compose.yml templates I found online. All of them run locally, but none of them work on Elastic Beanstalk.
If anyone has any insight I would appreciate it. I know the ~100 lines of logs below are a lot, but I thought they might help someone find the problem faster.
The important files are below, and here is the full repo: https://github.com/mphbo/logan-thomas-production
Here is my Dockerfile:
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# COPY package.json yarn.lock ./
# RUN yarn install --frozen-lockfile
# If using npm with a `package-lock.json` comment out above and use below instead
# AWS seems to have issues with old package-lock files, so I have not included one; it is created during the build
COPY package.json .
RUN npm i
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app/package-lock.json ./package-lock.json
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
# RUN yarn build
# If using npm comment out above and use below instead
RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
And here is my docker-compose.yml:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
Also, here are some of the last 100 lines of logs from my most recent deployment. I had to remove most of them to stay within Stack Overflow's size limit.
----------------------------------------
/var/log/nginx/error.log
----------------------------------------
----------------------------------------
/var/log/eb-docker/containers/eb-current-app/eb-stdouterr.log
----------------------------------------
Attaching to current_web_1
web_1 | info - Loaded env from /app/.env
web_1 | Listening on port 3000
----------------------------------------
/var/log/docker-events.log
----------------------------------------
2022-03-17T03:20:54.704808186Z container destroy 9dfce132adf4411cb21998b4b680a8bd6cd460d85eb5df91886b244d3f4db330 (image=sha256:e3b21b7beaf5e68b05f3baca37d7b4fbcb114ba6d21f3bc890e45cd7384b8ba7, name=quizzical_leavitt)
2022-03-17T03:20:54.721955363Z container create ba23c18a0a8d82d2c1415b17f7ddf3c789ceac4dc4b145fa407d1897441a8822 (image=sha256:4d7510d46ddf6b2f523595a76270d9b99a00751e21b69560592fea60de9c5ec8, name=modest_morse)
2022-03-17T03:20:54.811489936Z container destroy ba23c18a0a8d82d2c1415b17f7ddf3c789ceac4dc4b145fa407d1897441a8822 (image=sha256:4d7510d46ddf6b2f523595a76270d9b99a00751e21b69560592fea60de9c5ec8, name=modest_morse)
2022-03-17T03:20:54.818358483Z image tag sha256:cdc27002fa166b55f0e0f17a258e8064ed8563b92d9d541d3483f3e4c5d7a525 (name=staging_web:latest)
2022-03-17T03:20:59.655258896Z container kill 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (com.docker.compose.config-hash=bb2863a542c107f09564fc88d62b338c563b5de443e0dc4e7a788ffcd8d342ad, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=current, com.docker.compose.project.config_files=docker-compose.yml, com.docker.compose.project.working_dir=/var/app/current, com.docker.compose.service=web, com.docker.compose.version=1.29.2, image=current_web, name=current_web_1, signal=15)
2022-03-17T03:20:59.675186211Z container die 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (com.docker.compose.config-hash=bb2863a542c107f09564fc88d62b338c563b5de443e0dc4e7a788ffcd8d342ad, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=current, com.docker.compose.project.config_files=docker-compose.yml, com.docker.compose.project.working_dir=/var/app/current, com.docker.compose.service=web, com.docker.compose.version=1.29.2, exitCode=0, image=current_web, name=current_web_1)
2022-03-17T03:20:59.755111992Z network disconnect c00f61c330f9ce4f2664214e5e99f8420b224e0cfe8c24ef90d30eef70653510 (container=5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54, name=current_default, type=bridge)
2022-03-17T03:20:59.764733369Z container stop 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (com.docker.compose.config-hash=bb2863a542c107f09564fc88d62b338c563b5de443e0dc4e7a788ffcd8d342ad, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=current, com.docker.compose.project.config_files=docker-compose.yml, com.docker.compose.project.working_dir=/var/app/current, com.docker.compose.service=web, com.docker.compose.version=1.29.2, image=current_web, name=current_web_1)
2022-03-17T03:20:59.788410889Z container destroy 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (com.docker.compose.config-hash=bb2863a542c107f09564fc88d62b338c563b5de443e0dc4e7a788ffcd8d342ad, com.docker.compose.container-number=1, com.docker.compose.oneoff=False, com.docker.compose.project=current, com.docker.compose.project.config_files=docker-compose.yml, com.docker.compose.project.working_dir=/var/app/current, com.docker.compose.service=web, com.docker.compose.version=1.29.2, image=current_web, name=current_web_1)
2022-03-17T03:20:59.836566662Z network destroy c00f61c330f9ce4f2664214e5e99f8420b224e0cfe8c24ef90d30eef70653510 (name=current_default,
----------------------------------------
/var/log/eb-docker-process.log
----------------------------------------
2022/03/16 00:56:29.964855 [INFO] Loading Manifest...
2022/03/16 00:56:29.964950 [INFO] no eb envtier info file found, skip loading env tier info.
2022/03/16 00:56:29.981151 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:080740215952:stack/awseb-e-xp7qsktpsk-stack/733824c0-a4c3-11ec-8cd3-0ebb713b3873 -r AWSEBAutoScalingGroup --region us-east-1
2022/03/16 00:56:30.757661 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:080740215952:stack/awseb-e-xp7qsktpsk-stack/733824c0-a4c3-11ec-8cd3-0ebb713b3873 -r AWSEBBeanstalkMetadata --region us-east-1
2022/03/16 00:56:31.224117 [INFO] Checking if docker is running...
2022/03/16 00:56:31.224134 [INFO] Fetch current app container id...
2022/03/16 00:56:31.224156 [INFO] Running command /bin/sh -c docker ps | grep 175915dc5ee7
2022/03/16 00:56:31.264646 [INFO] 175915dc5ee7 2517f92be235 "python /tmp/applica…" 8 seconds ago Up 7 seconds 8000/tcp focused_hypatia
2022/03/16 00:56:31.264686 [INFO] Running command /bin/sh -c docker wait 175915dc5ee7
----------------------------------------
/var/log/docker-compose-events.log
----------------------------------------
2022-03-16 04:55:49.722213 container create ccf18ec9402847bc1f635abbc26f3effae8d6d4aac609fd885fbcc72b024637a (image=current_web, name=current_web_1)
2022-03-16 04:55:50.459344 container start ccf18ec9402847bc1f635abbc26f3effae8d6d4aac609fd885fbcc72b024637a (image=current_web, name=current_web_1)
2022-03-16 05:28:02.520420 container create d40272c0074c0b6e1ab4859c402f73cd4d3d1ce75c8d7373eddf68f74726bfa4 (image=current_web, name=current_web_1)
2022-03-16 05:28:03.240080 container start d40272c0074c0b6e1ab4859c402f73cd4d3d1ce75c8d7373eddf68f74726bfa4 (image=current_web, name=current_web_1)
2022-03-16 05:43:21.269955 container create 6427b013979c0da82e636ca1022887fe7bde12977ed921f296d71f03d1b18e3d (image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-16 05:43:21.909654 container start 6427b013979c0da82e636ca1022887fe7bde12977ed921f296d71f03d1b18e3d (image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-16 05:43:21.943149 container die 6427b013979c0da82e636ca1022887fe7bde12977ed921f296d71f03d1b18e3d (exitCode=127, image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-16 05:43:23.650946 container destroy 6427b013979c0da82e636ca1022887fe7bde12977ed921f296d71f03d1b18e3d (image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-17 02:05:43.050073 container create b8fbe93e240c9023b7d4351cd1b65fb4ead54206cf28fb9e7cc58b89f2457619 (image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-17 02:05:43.703614 container start b8fbe93e240c9023b7d4351cd1b65fb4ead54206cf28fb9e7cc58b89f2457619 (image=current_web, maintainer=NGINX Docker Maintainers <docker-maint#nginx.com>, name=current_web_1)
2022-03-17 03:15:25.417627 container create 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (image=current_web, name=current_web_1)
2022-03-17 03:15:26.022821 container start 5ebaa6c044e5484217d3c86c526d16ec52060590eb7b0751dbc8ada2ffce7f54 (image=current_web, name=current_web_1)
2022-03-17 03:21:01.472006 container create 9a99c07fd6c6629ff92da05bc71692075a1ac0bf7a79d25e613eacf5cb9553c7 (image=current_web, name=current_web_1)
2022-03-17 03:21:02.081385 container start 9a99c07fd6c6629ff92da05bc71692075a1ac0bf7a79d25e613eacf5cb9553c7 (image=current_web, name=current_web_1)
2022-03-17 18:03:09.231168 container create 466ad0e8612ca55ba04307739873bff518d989d60ee02ba47fbfa868586cf7a3 (image=current_web, name=current_web_1)
2022-03-17 18:03:09.812741 container start 466ad0e8612ca55ba04307739873bff518d989d60ee02ba47fbfa868586cf7a3 (image=current_web, name=current_web_1)
----------------------------------------
/var/log/docker
----------------------------------------
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.393689697Z" level=info msg="Starting up"
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.409736448Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.409762244Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.409794007Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.409808870Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.431080302Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.431104184Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.431124156Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.431135443Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 16 00:54:30 ip-172-31-4-118 docker: time="2022-03-16T00:54:30.625879192Z" level=info msg="Loading containers: start."
Mar 16 00:54:31 ip-172-31-4-118 docker: time="2022-03-16T00:54:31.162175982Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 16 00:54:31 ip-172-31-4-118 docker: time="2022-03-16T00:54:31.291890951Z" level=info msg="Loading containers: done."
Mar 16 00:54:31 ip-172-31-4-118 docker: time="2022-03-16T00:54:31.578800386Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
Mar 16 00:54:31 ip-172-31-4-118 docker: time="2022-03-16T00:54:31.579749223Z" level=info msg="Daemon has completed initialization"
Mar 16 00:54:31 ip-172-31-4-118 docker: time="2022-03-16T00:54:31.605571872Z" level=info msg="API listen on /run/docker.sock"
Mar 16 00:56:21 ip-172-31-4-118 docker: time="2022-03-16T00:56:21.102008961Z" level=info msg="Layer sha256:764fa8ef33a160d2a5c2bd412220c846a8a586bf966e03d6ea04f01c5524c219 cleaned up"
Mar 16 00:56:21 ip-172-31-4-118 docker: time="2022-03-16T00:56:21.220730612Z" level=info msg="Layer sha256:764fa8ef33a160d2a5c2bd412220c846a8a586bf966e03d6ea04f01c5524c219 cleaned up"
Mar 16 02:01:57 ip-172-31-4-118 docker: time="2022-03-16T02:01:57.345559855Z" level=info msg="ignoring event" container=f37efcfa2861a94bd4589834f691cd1ac44db54513a7edc6291922310cb85e28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 16 02:02:38 ip-172-31-4-118 docker: time="2022-03-16T02:02:38.733711684Z" level=info msg="ignoring event" container=ad83de473f3447c452d68d482fe0828b521cd7029dd4952d061d5b706ffb644a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 16 02:02:45 ip-172-31-4-118 docker: time="2022-03-16T02:02:45.150779851Z" level=info msg="Layer sha256:aad4e7071a27ac48a27bf0e78a7e1c06cf76968554d091929cc735a0a005c50e cleaned up"
Mar 16 02:10:26 ip-172-31-4-118 docker: time="2022-03-16T02:10:26.395144645Z" level=info msg="ignoring event" container=d78d3b1e4bb25e89fc217ec17a39dbe32afbf5b092782c48387e20e74eb5bff4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 16 02:11:06 ip-172-31-4-118 docker: time="2022-03-16T02:11:06.695819171Z" level=info msg="ignoring event" container=e714a6196094ad30b9bacbc56a154cadd171573ecf2ccfc003935a6bb3e173bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 16 02:11:12 ip-172-31-4-118 docker: time="2022-03-16T02:11:12.381263816Z" level=info msg="Layer sha256:aad4e7071a27ac48a27bf0e78a7e1c06cf76968554d091929cc735a0a005c50e cleaned up"
Mar 16 04:46:26 ip-172-31-4-118 docker: time="2022-03-16T04:46:26.034993236Z" level=info msg="ignoring event"
----------------------------------------
/var/log/eb-engine.log
----------------------------------------
2022/03/17 18:03:08.417529 [INFO] Running command /bin/sh -c docker-compose up -d
2022/03/17 18:03:09.961410 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-compose-log.service
2022/03/17 18:03:09.970952 [INFO] Running command /bin/sh -c systemctl daemon-reload
2022/03/17 18:03:10.082890 [INFO] Running command /bin/sh -c systemctl reset-failed
2022/03/17 18:03:10.093958 [INFO] Running command /bin/sh -c systemctl enable eb-docker-compose-log.service
2022/03/17 18:03:10.214523 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker-compose-log.service
2022/03/17 18:03:10.224477 [INFO] Running command /bin/sh -c systemctl is-active eb-docker-compose-log.service
2022/03/17 18:03:10.234898 [INFO] Running command /bin/sh -c systemctl start eb-docker-compose-log.service
2022/03/17 18:03:10.327790 [INFO] Running command /bin/sh -c docker-compose ps -q
2022/03/17 18:03:11.528797 [INFO] 466ad0e8612ca55ba04307739873bff518d989d60ee02ba47fbfa868586cf7a3
2022/03/17 18:03:11.529329 [INFO] Executing instruction: Clean up Docker
2022/03/17 18:03:11.529347 [INFO] Running command /bin/sh -c docker ps -aq
2022/03/17 18:03:11.569500 [INFO] 466ad0e8612c
2022/03/17 18:03:11.569536 [INFO] Running command /bin/sh -c docker images | sed 1d
2022/03/17 18:03:11.616081 [INFO] current_web latest 619583f1d6ce 9 seconds ago 114MB
<none> <none> e571fa5f409b 10 seconds ago 360MB
<none> <none> 329affc64c2a About a minute ago 454MB
node 16-alpine 0e1547c0f4a4 5 weeks ago 110MB
node lts-alpine 0e1547c0f4a4 5 weeks ago 110MB
2022/03/17 18:03:11.616128 [INFO] save docker tag command: docker tag 0e1547c0f4a4 node:16-alpine
2022/03/17 18:03:11.616133 [INFO] save docker tag command: docker tag 0e1547c0f4a4 node:lts-alpine
2022/03/17 18:03:11.616144 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
2022/03/17 18:03:11.693698 [INFO] Running command /bin/sh -c docker rmi `docker images -aq`
2022/03/17 18:03:13.198494 [INFO] Deleted: sha256:e571fa5f409bbdedd55795e0c90a17e12635069c6c6eaec33c8e0b0e46f1b57f
Deleted: sha256:f4379e9f6e5670d36350d042c308bd559add51d613914cea76c1f7ba0a69281d
2022/03/17 18:03:13.198511 [INFO] restore docker image name with command: docker tag 619583f1d6ce current_web:latest
2022/03/17 18:03:13.198534 [INFO] Running command /bin/sh -c docker tag 619583f1d6ce current_web:latest
2022/03/17 18:03:13.258536 [INFO] restore docker image name with command: docker tag e571fa5f409b <none>:<none>
2022/03/17 18:03:13.258573 [INFO] Running command /bin/sh -c docker tag e571fa5f409b <none>:<none>
2022/03/17 18:03:13.260182 [INFO] restore docker image name with command: docker tag 329affc64c2a <none>:<none>
2022/03/17 18:03:13.260193 [INFO] Running command /bin/sh -c docker tag 329affc64c2a <none>:<none>
2022/03/17 18:03:13.261564 [INFO] restore docker image name with command: docker tag 0e1547c0f4a4 node:16-alpine
2022/03/17 18:03:13.261615 [INFO] Running command /bin/sh -c docker tag 0e1547c0f4a4 node:16-alpine
2022/03/17 18:03:13.307632 [INFO] restore docker image name with command: docker tag 0e1547c0f4a4 node:lts-alpine
2022/03/17 18:03:13.307671 [INFO] Running command /bin/sh -c docker tag 0e1547c0f4a4 node:lts-alpine
2022/03/17 18:03:13.357160 [INFO] Executing instruction: start X-Ray
2022/03/17 18:03:13.357177 [INFO] X-Ray is not enabled.
2022/03/17 18:03:13.357181 [INFO] Executing instruction: configureSqsd
2022/03/17 18:03:13.357201 [INFO] This is a web server environment instance, skip configure sqsd daemon ...
2022/03/17 18:03:13.357206 [INFO] Executing instruction: startSqsd
2022/03/17 18:03:13.357214 [INFO] This is a web server environment instance, skip start sqsd daemon ...
2022/03/17 18:03:13.357219 [INFO] Executing instruction: Track pids in healthd
2022/03/17 18:03:13.357223 [INFO] This is an enhanced health env...
2022/03/17 18:03:13.357237 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf aws-eb.target | cut -d= -f2
2022/03/17 18:03:13.363354 [INFO] eb-docker-compose-events.service docker.service eb-docker-compose-log.service eb-docker-events.service cfn-hup.service healthd.service
2022/03/17 18:03:13.363374 [INFO] Running command /bin/sh -c systemctl show -p ConsistsOf eb-app.target | cut -d= -f2
2022/03/17 18:03:13.369062 [INFO]
2022/03/17 18:03:13.369625 [INFO] Executing instruction: Configure Docker Container Logging
2022/03/17 18:03:13.372466 [INFO] Executing instruction: RunAppDeployPostDeployHooks
2022/03/17 18:03:13.372481 [INFO] Executing platform hooks in .platform/hooks/postdeploy/
2022/03/17 18:03:13.372495 [INFO] The dir .platform/hooks/postdeploy/ does not exist
2022/03/17 18:03:13.372501 [INFO] Executing cleanup logic
2022/03/17 18:03:13.373123 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[{"msg":"Instance deployment completed successfully.","timestamp":1647540193,"severity":"INFO"}]}]}
2022/03/17 18:03:13.373272 [INFO] Platform Engine finished execution on command: app-deploy
2022/03/17 18:03:43.354453 [INFO] Starting...
2022/03/17 18:03:43.354502 [INFO] Starting EBPlatform-PlatformEngine
2022/03/17 18:03:43.354542 [INFO] reading event message file
2022/03/17 18:03:43.356863 [INFO] no eb envtier info file found, skip loading env tier info.
2022/03/17 18:03:43.356974 [INFO] Engine received EB command cfn-hup-exec
2022/03/17 18:03:43.480863 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:080740215952:stack/awseb-e-xp7qsktpsk-stack/733824c0-a4c3-11ec-8cd3-0ebb713b3873 -r AWSEBAutoScalingGroup --region us-east-1
2022/03/17 18:03:43.811316 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-get-metadata -s arn:aws:cloudformation:us-east-1:080740215952:stack/awseb-e-xp7qsktpsk-stack/733824c0-a4c3-11ec-8cd3-0ebb713b3873 -r AWSEBBeanstalkMetadata --region us-east-1
2022/03/17 18:03:44.136679 [INFO] checking whether command tail-log is applicable to this instance...
2022/03/17 18:03:44.136692 [INFO] this command is applicable to the instance, thus instance should execute command
2022/03/17 18:03:44.136697 [INFO] Engine command: (tail-log)
2022/03/17 18:03:44.137649 [INFO] Executing instruction: GetTailLogs
2022/03/17 18:03:44.137659 [INFO] Tail Logs...
2022/03/17 18:03:44.139172 [INFO] Running command /bin/sh -c tail -n 100 /var/log/nginx/error.log
2022/03/17 18:03:44.142190 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-docker/containers/eb-current-app/eb-stdouterr.log
2022/03/17 18:03:44.143761 [INFO] Running command /bin/sh -c tail -n 100 /var/log/docker-events.log
2022/03/17 18:03:44.146376 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-docker-process.log
2022/03/17 18:03:44.148615 [INFO] Running command /bin/sh -c tail -n 100 /var/log/docker-compose-events.log
2022/03/17 18:03:44.150308 [INFO] Running command /bin/sh -c tail -n 100 /var/log/docker
2022/03/17 18:03:44.152670 [INFO] Running command /bin/sh -c tail -n 100 /var/log/eb-engine.log
----------------------------------------
/var/log/eb-hooks.log
----------------------------------------
----------------------------------------
/var/log/nginx/access.log
----------------------------------------
172.31.11.179 - - [16/Mar/2022:04:43:23 +0000] "GET / HTTP/1.1" 200 1438 "-" "ELB-HealthChecker/2.0" "-"
I finally found the solution. After hours of debugging, tweaking the Dockerfile and docker-compose.yml, and searching online for an answer, a coworker explained to me that port 80 needs to be exposed on the host, not port 3000 as I had in the docker-compose file: Elastic Beanstalk sends incoming traffic to port 80 on the instance, so the container has to be published there.
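In other words, the host side of the ports mapping needs to be 80, while the container keeps listening on 3000. A minimal corrected docker-compose.yml (a sketch of what ended up working, assuming the same single web service) looks like this:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:3000"   # publish the Next.js container (port 3000) on host port 80 for Elastic Beanstalk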
Related
AWS deploy Spring Boot project with error 502
I am deploying the web app to AWS Elastic Beanstalk. The code runs fine on localhost but fails with error 502. Here are my EB logs with warnings:
2022/11/12 18:05:10.175659 [INFO] Cleaned ebextensions subdirectories from app staging directory.
2022/11/12 18:05:10.175664 [INFO] Executing instruction: RunAppDeployPreDeployHooks
2022/11/12 18:05:10.175694 [INFO] Running command /bin/sh -c uname -m
2022/11/12 18:05:10.177374 [INFO] x86_64
2022/11/12 18:05:10.177397 [INFO] Executing platform hooks in .platform/hooks/predeploy/
2022/11/12 18:05:10.177417 [INFO] The dir .platform/hooks/predeploy/ does not exist
2022/11/12 18:05:10.177421 [INFO] Finished running scripts in /var/app/staging/.platform/hooks/predeploy
2022/11/12 18:05:10.177428 [INFO] Executing instruction: stop X-Ray
2022/11/12 18:05:10.177433 [INFO] stop X-Ray ...
2022/11/12 18:05:10.177442 [INFO] Running command /bin/sh -c systemctl show -p PartOf xray.service
2022/11/12 18:05:10.315088 [WARN] stopProcess Warning: process xray is not registered
2022/11/12 18:05:10.315115 [INFO] Running command /bin/sh -c systemctl stop xray.service
2022/11/12 18:05:10.323680 [INFO] Executing instruction: stop proxy
2022/11/12 18:05:10.323699 [INFO] Running command /bin/sh -c systemctl show -p PartOf httpd.service
2022/11/12 18:05:10.327933 [WARN] deregisterProcess Warning: process httpd is not registered, skipping...
2022/11/12 18:05:10.327948 [INFO] Running command /bin/sh -c systemctl show -p PartOf nginx.service
2022/11/12 18:05:10.334136 [WARN] deregisterProcess Warning: process nginx is not registered, skipping...
2022/11/12 18:05:10.334149 [INFO] Executing instruction: FlipApplication
2022/11/12 18:05:10.334154 [INFO] Fetching environment variables...
2022/11/12 18:05:10.334172 [INFO] Running command /bin/sh -c uname -m
2022/11/12 18:05:10.335672 [INFO] x86_64
2022/11/12 18:05:10.335768 [INFO] Purge old process...
2022/11/12 18:05:10.335786 [INFO] Removing /var/app/current/ if it exists
2022/11/12 18:05:10.335796 [INFO] Renaming /var/app/staging/ to /var/app/current/
2022/11/12 18:05:10.335811 [INFO] Register application processes...
2022/11/12 18:05:10.335818 [INFO] Registering the proc: web
Here is the log with the error:
2022/11/12 18:18:19 [error] 3971#3971: *1 connect() failed (111: Connection refused) while connecting to upstream,
Running Docker as a Service - Environment Variables
I am attempting to run my Docker container on my Linux server and configure it as a systemd unit so it manages itself. My /etc/systemd/system/system.service file contains:
[Unit]
Description=Your Container Name
After=docker.service
Requires=docker.service
StartLimitInterval=200
StartLimitBurst=10
[Service]
TimeoutStartSec=0
Restart=always
RestartSec=2
ExecStartPre=-/usr/bin/docker exec %n stop
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/bash -c 'docker login -u AWS -p $(aws ecr get-login-password --region eu-west-1) 0123456789.dkr.ecr.eu-west-1.amazonaws.com'
ExecStartPre=/usr/bin/docker pull 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest
ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def
[Install]
WantedBy=multi-user.target
This has proven problematic because when I restart the service and check its status it shows this error:
● docker.name.service - name
Loaded: loaded (/etc/systemd/system/docker.name.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Thu 2022-03-10 19:28:06 UTC; 6min ago
Process: 11029 ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def (code=exited, status=127)
Process: 11018 ExecStartPre=/usr/bin/docker pull 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest (code=exited, status=0/SUCCESS)
Process: 10984 ExecStartPre=/usr/bin/bash -c docker login -u AWS -p $(aws ecr get-login-password --region eu-west-1) 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest (code=exited, status=0/SUCCESS)
Process: 10973 ExecStartPre=/usr/bin/docker rm %n (code=exited, status=1/FAILURE)
Process: 10951 ExecStartPre=/usr/bin/docker exec %n stop (code=exited, status=1/FAILURE)
Main PID: 11029 (code=exited, status=127)
Process: 8174 ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def (code=exited, status=127)
Removing the docker -e options (-e env_var1=abc -e env_var2=def) and restarting the service then allows the service to start correctly. How do I get these environment variables passed to the Docker container from the service? It is critical that they are.
docker run considers everything after the image name to be a command that's passed to the container, overriding whatever is configured in the Dockerfile with CMD. To provide environment variables to the container itself, your -e options need to appear before the image name:
ExecStart=/usr/bin/docker run --rm --name %n -e env_var1=abc -e env_var2=def 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest
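For example, with a generic image name just to illustrate the ordering:
docker run --rm -e env_var1=abc -e env_var2=def your-image:latest    # options before the image are handled by docker
docker run --rm your-image:latest -e env_var1=abc -e env_var2=def    # everything after the image becomes the container command, which fails (hence the status=127 exits above)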
AWS Beanstalk "eb-docker" is not registered
I have a Docker-backed AWS Elastic Beanstalk application. I pull the image from a private AWS ECR. However, I get the "Instance deployment failed..." error. In the logs, I see:
2021/12/08 21:57:49.261047 [INFO] Executing instruction: RestartAppServer
2021/12/08 21:57:49.261050 [INFO] Restarting customer application...
2021/12/08 21:57:49.261065 [INFO] detected current app is not docker compose app
2021/12/08 21:57:49.261073 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2021/12/08 21:57:49.265842 [WARN] stopProcess Warning: process eb-docker is not registered
2021/12/08 21:57:49.265859 [INFO] Running command /bin/sh -c systemctl stop eb-docker.service
2021/12/08 21:57:49.270477 [INFO] Running command /bin/sh -c systemctl show -p PartOf eb-docker.service
2021/12/08 21:57:49.275369 [ERROR] An error occurred during execution of command [restart-app-server] - [RestartAppServer]. Stop running the command. Error: startProcess Failure: process "eb-docker" is not registered
What do I do about this? How do I register "eb-docker"? My Dockerrun.aws.json is quite simple:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "**.dkr.ecr.eu-central-1.amazonaws.com/*:*",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
nginx fails while the website somehow still works
Description: I have built my Django 3, gunicorn, nginx, Ubuntu 18.04, DigitalOcean project based on this guide. My only problem is that it does not show the CSS and all the other static files, like images. Throughout the guide nginx gave the correct answers as the guide says, and the HTML site is currently still online and running. To solve this I was in the process of using another guide to get my static files displayed on my site. I have done all the steps the creator recommended, but at the
What I have tried: after each of the following numbered steps I executed these commands to refresh:
python3 manage.py collectstatic
sudo nginx -t && sudo systemctl restart nginx
sudo systemctl restart gunicorn
1. RUN: sudo ln -s /etc/nginx/sites-available/ch-project /etc/nginx/sites-enabled
1. RESULT: ln: failed to create symbolic link '/etc/nginx/sites-enabled/ch-project': File exists
2. RUN: /etc/nginx/sites-enabled/my-project
2. RESULT: -bash: /etc/nginx/sites-enabled/my-project: Permission denied
3. RUN: systemctl status nginx.service
3. RESULT:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-03-26 13:27:08 UTC; 13s ago
Docs: man:nginx(8)
Process: 11111 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 11111 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Main PID: 11111 (code=exited, status=0/SUCCESS)
4. RUN: sudo nginx -t
4. RESULT:
nginx: [emerg] open() "/etc/nginx/sites-enabled/myproject" failed (2: No such file or directory) in /etc/nginx/nginx.conf:62
nginx: configuration file /etc/nginx/nginx.conf test failed
Nginx should otherwise be OK, because the HTML on the website loads and works perfectly. This Stack Overflow post says I should maybe do something with the security of nginx.conf, but they are talking about a WordPress site there, so I don't know how to apply that here. I had tried this other Stack Overflow post's answer previously; below the answer there is a follow-up post to configure things further.
RUN: sudo nginx -c /etc/nginx/sites-enabled/default -t
6. RESULT:
nginx: [emerg] "server" directive is not allowed here in /etc/nginx/sites-enabled/default:21
nginx: configuration file /etc/nginx/sites-enabled/default test failed
The static files still don't load, but nginx has been fixed with the following commands.
Deleting the accidentally created myproject file from /etc/nginx/sites-enabled/myproject (that file name comes from the official guide, but the actual project has a different name):
RUN:
cd /etc/nginx/sites-enabled
sudo rm myproject
RUN:
namei -l /run/gunicorn.sock
sudo systemctl restart gunicorn
sudo systemctl daemon-reload
sudo systemctl restart gunicorn.socket gunicorn.service
sudo nginx -t && sudo systemctl restart nginx
RESULT:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
RUN: systemctl status nginx.service
RESULT:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-03-26 16:32:09 UTC; 7min ago
Docs: man:nginx(8)
Process: 11111 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, sta
Process: 11111 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=ex
Main PID: 11111 (nginx)
Tasks: 2 (limit: 1152)
CGroup: /system.slice/nginx.service
Heroku App Crashed After pushing and releasing by gitlab-ci
I have deployed a Django application on a Heroku server. I pushed my project without migrations and database manually via the Heroku CLI. Then pushing via GitLab CI (which runs the migrations before the push) gives me an "app crashed" error on the Heroku side. I have encountered many "app crashed" errors before and could solve them by inspecting the logs, but this time I cannot understand what the issue is. I am sure that the files are completely pushed and the application is released.
Here is my Procfile:
web: gunicorn --pythonpath Code Code.wsgi --log-file -
My project is in the "Code" folder and my Django project name is Code.
Error part of the Heroku logs:
2019-06-08T08:02:50.549319+00:00 app[api]: Deployed web (c1f5c903bedb) by user arminbehnamnia@gmail.com
2019-06-08T08:02:50.549319+00:00 app[api]: Release v6 created by user arminbehnamnia@gmail.com
2019-06-08T08:02:51.268875+00:00 heroku[web.1]: Restarting
2019-06-08T08:02:51.277247+00:00 heroku[web.1]: State changed from up to starting
2019-06-08T08:02:52.494158+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2019-06-08T08:02:52.517991+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [4] [INFO] Handling signal: term
2019-06-08T08:02:52.519983+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [11] [INFO] Worker exiting (pid: 11)
2019-06-08T08:02:52.529529+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [10] [INFO] Worker exiting (pid: 10)
2019-06-08T08:02:52.823141+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [4] [INFO] Shutting down: Master
2019-06-08T08:02:52.958760+00:00 heroku[web.1]: Process exited with status 0
2019-06-08T08:03:09.777009+00:00 heroku[web.1]: Starting process with command `python3`
2019-06-08T08:03:11.647048+00:00 heroku[web.1]: State changed from starting to crashed
2019-06-08T08:03:11.654524+00:00 heroku[web.1]: State changed from crashed to starting
2019-06-08T08:03:11.625687+00:00 heroku[web.1]: Process exited with status 0
...
2019-06-08T08:17:16.898569+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=makan-system.herokuapp.com request_id=de4fbb8e-cd14-4263-bb6f-a8d0f956a519 fwd="69.55.54.121" dyno= connect= service= status=503 bytes= protocol=https
My .gitlab-ci.yml file:
stages:
  - test
  - build
  - push
tests:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
  script:
    - docker build . -t test_django
    - docker ps
    - docker run --name=testDjango test_django python /makanapp/Code/manage.py test registration
  when: on_success
build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
  script:
    - docker build -t registry.gitlab.com/armin_gm/asd_project_98_6 .
    - docker push registry.gitlab.com/armin_gm/asd_project_98_6
push_to_heroku:
  image: docker:latest
  stage: push
  services:
    - docker:dind
  script:
    # This is for gitlab
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
    #- docker pull registry.gitlab.com/armin_gm/asd_project_98_6:latest
    - docker build . -t push_to_django
    - docker ps
    # This is for heroku
    - docker login --username=arminbehnamnia@gmail.com --password=$AUTH_TOKEN registry.heroku.com
    - docker tag push_to_django:latest registry.heroku.com/makan-system/web:latest
    - docker push registry.heroku.com/makan-system/web:latest
    - docker run --rm -e HEROKU_API_KEY=$AUTH_TOKEN wingrunr21/alpine-heroku-cli container:release web --app makan-system
My Dockerfile:
# Official Python image
FROM python:latest
ENV PYTHONUNBUFFERED 1
# create root directory for project, set the working directory and move all files
RUN mkdir /makanapp
WORKDIR /makanapp
ADD . /makanapp/
# Web server will listen to this port
EXPOSE 8000
# Install all libraries we saved to requirements.txt file
#RUN apt-get -y update
#RUN apt-get -y install python3-dev python3-setuptools
RUN pip install -r requirements.txt
RUN python ./Code/manage.py makemigrations
RUN python ./Code/manage.py migrate --run-syncdb