Deploying NestJS application on Elastic Beanstalk - amazon-web-services

I am trying to deploy my NestJS application to AWS Elastic Beanstalk, but have not had any success. Can someone please explain, step by step, how I can achieve that?
Full explanation:
I have a NestJS app with TypeORM, but I haven't configured it to work with RDS, so let's leave that aside for now (maybe there is a connection, I don't know).
First of all, I made a CodePipeline so that when I push a new version to my GitHub repo, it automatically deploys the whole repo to my EB instance, which runs on Node 12.x.
Now I want the instance, on every git push, to install the dependencies, build the Nest app, and start the server from /dist/main.js.
I have added a Procfile with:
web: npm install && npm run build && npm run start:prod
I have also added a PORT environment variable on EB; main.ts reads it and falls back to 8080 when it is not set.
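For reference, a minimal sketch of what that bootstrap code in main.ts might look like (the AppModule name and this exact shape are assumptions, not the project's actual code):
// main.ts - minimal sketch; AppModule is assumed to be the root module
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Elastic Beanstalk injects PORT; fall back to 8080 when it is not set
  await app.listen(process.env.PORT || 8080);
}
bootstrap();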
And my package.json scripts are those of a newly created Nest app:
"scripts": {
"prebuild": "rimraf dist",
"build": "nest build",
"format": "prettier --write \"src/**/*.ts\" \"test/**/*.ts\"",
"start": "nest start",
"start:dev": "nest start --watch",
"start:debug": "nest start --debug --watch",
"start:prod": "node dist/main",
"lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix",
"test": "jest",
"test:watch": "jest --watch",
"test:cov": "jest --coverage",
"test:debug": "node --inspect-brk -r tsconfig-paths/register -r ts-node/register node_modules/.bin/jest --runInBand",
"test:e2e": "jest --config ./test/jest-e2e.json"
}
When deploying I see in the logs that it did something like this:
Jan 23 20:01:14 web: > app@0.0.1 start /var/app/current
Jan 23 20:01:14 ip-172-31-17-171 web: > sudo npm i -g @nestjs/cli && node dist/main
Jan 23 20:01:14 web: We trust you have received the usual lecture from the local System
Jan 23 20:01:14 web: Administrator. It usually boils down to these three things:
Jan 23 20:01:14 web: #1) Respect the privacy of others.
Jan 23 20:01:14 web: #2) Think before you type.
Jan 23 20:01:14 web: #3) With great power comes great responsibility.
Jan 23 20:01:14 web: sudo: no tty present and no askpass program specified
Jan 23 20:01:14 web: npm ERR! code ELIFECYCLE
Jan 23 20:01:14 web: npm ERR! errno 1
Jan 23 20:01:14 web: npm ERR! app@0.0.1 start: `sudo npm i -g @nestjs/cli && node dist/main`
Jan 23 20:01:14 web: npm ERR! Exit status 1
Jan 23 20:01:14 web: npm ERR!
Jan 23 20:01:14 web: npm ERR! Failed at the app@0.0.1 start script.
So I am getting a 502 when visiting the environment URL.
Maybe it is a permission issue with npm? Or do I need to somehow deploy only the /dist folder?
What do you think the problem is?
It is my first time trying to deploy a backend server to EB :)

I got stuck deploying a NestJS app to AWS Elastic Beanstalk for a while, so I decided to write a guide here. It walks step by step through deploying a simple app.
The reason the deployment usually FAILS is that the server does not know the Nest dependencies (it does not install dev dependencies), and the TypeScript code needs to be compiled before being uploaded to the server.
In my case, I compress the files into a zip by hand (you can use the eb command line, but try this first for simplicity).
First, prepare the zip file. This is really IMPORTANT for avoiding failures while deploying.
You need to build the dist folder. To do this, remove the existing dist folder and run nest build (this runs tsc to compile the TypeScript for us).
Now that we have the dist folder, the next thing to do is copy package.json into it (this is what tells the server which dependencies to install, since we do not bring node_modules along).
In package.json, yarn start runs nest start, but AWS does not know the nest command, so we need to point it at node main.js instead (this file is in the dist folder; make sure the path to main.js is correct). To do this, create a Procfile with the content web: node src/main.js. Remember to copy it into the dist folder as well.
Compress the contents of the dist folder correctly (zip the files inside dist, not the folder itself), as sketched below.
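Putting the preparation together, a minimal sketch of the steps (assuming the compiled entry point ends up at dist/src/main.js, as in this layout; adjust the paths to your project):
rm -rf dist                                   # start from a clean build
npx nest build                                # runs tsc and compiles the TypeScript into dist/
cp package.json dist/                         # tells the server which dependencies to install
echo "web: node src/main.js" > dist/Procfile  # point EB at the compiled main.js
cd dist && zip -r ../deploy.zip . && cd ..    # zip the contents of dist, not the folder itself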
Second, from the AWS console, create the application or environment, then upload the zip file and wait for a successful status. If it fails, check the log file to find out why; maybe it does not recognize the nest command, etc.
Hoping this guide helps you.
Cheers!

I was stuck with the following Elastic Beanstalk eb-engine.log errors:
"Instance deployment: You didn't include a 'package.json' file in your
source bundle. The deployment didn't install a specific Node.js
version."
"Instance deployment failed to generate a 'Procfile' for Node.js.
Provide one of these files: 'package.json', 'server.js', or 'app.js'.
The deployment failed."
Using all the previous answers, the following works for me with CodePipeline and Elastic Beanstalk:
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - cp -R node_modules/ dist/node_modules # Node.js needs this
      - cp Procfile dist/Procfile # EB needs this
artifacts:
  files:
    - "**/*"
  discard-paths: no
  base-directory: dist
Procfile
web: node main.js
The way it works is that CodeBuild handles the Nest build, and CodeDeploy only deploys what's in dist/.
Node will need node_modules, so post-build I copy it into dist/.
But Elastic Beanstalk complains that there's no package.json or any other file it recognizes in dist/, and since the Procfile usually sits at the root, I copy it too, so EB finds it and starts the app using node main.js.

"build": "nest build && cp package.json dist/",
"postbuild": "cd dist && zip ../dist.zip -r * .[^.]*",
"start": "node main.js"
Modifying just these three scripts works for me. A Procfile is not required: without one, Elastic Beanstalk falls back to npm start, which now runs node main.js.
Finally, I upload the dist.zip file that was generated in the root folder of the project to Amazon.

Change your artifacts to copy everything, so that CodeBuild copies your node_modules, package.json, and dist folder automatically.
artifacts:
  files:
    - "**/*"
Then, in your package.json, change the start command to run the production build by default:
"start": "node dist/main"
Here are my buildspec.yml and package.json, which work fine for a NestJS deployment on Elastic Beanstalk.

The mysterious part is this: where is sudo npm i -g @nestjs/cli coming from?
I'm also deploying to Elastic Beanstalk, but I only have web: npm run start:prod in my Procfile. The build happens in CI; for that I'm using GitHub Actions. The general outline of the workflow file is as follows:
Checkout
Setup Node
Install dependencies and build app
Generate deployment ZIP archive
Upload deployment archive to Elastic Beanstalk
I suggest you make your Procfile like mine, run the build script locally, generate the deployment archive using something like zip -r deployment.zip dist package* Procfile, upload it to Elastic Beanstalk, and see what the outcome is.
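For illustration, a sketch of a workflow file along those lines (the action versions, bucket, application, and environment names are placeholders, not the actual file):
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-1
    steps:
      # Checkout and setup Node
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      # Install dependencies and build app
      - run: npm ci && npm run build
      # Generate deployment ZIP archive
      - run: zip -r deployment.zip dist package* Procfile
      # Upload the archive to Elastic Beanstalk via S3 and the AWS CLI
      - run: |
          aws s3 cp deployment.zip s3://my-bucket/deploy-${{ github.sha }}.zip
          aws elasticbeanstalk create-application-version --application-name my-app \
            --version-label ${{ github.sha }} \
            --source-bundle S3Bucket=my-bucket,S3Key=deploy-${{ github.sha }}.zip
          aws elasticbeanstalk update-environment --environment-name my-env \
            --version-label ${{ github.sha }}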

Related

Docker with Serverless - files not getting packaged to container

I have a Serverless application using LocalStack, and I am trying to get it fully running via Docker.
I have a docker-compose file that starts LocalStack for me:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me, so that a single Docker command starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project with the following:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but am receiving the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. So I logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while running it. Try
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (include this copy step in the Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config). A sketch of the first option follows.
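A minimal sketch of the Dockerfile with the missing copy step (the /app working directory is an assumption):
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
WORKDIR /app
# copy the service, including serverless.yml, into the working directory
COPY . .
EXPOSE 3000
CMD ["sls", "deploy"]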
This command can only be run in a Serverless service directory. Make
sure to reference a valid config file in the current working directory
Be sure that you have Serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd into the folder with the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda
% sls deploy

Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want CI/CD set up to automatically run tests and abort on failure or if a certain quality gate isn't passed. If everything goes fine, the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've gotten to properly do.
I've yet to figure out how to deploy (even manually) the API to be able to send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyways, as it is, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And since you might also be interested in the build log related to the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the safe side. If you need any specific information, please let me know!)
I have an Elastic Beanstalk set up with the following properties:
I also have an EC2 instance up and running:
I also use a VPC.
What I've tried
I tried adding a bunch of entries into the inbound rules of my EC2's Security Group:
I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties of the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is looked for (I thought maybe I was supposed to point it to the EC2's public IPv4 address or its attached public DNS (IPv4), as illustrated below), but somehow the failing build log keeps displaying the failure to connect to localhost:9000 when trying to contact the SonarQube server.
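For illustration, this is the kind of override that was attempted in sonar-project.properties (the address is a placeholder):
# point the scanner at wherever the SonarQube server is actually reachable
sonar.host.url=http://<ec2-public-dns>:9000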
I've figured it out.
Somehow, SonarQube reports having started properly despite that not being true. Thus, when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. You have to go into the logs folder of the SonarQube installation folder and read the sonar.log file to figure out that something actually went wrong and that the server was stopped...
In my case, it reported an error that JDK 11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then I figured out that a new log file was now available to be read: es.log. Printing that file to the console revealed that the latest Elasticsearch version (which is used by the latest SonarQube server version) does not allow itself to be run by a root user. Thus, I had to create a new user group and edit a configuration file to run the server with that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers!

AWS Codedeploy No such file or directory

I have two problems deploying via AWS CodeDeploy.
I'm trying to deploy code from CodeCommit to an EC2 Ubuntu instance.
My appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  ApplicationStart:
    - location: scripts/ApplicationStart.sh
      timeout: 300
      runas: ubuntu
There are several config files that I need to place in the right spots in the application before starting pm2. I also assume that since I set runas in appspec.yml to ubuntu, the bash script will run at /home/ubuntu.
My /home/ubuntu has:
config/ backend/ frontend/
It looks like CodeDeploy won't overwrite the previous deployment, so if the backend/ and frontend/ folders are already in the directory, it fails at the Install stage.
In ApplicationStart.sh:
#!bin/bash
sudo cp config/config1.json backend/config/config1.json
sudo cp config/config2.json backend/config/environments/development/config2.json
sudo cp config/config3.json frontend/config3.json
sudo pm2 kill
cd backend
sudo npm install
sudo pm2 start "strapi start" --name backend
cd ../frontend
sudo npm install
sudo pm2 start "npm start" --name frontend
During the ApplicationStart stage, it gives me the following error:
LifecycleEvent - ApplicationStart
Script - scripts/ApplicationStart.sh
[stderr]bash: /opt/codedeploy-agent/path/to/deployment/scripts/ApplicationStart.sh: bin/bash:
bad interpreter: No such file or directory
I ran the same bash file at /home/ubuntu and it works fine.
Question 1.
- How do I run ApplicationStart.sh without the error? Is there a path problem, or am I trying to do something I'm not supposed to do?
Question 2.
- How can I get CodeDeploy to overwrite the previous deployment when the application folders are already in the directory (/home/ubuntu)?
- Do I manually delete the directory in the BeforeInstall stage?
You're missing a slash before bin/bash in #!bin/bash.
It should be #!/bin/bash.
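For completeness, here is the corrected header, plus a common pattern for Question 2 (clearing the previous files in a BeforeInstall hook; this is a sketch, and the paths are assumptions):
#!/bin/bash
# scripts/BeforeInstall.sh - common pattern: remove the previous deployment's
# folders so the Install stage can copy the new files without failing
rm -rf /home/ubuntu/backend /home/ubuntu/frontend
Such a hook would also need its own entry under hooks: in appspec.yml.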

How to pass a build artifact from GitLab CI to a Dockerfile?

I need a way to pass a job artifact from GitLab CI to a Dockerfile, so I can copy it into a directory. What is the path where this artifact is located?
Thank you!
Steps:
Use artifacts in the stage that produces the files.
Use dependencies to pass those artifacts to the current stage.
Then you can see the files from the Dockerfile.
For example, I run a Vue.js project; the main flow is:
Stage 1: build. Run npm run build:prod in the Vue.js project:
build-dist:
  stage: build-dist
  image: node
  script:
    - npm run build:prod
  artifacts:
    paths:
      - dist/
Stage 2: use dependencies:
build-docker:
  stage: build-docker
  image: docker:stable
  script:
    - docker build -t my-image . # tag and build context added; "my-image" is a placeholder
  dependencies:
    - build-dist
Stage 3: copy dist in the Dockerfile:
FROM fholzer/nginx-brotli
COPY ./dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
You should use dependencies; the docs also state that job artifacts are passed to the next jobs by default:
The artifacts from the previous jobs will be downloaded and extracted in the context of the build.
You can use RUN --mount=type=secret
Build images with BuildKit
Here is an example that shows how to mount credentials into a Dockerfile.
This Dockerfile (a FROM line is required; alpine is assumed here for illustration):
# syntax = docker/dockerfile:experimental
FROM alpine
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    cat /root/.aws/credentials
This is the CI command (BuildKit must be enabled for --secret to work):
$ DOCKER_BUILDKIT=1 docker build -t test --secret id=aws,src=$HOME/.aws/credentials .
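In a .gitlab-ci.yml job this might look like the following sketch (assuming the credentials are provided as a file-type CI variable named AWS_CREDENTIALS_FILE):
build-image:
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_BUILDKIT: "1" # BuildKit is required for --secret
  script:
    - docker build -t test --secret id=aws,src=$AWS_CREDENTIALS_FILE .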

appspec.yml failed to call scripts

I am trying to set up CI using AWS CodeDeploy and CircleCI. Right now I am stuck at the step where AWS CodeDeploy should copy stuff onto EC2 and run scripts, but somehow CircleCI tells me something is wrong. Does anyone know what might be happening? Thanks.
The appspec.yml is:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  BeforeInstall:
    - location: scripts/setup.sh
      timeout: 3800
      runas: root
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 3800
      runas: root
And setup.sh is:
#!/bin/bash
sudo apt-get install nodejs npm
npm install
In the above code I also tried just apt-get install nodejs npm, but it's still not working.
The error message in /var/log/aws/codedeploy-agent/codedeploy-agent.log is as follows:
2015-10-22 08:02:54 ERROR [codedeploy-agent(1314)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during
perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location:
./scripts/setup.sh run as user root failed with exit code 127 - /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:150:in `execute_script'
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:107:in `block (2 levels) in execute'
......
Exit code 127 generally means that the OS couldn't find something required to execute the command. In this case, it could be either that the script wasn't at the expected path or that /bin/bash doesn't exist (unlikely).
Check that the archive being produced by your build process is actually putting your scripts in the archive where your appspec expects them. scripts/setup.sh needs to be in that exact path within your archive.
You can also look at what the agent actually got by checking the deployment archive for your deployment, /opt/codedeploy-agent/deployment-root/deployment-group-id/deployment-id/deployment-archive, to make sure the archive is being extracted correctly.
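For instance, something along these lines on the instance (the IDs are placeholders):
# list the extracted archive to verify the script is where the appspec expects it
ls -l /opt/codedeploy-agent/deployment-root/<deployment-group-id>/<deployment-id>/deployment-archive/scripts/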
Do the following steps for debugging:
In the CodeDeploy error log /var/log/aws/codedeploy-agent/codedeploy-agent.log there is a line that says Error during perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location: scripts/setup.sh failed with exit code 1. So from the error log I know the problem might be with this script.
In the above-mentioned setup.sh, put something like this at the beginning of the script:
exec 3>&1 4>&2                    # save the original stdout and stderr
trap 'exec 2>&4 1>&3' 0 1 2 3     # restore them on exit or signal
exec 1>/home/ubuntu/out.log 2>&1  # redirect all output to a log file
This logs all the output, including errors, for you.
Permission issues
It's also possible that EC2 failed to execute those scripts; you need to make sure those files have at least 755 permissions when copied to your instance. So you need to commit your scripts with a 755 file mode.
How to change the File Mode on GitHub?
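One way to do this from the command line, assuming git (the script path is a placeholder):
# mark the script executable in the git index so the mode survives checkout
git update-index --chmod=+x scripts/setup.sh
git commit -m "Make setup.sh executable"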
Also, in appspec.yml you can specify a runas directive. It could be ubuntu or root or whatever gives you the correct permission.
Miscellaneous
There are some pitfalls when deploying. For example, when you run sudo apt-get install nodejs, there are intermediate prompts asking whether you want to install the packages and use the disk space, and you have to type Y or N to proceed with the installation. Those scripts hang there and time out, resulting in a failed deployment. So instead you do:
sudo apt-get -y install nodejs npm
Or maybe in your setup.sh script you have
chmod -R 777 public
but it's possible CodeDeploy is executing this code in a folder different from your project root. So make sure all the paths are valid.