appspec.yml failed to call scripts

I am trying to set up CI using AWS CodeDeploy and CircleCI. Right now I am stuck at the step where AWS CodeDeploy should copy files onto the EC2 instance and run the scripts, but CircleCI tells me something is wrong. Does anyone know what might be happening? Thanks.
The appspec.yml is:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  BeforeInstall:
    - location: scripts/setup.sh
      timeout: 3800
      runas: root
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 3800
      runas: root
and setup.sh is:
#!/bin/bash
sudo apt-get install nodejs npm
npm install
In the above code I also tried just apt-get install nodejs npm, but it's still not working.
The error message in /var/log/aws/codedeploy-agent/codedeploy-agent.log is as follows:
2015-10-22 08:02:54 ERROR [codedeploy-agent(1314)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Error during
perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location:
./scripts/setup.sh run as user root failed with exit code 127 - /opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:150:in `execute_script'
/opt/codedeploy-agent/lib/instance_agent/plugins/codedeploy/hook_executor.rb:107:in `block (2 levels) in execute'
......

Exit code 127 generally means the OS couldn't find something required to execute the command. In this case, either the script wasn't at the expected path, or /bin/bash doesn't exist (unlikely).
Check that the archive produced by your build process actually puts your scripts where your appspec expects them: scripts/setup.sh needs to be at that exact path within the archive.
You can also look at what the agent actually got by checking the deployment archive for your deployment: /opt/codedeploy-agent/deployment-root/deployment-group-id/deployment-id/deployment-archive to make sure the archive is being extracted correctly.
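For instance, on the instance itself you can list the extracted revisions (a quick sketch; the deployment-group and deployment ID path segments vary per deployment, hence the globs):

# List every extracted revision on this instance:
ls -d /opt/codedeploy-agent/deployment-root/*/*/deployment-archive
# Then check that the hook script really is where the appspec points:
ls -l /opt/codedeploy-agent/deployment-root/*/*/deployment-archive/scripts/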

Follow these steps to debug:
In the CodeDeploy error log /var/log/aws/codedeploy-agent/codedeploy-agent.log there is a line that says Error during perform: InstanceAgent::Plugins::CodeDeployPlugin::ScriptError - Script at specified location: scripts/setup.sh failed with exit code 1. So from the error log I know the problem is likely coming from this script.
In the above-mentioned setup.sh, put something like this at the beginning of the script:
exec 3>&1 4>&2                    # save the original stdout and stderr
trap 'exec 2>&4 1>&3' 0 1 2 3     # restore them when the script exits
exec 1>/home/ubuntu/out.log 2>&1  # send all output to a log file
This captures the script's entire output (stdout and stderr) in /home/ubuntu/out.log so you can inspect it after the deployment.
Permission issues
It's also possible that EC2 failed to execute those scripts. Make sure the script files have at least 755 permissions when copied to your instance, i.e. specify the 755 file mode for your scripts.
See: How to change the File Mode on GitHub?
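If you deploy straight from a Git repository, one way to make the executable bit stick (a minimal sketch; the paths match the appspec above) is:

git update-index --chmod=+x scripts/setup.sh scripts/start.sh
git commit -m "Mark deploy scripts executable"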
Also, in appspec.yml you can specify a runas directive for each hook. It could be ubuntu, root, or whatever user gives you the correct permissions.
Miscellaneous
There are some deployment pitfalls. For example, when you run sudo apt-get install nodejs, there are intermediate prompts asking whether you want to install the packages and use the disk space, and you have to type Y or N to proceed. In a deployment there is nobody to answer, so the script hangs at the prompt and times out, failing the deployment. So instead you run:
sudo apt-get -y install nodejs npm
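If a package still prompts during installation, a further belt-and-braces step (an aside, not from the original answer, and not required in most cases) is to make apt fully non-interactive:

export DEBIAN_FRONTEND=noninteractive  # suppress debconf prompts
sudo -E apt-get -y install nodejs npm  # -E keeps the variable across sudo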
Or maybe in your setup.sh script you have
chmod -R 777 public
but CodeDeploy may be executing this code in a folder different from your project root. Make sure all the paths in your scripts are valid.


How to codedeploy appspec.yml runas ubuntu user

I am using AWS CodeDeploy for a simple WordPress application. I installed the AWS codedeploy-agent on Ubuntu 20.04 with the help of the script below:
#!/bin/bash
apt update
apt install ruby -y
gem install bundler
git clone https://github.com/aws/aws-codedeploy-agent.git /opt/codedeploy-agent
sudo chown -R root.root /opt/codedeploy-agent
sudo chmod 644 /opt/codedeploy-agent/conf/codedeployagent.yml
sudo chmod 755 /opt/codedeploy-agent/init.d/codedeploy-agent
sudo chmod 644 /opt/codedeploy-agent/init.d/codedeploy-agent.service
cd /opt/codedeploy-agent
bundle install --system
rake clean && rake
cp /opt/codedeploy-agent/init.d/codedeploy-agent /etc/init.d/
systemctl daemon-reload
systemctl start codedeploy-agent
systemctl enable codedeploy-agent
I am using the appspec.yml below for code deployment. It's working fine with runas: root.
Questions:
How do I run it as the ubuntu user?
Is there any issue with running it as the root user?
....
appspec.yaml file
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
    overwrite: true
hooks:
  BeforeInstall:
    - location: scripts/before_install.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/setup_environment.sh
      timeout: 300
      runas: root
    - location: scripts/after_install.sh
      timeout: 900
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
  ValidateService:
    - location: scripts/validate_service.sh
      timeout: 300
With runas: ubuntu I get the error below.
Error code: ScriptFailed
Script name: scripts/setup_environment.sh
Message: Script at specified location: scripts/setup_environment.sh run as user ubuntu failed with exit code 4
LifecycleEvent - AfterInstall
Script - scripts/setup_environment.sh
[stderr]shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
[stderr]shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
[stderr]/opt/codedeploy-agent/deployment-root/44d6390b-485e-87ef-b50855bbf251/d-D0RTN7AR5/deployment-archive/scripts/setup_environment.sh: line 4: /var/www/html/.env: Permission denied
[stderr]sed: couldn't open temporary file /var/www/html/scripts/seTwGZAv: Permission denied
If you run it as the ubuntu user it will not work due to the lack of permissions you are experiencing:
couldn't open temporary file /var/www/html/scripts/seTwGZAv: Permission denied
The reason is that /var/www/html/ is not writable by the ubuntu user. To make it work you would have to change its default permissions, which is bad practice.
Some things have to be executed as root, unless you want to start changing the default configuration and permission model of the Ubuntu operating system.
Since the appspec.yml file and the scripts are managed by you, there is no security issue in running your scripts as root: what you write is what you get.
When using any non-root user, it is important to grant that user all the required permissions. In most cases you will have to use sudo before each command and make sure your user is added to sudoers.
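For instance, a sudoers drop-in could whitelist exactly the commands the deploy user needs (illustrative only; the service name and command path are assumptions):

# /etc/sudoers.d/deploy -- create with: visudo -f /etc/sudoers.d/deploy
ubuntu ALL=(root) NOPASSWD: /usr/bin/systemctl restart apache2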
You need to make sure that:
your Git repository is secure from any unauthorized changes;
CodeDeploy is only accessible to trusted resources.
If these two things are covered, it is very unlikely that an anomalous command can run on your system.

Running gcloud run deploy from inside Cloud Build results in error

I have a custom build step in Google Cloud Build which first builds a Docker image and then deploys it as a Cloud Run service.
This last step fails, with the following log output:
Step #2: Deploying...
Step #2: Setting IAM Policy.........done
Step #2: Creating Revision............................................................................................................................failed
Step #2: Deployment failed
Step #2: ERROR: (gcloud.run.deploy) Cloud Run error: Invalid argument error. Invalid ENTRYPOINT.
Step #2: [name: "gcr.io/opencobalt/silo@sha256:fb860e758eb1957b90ff3761fcdf68dedb9d10f832f2bb21375915d3de2aaed5"
Step #2: error: "Invalid command \"/bin/sh\": file not found"
Step #2: ]
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
The build step looks like this:
{"name":"gcr.io/cloud-builders/gcloud","args":["run","deploy","silo","--image","gcr.io/opencobalt/silo","--region","us-central1","--platform","managed","--allow-unauthenticated"]}
The image is built and exists in the registry, and if I change the last build step to deploy a Compute Engine VM instead, it works. That build step looks like this:
{"name":"gcr.io/cloud-builders/gcloud","args":["compute","instances",
"create-with-container","silo","--container-image","gcr.io/opencobalt/silo","--zone","us-central1-a","--tags","silo,pharo"]}
I can also build the image locally but run into the same error when running gcloud run deploy locally.
I am trying to figure out how to solve this problem. The image works, since it runs fine locally and when deployed as a Compute Engine VM; the error only shows up when I try to deploy the image as a Cloud Run service.
(Added) The Dockerfile looks like this:
######################################
# Based on Ubuntu image
######################################
FROM ubuntu
######################################
# Basic project infos
######################################
LABEL maintainer="PeterSvensson"
######################################
# Update Ubuntu apt and install some tools
######################################
RUN apt-get update \
&& apt-get install -y wget \
&& apt-get install -y git \
&& apt-get install -y unzip \
&& rm -rf /var/lib/apt/lists/*
######################################
# Have an own directory for the tool
######################################
RUN mkdir webapp
WORKDIR webapp
######################################
# Download Pharo using Zeroconf & start script
######################################
RUN wget -O- https://get.pharo.org/64/80+vm | bash
COPY service_account.json service_account.json
RUN export certificate="$(cat service_account.json)"
COPY load.st load.st
COPY setup.sh setup.sh
RUN chmod +x setup.sh
RUN ./setup.sh; echo 0
RUN ./pharo Pharo.image load.st; echo 0
######################################
# Expose port 8080 of Zinc outside the container
######################################
EXPOSE 8080
######################################
# Finally run headless as server
######################################
CMD ./pharo --headless Pharo.image --no-quit
Any advice warmly welcome.
Thank you.
After a lot of testing, I managed to get further. It seems that the missing /bin/sh file is a red herring.
I tried changing the startup command from CMD to ENTRYPOINT, since that was mentioned in the error, but it did not work. However, when I copied the startup instruction into a new file 'startup.sh' and changed the last line of the Dockerfile to:
ENTRYPOINT ./startup.sh
it did work. I needed to chmod +x the new file, of course, but the strange thing is that ENTRYPOINT ./pharo --headless Pharo.image --no-quit gave the same error, and even ENTRYPOINT ["./pharo", "--headless", "Pharo.image", "--no-quit"] gave the same error.
But having just one argument to ENTRYPOINT made Cloud Run work. Go figure.
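For reference, the arrangement that worked looks roughly like this (reconstructed from the description above; the startup.sh contents are assumed to simply mirror the original CMD):

#!/bin/sh
# startup.sh -- wraps the start command so ENTRYPOINT gets a single argument
./pharo --headless Pharo.image --no-quit

and at the end of the Dockerfile:

COPY startup.sh startup.sh
RUN chmod +x startup.sh
ENTRYPOINT ./startup.sh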
It appears that Google Cloud Run has a dislike for the ubuntu:20.04 image. I have the exact same problem with a Play framework application.
The command
ENTRYPOINT /opt/play-codecheck/bin/play-codecheck -Dconfig.file=/opt/codecheck/production.conf
failed with
error: "Invalid command \"/bin/sh\": file not found"
I also tried
ENTRYPOINT ["/bin/bash", "/opt/play-codecheck/bin/play-codecheck", "-Dconfig.file=/opt/codecheck/production.conf"]
and was rewarded with
error: "Invalid command \"/bin/bash\": file not found"
The trick of putting the command in a shell script didn't work for me either. However, when I changed
FROM ubuntu:20.04
to
FROM ubuntu:18.04
the image deployed. At this point, that's an acceptable fix for me, but it seems like something that Google needs to address.
See also:
Unable to deploy Ubuntu 20.04 Docker container on Google Cloud Run
My workaround was to use a CMD directive that calls Python directly rather than a shell (either /bin/sh or /bin/bash). It's working well so far.
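In Dockerfile terms that means an exec-form CMD that names the interpreter directly, so no shell is involved (a sketch; the script name is illustrative):

# Exec form: the container starts Python itself, never /bin/sh
CMD ["python3", "main.py"]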

CodeDeploy hooks running scripts in the agent installation folder

So, I'm setting up my first application that uses CodeDeploy (EC2 + S3) and I'm having a very hard time figuring out how to run the scripts after installation.
I defined an AfterInstall hook in the AppSpec file referring to my bash script file in the project directory. When the commands in the script run, I get errors stating the files could not be found. So I put an ls command before everything else and checked the logs.
My script file is running in the CodeDeploy agent folder. There are many files there that I accidentally created while testing, but I was expecting them to be created in my project root folder.
--Root
----init.sh
----requirements.txt
----server.py
appspec.yml
version: 0.0
os: linux
files:
  - source: ./
    destination: /home/ubuntu/myapp
    runas: ubuntu
hooks:
  AfterInstall:
    - location: init.sh
      timeout: 600
init.sh
#!/bin/bash
ls
sudo apt install python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py
So when ls is executed, it doesn't list the files in my project root directory. I also tried ${PWD} instead of ./ and it didn't work. The agent is copying the script file to its own folder and running it there.
Refer to this: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
This is written at the end of that document:
The location of scripts you specify in the 'hooks' section is relative to the root of the application revision bundle. In the preceding example, a file named RunResourceTests.sh is in a directory named Scripts. The Scripts directory is at the root level of the bundle.
But apparently that refers to the paths in the appspec file only.
Could someone help? Is this correct? Must I use absolute paths hard-coded in the script file?
Yes, correct. The script doesn't execute in the destination folder, as you might expect. You need to hard-code a reference to the destination directory /home/ubuntu/myapp to resolve file paths in lifecycle scripts.
Use cd to change the directory first:
cd /home/ubuntu/myapp
ls
sudo apt install python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py
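If you want to keep the hard-coded path in one place and fail fast when a command errors, a slightly hardened init.sh might look like this (a sketch; APP_DIR just mirrors the appspec destination):

#!/bin/bash
set -e                       # abort on the first failing command
APP_DIR=/home/ubuntu/myapp   # must match 'destination' in appspec.yml
cd "$APP_DIR"
sudo apt install -y python3-pip
pip3 install -r requirements.txt
python3 server.py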

Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want a CI/CD setup that automatically runs the tests and aborts on failure or when a certain quality gate isn't passed. If everything goes fine, the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've managed to get working properly.
I've yet to figure out how to deploy (even manually) the API to be able to send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyways, as it is, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####  media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And since you might also be interested in the build log for the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the safe side. If you need any specific information, please let me know!)
I have an Elastic Beanstalk environment set up and an EC2 instance up and running (their configuration screenshots are omitted here). I also use a VPC.
What I've tried
I tried adding a bunch of entries to the inbound rules of my EC2 instance's security group. I started from 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties in the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is expected to be hosted (I thought maybe I was supposed to point them to the EC2 instance's IPv4 public IP or its attached public DNS (IPv4)), but the failing build log kept reporting the failure to connect to the SonarQube server on localhost:9000.
I've figured it out.
Somehow, SonarQube reports having started properly even when that is not true. So when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. You have to go into the logs folder of the SonarQube installation and read the sonar.log file to discover that something was actually wrong and that the server had stopped.
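A quick way to check, run from the bin/linux-x86-64 folder (the bundled wrapper script also understands a status command):

sh ./sonar.sh status      # asks the wrapper whether the server is really running
cat ../../logs/sonar.log  # the authoritative record of why it stopped, if it did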
In my case, it reported an error that JDK11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then I discovered that a new log file was now available to read: es.log. Printing that file to the console revealed that the latest Elasticsearch version (which is used by the latest SonarQube server) does not allow itself to be run by the root user. Thus, I had to create a dedicated user and group and edit the wrapper configuration to run the server as that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      #####  media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers!

AWS Codedeploy No such file or directory

I have two problems deploying via AWS CodeDeploy.
I'm trying to deploy CodeCommit code to an EC2 Ubuntu instance.
Here is my appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu
hooks:
  ApplicationStart:
    - location: scripts/ApplicationStart.sh
      timeout: 300
      runas: ubuntu
There are several config files that I need to place in the right locations in the application before starting pm2. I also assume that, since I set runas to ubuntu in appspec.yml, the bash script will run from /home/ubuntu.
My /home/ubuntu contains:
config/ backend/ frontend/
It looks like CodeDeploy won't overwrite the previous deployment, so if the backend/ and frontend/ folders already exist in the directory, it fails at the Install stage.
In ApplicationStart.sh:
#!bin/bash
sudo cp config/config1.json backend/config/config1.json
sudo cp config/config2.json backend/config/environments/development/config2.json
sudo cp config/config3.json frontend/config3.json
sudo pm2 kill
cd backend
sudo npm install
sudo pm2 start "strapi start" --name backend
cd ../frontend
sudo npm install
sudo pm2 start "npm start" --name frontend
During the ApplicationStart stage, it gives me the following error:
LifecycleEvent - ApplicationStart
Script - scripts/ApplicationStart.sh
[stderr]bash: /opt/codedeploy-agent/path/to/deployment/scripts/ApplicationStart.sh: bin/bash:
bad interpreter: No such file or directory
If I run the same bash file manually from /home/ubuntu, it works fine.
Question 1.
- How do I run BeforeInstall.sh without the error? Is it a path problem, or am I trying to do something I'm not supposed to?
Question 2.
- How can I let CodeDeploy overwrite the previous deployment when there are already application folders in the directory (/home/ubuntu)?
- Do I manually delete the directory at the BeforeInstall stage?
You're missing a slash before bin/bash in #!bin/bash.
It should be #!/bin/bash. That would also explain why the script works when you run it by hand: if you invoke it as bash ApplicationStart.sh, the interpreter line is ignored as a comment, whereas CodeDeploy executes the file directly, so the malformed shebang fails with "bad interpreter: No such file or directory".