I am deploying a Spring Boot jar to an EC2 instance from CodePipeline, using CodeBuild and CodeDeploy. CodeBuild works fine and the artifact is uploaded to S3.
When CodeDeploy starts, it stays in progress for 2-3 minutes and then fails at the first step with this error.
The CodeDeploy service is running fine and all the other setup looks good. I'm also not seeing any logs in the CodeDeploy logs on the EC2 instance. Please help me identify the issue.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user
hooks:
  AfterInstall:
    - location: fix_previleges.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: stop_server.sh
      timeout: 300
      runas: root
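Since no logs are showing up on the instance, a first step is to confirm the agent itself is alive and to find its log before digging further. A minimal sketch, assuming the default agent log path on Amazon Linux (adjust the path for other distros):

```shell
# Print the tail of the CodeDeploy agent log if it exists; the default
# path below assumes Amazon Linux.
check_agent_log() {
    log="${1:-/var/log/aws/codedeploy-agent/codedeploy-agent.log}"
    if [ -f "$log" ]; then
        tail -n 50 "$log"
    else
        echo "agent log not found at $log"
    fi
}

# sudo service codedeploy-agent status   # is the agent actually running?
check_agent_log
```

If the agent log exists but stays silent during a deployment, the instance is often not being picked up by the deployment group at all (tag mismatch, or a missing IAM instance profile), which also produces the "in progress for a few minutes, then fail" pattern described above.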
Related
I'm trying to deploy code automatically with AWS and I'm getting this error:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
On my server I use Ubuntu 18.04. After finding this out, I installed codedeploy-agent and its status is now active, but the error still happens.
Here is my appspec.yml:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/server-test/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
I'd appreciate any advice, thanks.
I am getting the error below when I start a deployment.
Script name: scripts/server_stop.sh
Message: Script does not exist at specified location: /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/server_stop.sh
The Source and Build stages are fine. This is the appspec that I am using:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/KnCare-Rest
    file_exists_behavior: OVERWRITE
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  # BeforeInstall:
  #   - location: scripts/server_clear.sh
  #     timeout: 300
  #     runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 20
      runas: ec2-user
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 20
      runas: ec2-user
Two things cause this issue:
1. You deployed the scripts to an ASG or EC2 instance and then added additional scripts for the hooks. In this case you need to restart the ASG or EC2 instance before adding the additional scripts.
2. You reused the same pipeline name for a different deployment. In this case you need to delete the input artifact in S3 before running the deployment, since AWS will not delete the artifacts even if you delete the pipeline.
Did you SSH to the EC2 instance to verify that scripts/server_stop.sh exists and has execute permissions?
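A small sketch for that check, run on the instance (the deployment ids under deployment-root differ per deployment, so the glob below is illustrative):

```shell
# Report whether a hook script is present and executable in the
# unpacked deployment archive.
check_hook_script() {
    script="$1"
    if [ ! -f "$script" ]; then
        echo "missing: $script"
    elif [ ! -x "$script" ]; then
        echo "not executable: $script (fix with chmod +x)"
    else
        echo "ok: $script"
    fi
}

# Deployment ids vary; adjust the glob to the ids in the error message.
check_hook_script /opt/codedeploy-agent/deployment-root/*/*/deployment-archive/scripts/server_stop.sh
```

If the script reports "missing", compare the path in the error against the actual layout of the unzipped bundle in S3; a wrong folder name in the appspec's location is the usual cause.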
I've followed all the steps for implementing a Bitbucket pipeline in order to have continuous deployment to AWS EC2. I've used the CodeDeploy application tool together with all the configuration that needs to be done in AWS. I'm using EC2 with Ubuntu, and I'm trying to deploy a MEAN app.
As per Bitbucket, I've added variables under "Repository variables", including:
S3_BUCKET
DEPLOYMENT_GROUP_NAME
DEPLOYMENT_CONFIG
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
and I've also added the three required files:
codedeploy_deploy.py, which I got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default
appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/aok
permissions:
  - object: /home/ubuntu/aok
    owner: ubuntu
    group: ubuntu
hooks:
  AfterInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
bitbucket-pipelines.yml:
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y python-dev
          - curl -O https://bootstrap.pypa.io/get-pip.py
          - python get-pip.py
          - pip install awscli
          - python codedeploy_deploy.py
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/aok.zip --ignore-hidden-files
          - aws deploy create-deployment --application-name $APPLICATION_NAME --s3-location bucket=$S3_BUCKET,key=aok.zip,bundleType=zip --deployment-group-name $DEPLOYMENT_GROUP_NAME
On the Pipelines tab in Bitbucket, pushing code shows a Successful message, and when I download the latest version from S3 the changes I pushed are there. The problem is that the website is not showing the new changes; it still serves the initial version I cloned before implementing the pipeline.
This codedeploy_deploy.py script is no longer supported. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html
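A minimal bitbucket-pipelines.yml using the pipe might look like the sketch below. The pipe version tag is illustrative and the variable names are as I recall them from the pipe's README, so verify both against the current release; aok.zip matches the bundle name used above:

```yaml
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - zip -r aok.zip .   # bundle the app for CodeDeploy
          # version tag is illustrative; check the pipe repository for the current one
          - pipe: atlassian/aws-code-deploy:0.2.10
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'upload'
              APPLICATION_NAME: $APPLICATION_NAME
              ZIP_FILE: 'aok.zip'
              S3_BUCKET: $S3_BUCKET
          - pipe: atlassian/aws-code-deploy:0.2.10
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'deploy'
              APPLICATION_NAME: $APPLICATION_NAME
              S3_BUCKET: $S3_BUCKET
              DEPLOYMENT_GROUP: $DEPLOYMENT_GROUP_NAME
              WAIT: 'true'
```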
I want to have a test stage in CodePipeline. To do that, I created a CodeDeploy action as a stage of the pipeline. The appspec.yml is:
version: 0.0
os: linux
files:
  - source: test
    destination: /mycodedeploy/test
hooks:
  AfterInstall:
    - location: test/run_test.sh
      timeout: 3600
The CodeDeploy stage completes successfully, except I do not see the test result of test/run_test.sh in the AWS console. Where can I see a test result like the following?
"Ran 1 test in 0.000s
OK"
You won't be able to see the logs from your script in the AWS console unless you configure your instance to publish the logs to CloudWatch.
You should be able to see the logs on the host here: /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log. If you don't publish them to CloudWatch, you'll have to manually look on the host. Here's more information on CodeDeploy agent logging.
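For publishing to CloudWatch, a config fragment for the CloudWatch agent along these lines would ship the deployment log (the log group and stream names here are placeholders to adapt):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log",
            "log_group_name": "codedeploy-deployments",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```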
I'm using CodeDeploy to deploy new code to my EC2 instances.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: v1
    destination: /somewhere/v1
hooks:
  BeforeInstall:
    - location: script/deregister_from_elb.sh
      timeout: 400
But I'm getting the following error:
LifecycleEvent - BeforeInstall
Script - v1/script/deregister_from_elb.sh
[stderr]Running AWS CLI with region: eu-west-1
[stderr]Started deregister_from_elb.sh at 2017-03-17 11:44:30
[stderr]Checking if instance i-youshouldnotknow is part of an AutoScaling group
[stderr]/iguessishouldhidethis/common_functions.sh: line 190: aws: command not found
[stderr]Instance is not part of an ASG, trying with ELB...
[stderr]Automatically finding all the ELBs that this instance is registered to...
[stderr]/iguessishouldhidethis/common_functions.sh: line 552: aws: command not found
[stderr][FATAL] Couldn't find any. Must have at least one load balancer to deregister from.
Any ideas why this is happening? I suspect the message "aws: command not found" could be the issue, but I have awscli installed:
~$ aws --version
aws-cli/1.11.63 Python/2.7.6 Linux/3.13.0-95-generic botocore/1.5.26
Thanks very much for your help
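One common explanation: hook scripts are executed by the CodeDeploy agent's root process with a minimal PATH, so an awscli installed per-user (e.g. by pip into a user's home or /usr/local/bin) may not be found even though aws --version works in your login shell. A small sketch to check from the hook's environment:

```shell
# Print where aws resolves from, or flag that it isn't on PATH; run this
# from inside a hook script to see the agent's view of PATH.
find_aws() {
    if command -v aws >/dev/null 2>&1; then
        command -v aws
    else
        echo "aws not on PATH for this user: $PATH"
    fi
}

find_aws
# If it's missing there, reference the binary by absolute path in the
# deregister script, or symlink it into a standard location, e.g.:
#   sudo ln -s /usr/local/bin/aws /usr/bin/aws
```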