CodePipeline script_start failed: Script does not exist at specified location - amazon-web-services

I am getting the error below when I start a deployment.
Script name: scripts/server_stop.sh
Message: Script does not exist at specified location: /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/server_stop.sh
The Source and Build stages are fine.
This is the appspec I am using:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/KnCare-Rest
    file_exists_behavior: OVERWRITE
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  #BeforeInstall:
  #  - location: scripts/server_clear.sh
  #    timeout: 300
  #    runas: ec2-user
  ApplicationStart:
    - location: scripts/server_start.sh
      timeout: 20
      runas: ec2-user
  ApplicationStop:
    - location: scripts/server_stop.sh
      timeout: 20
      runas: ec2-user

Two things cause this issue:
1. You deployed to the ASG or EC2 instance and then added additional scripts for the hooks. In this case you need to restart the ASG or EC2 instance before adding the additional scripts.
2. You reused the same pipeline name for a different deployment. In this case you need to delete the input artifact in S3 before running the deployment, since AWS will not delete the artifacts even if you delete the pipeline (see the sketch below).
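For the second case, the leftover bundle can be removed with the CLI before re-running the pipeline. A minimal sketch; the bucket and pipeline names below are placeholders, so substitute the ones your pipeline actually writes to:

# List what the old pipeline left behind in its artifact store
aws s3 ls s3://codepipeline-us-east-1-123456789012/ --recursive

# Remove the stale input artifacts so the next run starts clean
aws s3 rm s3://codepipeline-us-east-1-123456789012/MyPipeline/ --recursive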

Did you SSH into the EC2 instance to verify that scripts/server_stop.sh exists and has execute permissions?
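A quick way to check, using the deployment-root path from the error message above (the IDs vary per deployment):

ssh ec2-user@<instance-ip>

# Look for the script inside the unpacked deployment archive
ls -l /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/

# If the file is present but not executable, add the execute bit
chmod +x /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/server_stop.sh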

Related

AWS DEPLOY: The overall deployment failed because too many individual instances failed deployment

I'm trying to deploy code automatically with AWS and I'm getting this error:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
My server runs Ubuntu 18.04. After some digging I installed codedeploy-agent, and its status is now active, but the error still happens.
Here is my appspec.yml:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/server-test/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
I'd appreciate any advice, thanks.
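Since that console message is generic, the per-instance agent log usually pins down the real failure. A minimal check on the instance, using the agent's standard log locations for a default install:

# Confirm the agent is actually running
sudo systemctl status codedeploy-agent

# The agent's own log names the lifecycle event that failed
sudo tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log

# Per-deployment script output lands here
sudo tail -n 100 /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log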

AWS Codedeploy failing to deploy on EC2 without any error logs

I am deploying a Spring Boot jar to an EC2 instance from CodePipeline using CodeBuild and CodeDeploy. CodeBuild works fine and the artifact is uploaded to S3.
When CodeDeploy starts, it stays in progress for 2-3 minutes and then fails at the first step with this error (screenshot of the error description omitted).
The CodeDeploy service is running fine and all the other setup looks good. I am also not seeing anything in the CodeDeploy logs on the EC2 instance. Please help me identify the issue.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user
hooks:
  AfterInstall:
    - location: fix_previleges.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: start_server.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: stop_server.sh
      timeout: 300
      runas: root
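When the console shows no detail and the instance shows no deployment logs, it is worth confirming the agent ever picked the deployment up, and asking the service what it recorded. A sketch (the deployment ID is a placeholder):

# Did the agent poll and receive the deployment at all?
sudo tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log

# Ask CodeDeploy which lifecycle event failed and why
aws deploy get-deployment --deployment-id d-XXXXXXXXX --query "deploymentInfo.errorInformation"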

AWS EC2 Bitbucket Pipeline is not executing the latest code deployed

I've followed all the steps for implementing a Bitbucket pipeline in order to have continuous deployment to AWS EC2. I've used the CodeDeploy application together with all the configuration that needs to be done on the AWS side. I'm using EC2 with Ubuntu and I'm trying to deploy a MEAN app.
As per bitbucket, I've added variables under "Repository variables" including:
S3_BUCKET
DEPLOYMENT_GROUP_NAME
DEPLOYMENT_CONFIG
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
and also I've added three required files:
1. codedeploy_deploy.py - which I got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default
2. appspec.yml -
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/aok
permissions:
  - object: /home/ubuntu/aok
    owner: ubuntu
    group: ubuntu
hooks:
  AfterInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
3. bitbucket-pipelines.yml -
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y python-dev
          - curl -O https://bootstrap.pypa.io/get-pip.py
          - python get-pip.py
          - pip install awscli
          - python codedeploy_deploy.py
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/aok.zip --ignore-hidden-files
          - aws deploy create-deployment --application-name $APPLICATION_NAME --s3-location bucket=$S3_BUCKET,key=aok.zip,bundleType=zip --deployment-group-name $DEPLOYMENT_GROUP_NAME
On the Pipelines tab in Bitbucket, pushing code shows a Successful message, and when I download the latest version from S3 the changes I pushed are there. The problem is that the website does not show the new changes; it still serves the initial version I cloned before implementing the pipeline.
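One way to narrow this down is to confirm which revision the deployment group last deployed; if it points at an old bundle, the push is working but the redeploy is not. A sketch with the CLI (the application and group names stand in for your repository variables):

# Which revision does CodeDeploy consider current for this group?
aws deploy get-deployment-group --application-name $APPLICATION_NAME --deployment-group-name $DEPLOYMENT_GROUP_NAME --query "deploymentGroupInfo.targetRevision"

# Inspect the most recent deployments and their status
aws deploy list-deployments --application-name $APPLICATION_NAME --deployment-group-name $DEPLOYMENT_GROUP_NAME --max-items 5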
This codedeploy_deploy.py script is no longer supported. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html

AWS can't deregister EC2 from ELB during deploy

I'm using CodeDeploy to deploy new code to my EC2 instances.
Here is my appspec file:
version: 0.0
os: linux
files:
  - source: v1
    destination: /somewhere/v1
hooks:
  BeforeInstall:
    - location: script/deregister_from_elb.sh
      timeout: 400
But I'm getting the following error:
LifecycleEvent - BeforeInstall
Script - v1/script/deregister_from_elb.sh
[stderr]Running AWS CLI with region: eu-west-1
[stderr]Started deregister_from_elb.sh at 2017-03-17 11:44:30
[stderr]Checking if instance i-youshouldnotknow is part of an AutoScaling group
[stderr]/iguessishouldhidethis/common_functions.sh: line 190: aws: command not found
[stderr]Instance is not part of an ASG, trying with ELB...
[stderr]Automatically finding all the ELBs that this instance is registered to...
[stderr]/iguessishouldhidethis/common_functions.sh: line 552: aws: command not found
[stderr][FATAL] Couldn't find any. Must have at least one load balancer to deregister from.
Any ideas why this is happening? I suspect the "aws: command not found" message could be the issue, but I have the AWS CLI installed:
~$ aws --version
aws-cli/1.11.63 Python/2.7.6 Linux/3.13.0-95-generic botocore/1.5.26
Thanks very much for your help
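A likely explanation, given the log above: CodeDeploy hooks run as root (or the configured runas user) with a minimal PATH, so a CLI installed for your login user, for example under /usr/local/bin, may not be found even though aws --version works interactively. A quick way to confirm and work around it (the paths are typical pip install locations; adjust to wherever your aws binary actually lives):

# Where is aws for the interactive user?
which aws    # e.g. /usr/local/bin/aws

# Does a non-interactive root shell see it?
sudo sh -c 'which aws || echo "not on PATH"'

# One common workaround: link it into a directory that is always on PATH
sudo ln -s /usr/local/bin/aws /usr/bin/aws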

Elastic Beanstalk Single Container Docker - use awslogs logging driver

I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I find answers to similar questions relating to ECS (not EB) suggesting to append ECS_AVAILABLE_LOGGING_DRIVERS with awslogs. But I don't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
As an Elastic Beanstalk Single Container environment saves stdout and stderr under /var/log/eb-docker/containers/eb-current-app/ by default, and as the new solution stack gives you the option to stream logs to CloudWatch, automating the configuration of the awslogs agent on the instances, what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and uses the already configured agent to stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS, as hooks may change from one solution stack version to another.
Regarding the error you are seeing ("logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)): this comes from how Docker works. When it is configured to send logs to a driver other than json-file or journald, it cannot display them locally because it does not keep a local copy.
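In other words, with the awslogs driver the output exists only in CloudWatch, so you read it there instead of via docker logs. A minimal sketch with the CLI, using the log group name from the sed command above (the stream name is a placeholder):

# Streams are usually named after the container ID
aws logs describe-log-streams --log-group-name dockerContainerLogs

# Fetch the most recent events from one stream
aws logs get-log-events --log-group-name dockerContainerLogs --log-stream-name <stream-name> --limit 50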
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permission in the EC2 role to be able to create the log group. You can see whether it is working by looking in /var/log/awslogs.log.
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
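To verify that streaming works after the environment update, two checks are usually enough (the group name follows the pattern configured above, so substitute your environment name):

# Agent side: permission errors and bad config entries show up here
sudo tail -n 50 /var/log/awslogs.log

# Service side: confirm the log group was created and is receiving events
aws logs describe-log-groups --log-group-name-prefix /aws/elasticbeanstalk/<environment-name>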